Previous work has shown that people provide different moral judgments of robots and humans in the case of moral dilemmas. In particular, robots are blamed more when they fail to intervene in a situation in which they can save multiple lives but must sacrifice one person's life. Previous studies were all conducted with U.S. participants; the present two experiments provide a careful comparison of moral judgments among Japanese and U.S. participants. The experiments assess multiple ways in which cross-cultural differences in moral evaluations may emerge: in the willingness to treat robots as moral agents; the norms that are imposed on robots' behaviors; and the degree of blame that accrues to them when they violate the imposed norms. Even though Japanese and U.S. participants differ to some extent in their treatment of robots as moral agents and in the particular norms they impose on them, the two cultures show parallel patterns of greater blame for robots who fail to intervene in moral dilemmas.