Abstract: “I argue that there is an important sense in which all decisions are moral decisions, and I explore some implications of this insight (and its denial) for the design and human impacts of increasingly complex automated systems and emerging autonomous systems. When we think about automated systems, this insight is obscured by the social division of labor between designers and users. When we think about autonomous systems, it is obscured by a misplaced focus on moral dilemmas (e.g., trolley problems). I will discuss different roles for moral values in decision making (e.g., as filters on choice, as utilities, and as defaults), how those values are encoded in the social practices in which automated systems are embedded, and the deep challenges of making autonomous systems that can navigate those practices intelligently.”
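
The abstract's three roles for moral values in decision making (filters on choice, utilities, and defaults) can be made concrete with a minimal sketch of a decision procedure. This is an illustration, not the speaker's model: the options, features, and thresholds below are all hypothetical.

```python
"""Sketch of three roles moral values can play in an automated decision
procedure: as filters on choice, as utilities, and as defaults.
All names and numbers here are hypothetical illustrations."""

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Option:
    name: str
    harm: float      # estimated harm to others (hypothetical feature)
    benefit: float   # estimated benefit to the user (hypothetical feature)


def choose(
    options: list[Option],
    filters: list[Callable[[Option], bool]],
    utility: Callable[[Option], float],
    default: Optional[Option] = None,
) -> Optional[Option]:
    # 1. Values as filters: hard constraints that rule options out
    #    entirely. A filtered-out option is never weighed against the
    #    others, no matter how high its utility would have been.
    permissible = [o for o in options if all(f(o) for f in filters)]
    if not permissible:
        return default  # fall back rather than pick an impermissible option

    # 2. Values as utilities: trade-offs among the permissible options,
    #    with moral weight entering as part of the score.
    best = max(permissible, key=utility)

    # 3. Values as defaults: a pre-set choice (often invisible to the
    #    user) that decides when nothing clearly dominates it.
    if default is not None and utility(best) <= utility(default):
        return default
    return best


if __name__ == "__main__":
    options = [
        Option("a", harm=0.9, benefit=1.0),
        Option("b", harm=0.1, benefit=0.6),
    ]
    stay_put = Option("do-nothing", harm=0.0, benefit=0.0)
    pick = choose(
        options,
        filters=[lambda o: o.harm < 0.5],            # side-constraint: exclude high-harm options
        utility=lambda o: o.benefit - 2.0 * o.harm,  # weigh harm more heavily than benefit
        default=stay_put,
    )
    print(pick)  # Option "a" is filtered out despite its higher benefit
```

The point of the sketch is that each role shapes outcomes differently: filters make some options invisible, utilities make moral weight commensurable with other goods, and defaults decide silently whenever the system has no strong reason to act.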