Publication Date
August 2017
Abstract
The media and academic dialogue surrounding high-stakes decisionmaking by robotics applications has been dominated by a focus on morality. But the tendency to do so while overlooking the role that legal incentives play in shaping the behavior of profit-maximizing firms risks marginalizing the field of robotics and rendering many of the deepest challenges facing today’s engineers utterly intractable. This Essay attempts to both halt this trend and offer a course correction. Invoking Justice Oliver Wendell Holmes’s canonical analogy of the “bad man . . . who cares nothing for . . . ethical rules,” it demonstrates why philosophical abstractions like the trolley problem—in their classic framing—provide a poor means of understanding the real-world constraints robotics engineers face. Using insights gleaned from the economic analysis of law, it argues that profit-maximizing firms designing autonomous decisionmaking systems will be less concerned with esoteric questions of right and wrong than with concrete questions of predictive legal liability. Until such time as the conversation surrounding so-called “moral machines” is revised to reflect this fundamental distinction between morality and law, the thinking on this topic by philosophers, engineers, and policymakers alike will remain hopelessly mired. Step aside, roboticists—lawyers have this one.
Recommended Citation
Bryan Casey, Amoral Machines, or: How Roboticists Can Learn to Stop Worrying and Love the Law, 111 Nw. U. L. Rev. 1347 (2017), https://scholarlycommons.law.northwestern.edu/nulr/vol111/iss5/4.