Is it possible to model subjective, other-regarding preferences such as altruism and fairness? We stumbled upon this fascinating blog post by Dom Galeon on “Crowdsourced Morality.” Among other things, Galeon links to a five-page paper titled “Moral Decision Making Frameworks for Artificial Intelligence” by Vincent Conitzer, Walter Sinnott-Armstrong, Jana Schaich Borg, Yuan Deng, and Max Kramer, researchers at Duke University. Previous literature has used insights from game theory to probe problems in ethics, such as how to get mutual cooperation or reciprocal altruism off the ground. What’s novel about this paper, however, is that it inverts the direction: it uses ethics to explore game theory. The paper also introduces the idea of “moral solution concepts.” Here is an excerpt:
In traditional game theory’s defense, it should be noted that an agent’s utility may take into account the welfare of others, so it is possible for altruism to be captured by a game-theoretic account. However, what is morally right or wrong also seems to depend on past actions by other players. Consider, for example, the notion of betrayal: if another agent knowingly enables me either to act to benefit us both, or to act to benefit myself even more while significantly hurting the other agent, doing the latter seems morally wrong…. The key insight is that to model this phenomenon, we cannot simply first assess the agents’ other-regarding preferences, include these in their utilities at the [leaves of a game tree or decision nodes of an extensive form game], and solve the game (as in the case of pure altruism). Rather, the analysis of the game (solving it) must be intertwined with the assessment of whether an agent morally should pursue another agent’s well-being. This calls for novel moral solution concepts in game theory.
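To make the “pure altruism” case concrete, here is a minimal sketch of the kind of game the excerpt describes: a two-stage trust game solved by backward induction, where each player’s utility includes a weight on the other player’s material payoff. The payoffs, function names, and the specific altruism weight are illustrative assumptions, not taken from the paper.

```python
# Illustrative two-stage trust game (payoffs are hypothetical, not from the paper).
# Player 1 chooses to keep or trust; if trusted, Player 2 chooses to share or betray.
# "Pure altruism" is modeled by folding a weighted copy of the other player's
# material payoff into each player's utility at the leaves, then solving the
# game by backward induction, exactly as the excerpt says a traditional
# game-theoretic account would.

def utility(own, other, altruism_weight):
    """Other-regarding utility: own payoff plus a weighted share of the other's."""
    return own + altruism_weight * other

def solve_trust_game(altruism_weight):
    # Material payoffs (p1, p2) at the leaves of the game tree:
    keep = (1, 1)    # Player 1 keeps: both get a small sure payoff
    share = (2, 2)   # Player 2 shares the larger pie
    betray = (0, 3)  # Player 2 betrays: hurts Player 1, gains more herself

    # Backward induction, step 1: Player 2 compares utilities at her node.
    u2_share = utility(share[1], share[0], altruism_weight)
    u2_betray = utility(betray[1], betray[0], altruism_weight)
    p2_choice = "share" if u2_share >= u2_betray else "betray"
    outcome = share if p2_choice == "share" else betray

    # Step 2: Player 1 anticipates Player 2's choice and decides whether to trust.
    u1_trust = utility(outcome[0], outcome[1], altruism_weight)
    u1_keep = utility(keep[0], keep[1], altruism_weight)
    p1_choice = "trust" if u1_trust >= u1_keep else "keep"
    return p1_choice, p2_choice

# A selfish Player 2 betrays, so Player 1 never trusts; a sufficiently
# altruistic Player 2 shares, which makes trusting worthwhile.
print(solve_trust_game(0.0))  # selfish players
print(solve_trust_game(0.6))  # altruistic players
```

Note what this sketch deliberately cannot express, which is the paper’s point: whether betrayal is morally wrong depends on the fact that Player 1’s trust *enabled* Player 2’s choice. That history-dependence cannot be baked into leaf utilities fixed before solving; assessing it must be intertwined with solving the game, which is what motivates the authors’ “moral solution concepts.”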