Moral Uncertainty
Sometimes you need to make a morally loaded choice without having full knowledge of the moral facts. For example, consider a lizard farmer deciding how many lizards to raise this year. The farmer could create one hundred lizards or ten lizards, but either way, he has only one barn full of lizard feed to give them. If the lizard feed has to be split one hundred ways, each lizard will be chronically starved, and its life will be just barely worth living. But if the feed is split among ten lizards, each of them will be well-nourished and cheerful, with a life very much worth living. The twist is that the farmer is uncertain about population ethics. He entertains both total utilitarianism—which says he should raise one hundred lizards—and average utilitarianism—which says he should raise only ten. So what should he do?
One view is that in a situation like this, you should maximize expected choiceworthiness. For each available action, you determine how choiceworthy that action is according to each moral theory you entertain, multiply each choiceworthiness value by the credence you place on the corresponding theory, and add them all up. That gives you the (subjective) expected choiceworthiness of an action, and you simply choose the action whose expected choiceworthiness is highest.[1]
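To make the procedure concrete, here's a minimal sketch in Python applied to the lizard farmer's choice. The credences and utility figures are illustrative assumptions, not values from anywhere in the text; the point is just the credence-weighted sum.

```python
# A minimal sketch of maximizing expected choiceworthiness (MEC) for the
# lizard farmer. All credences and utility figures below are assumptions
# chosen for illustration.

# Credence in each moral theory (summing to 1).
credences = {"total_utilitarianism": 0.6, "average_utilitarianism": 0.4}

# Suppose a barely-worth-living life is worth 1 util and a flourishing life
# is worth 5 utils. Total utilitarianism sums welfare across lizards;
# average utilitarianism averages it.
choiceworthiness = {
    "raise 100 lizards": {"total_utilitarianism": 100 * 1, "average_utilitarianism": 1},
    "raise 10 lizards":  {"total_utilitarianism": 10 * 5,  "average_utilitarianism": 5},
}

def expected_choiceworthiness(action):
    """Credence-weighted sum of an action's choiceworthiness across theories."""
    return sum(credences[theory] * cw
               for theory, cw in choiceworthiness[action].items())

for action in choiceworthiness:
    print(f"{action}: {expected_choiceworthiness(action):.1f}")

# MEC: choose the action whose expected choiceworthiness is highest.
print("MEC recommends:", max(choiceworthiness, key=expected_choiceworthiness))
```

With these made-up numbers, total utilitarianism's verdict carries the day and MEC recommends raising one hundred lizards; shift enough credence toward average utilitarianism and the recommendation flips.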
Maximizing Expected Choiceworthiness seems reasonable on its face, but I have four worries.
(1) I'm not sure that the best reasons we have to maximize expected value in cases of empirical uncertainty generalize to the case of moral uncertainty. For example, Law of Large Numbers arguments don't apply if the event that you're uncertain about is by its nature unrepeatable.
(2) It's a very strict constraint on a moral theory to say it must rate all possible actions one might take with a single real number. Is the goodness of donating to charity according to Christianity greater than or less than two utils? How does it compare to the goodness of cultivating the Aristotelian virtue of continence? I worry that intertheoretic value comparisons like these just aren't meaningful.
(3) Even if I have sharp credences on empirical propositions (which I'm not sure I do), I definitely don't have sharp credences on moral theories, so any practicable theory of choice under moral uncertainty has to interact somehow with fuzzy Bayesianism.
(4) MEC strongly biases you toward insatiable value systems if you live in a large world.[2] Suppose I entertain two moral theories $T_1$ and $T_2$. According to $T_1$, the goodness—choiceworthiness, if you like—of an act that makes $n$ patients well off is directly proportional to $n$. According to $T_2$, the goodness of such an act is instead some bounded non-decreasing function of $n$. Supposing there are an indefinitely large number of patients, and the physical laws of my world are such that my actions can influence an appreciable fraction of them, MEC will almost always tell me to follow $T_1$. Even if I put most of my credence on $T_2$, the expected choiceworthiness of any act will be dominated by its goodness under $T_1$ in the large-$n$ limit.[3]
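The dominance is easy to see numerically. Below is a rough sketch with made-up numbers: an arbitrary saturation bound on $T_2$'s choiceworthiness and only 10% credence on $T_1$. As the number of patients grows, $T_1$'s term accounts for essentially all of the expected choiceworthiness.

```python
# A rough sketch of the large-world dominance worry. T1's choiceworthiness is
# linear in the number of patients n; T2's is bounded (here it saturates at
# 100, an arbitrary choice). Even with only 10% credence on T1, the
# credence-weighted sum is eventually driven almost entirely by the T1 term.

def cw_T1(n):
    return float(n)              # unbounded: directly proportional to n

def cw_T2(n):
    return min(float(n), 100.0)  # bounded, non-decreasing in n

credence_T1, credence_T2 = 0.1, 0.9

for n in (10, 1_000, 1_000_000):
    ec = credence_T1 * cw_T1(n) + credence_T2 * cw_T2(n)
    t1_share = credence_T1 * cw_T1(n) / ec
    print(f"n={n:>9,}  expected choiceworthiness={ec:>12,.1f}  T1 share={t1_share:.1%}")

# As n grows, T1's term swamps T2's bounded contribution, so MEC effectively
# tells you to act as T1 recommends.
```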
Should this recommendation count against MEC? Maybe not, but it is a bit counterintuitive. Given our current scientific knowledge, it seems likely that we really do live in a large world, but it also seems acceptable to follow satiable moral theories, such as SFE and partialist ethics. Fully accepting MEC would take these moral theories off the table for most practical purposes.
[1] Or any of them if there's a tie. See § 2.III in MacAskill, Bykvist, & Ord's Moral Uncertainty.
[2] Thanks to Duncan McClements for discussing this with me.
[3] This doesn't change if, in addition to your normative uncertainty, you also have empirical uncertainty about the size and laws of your world. Unless you totally dismiss the hypothesis that the world is large, MEC will tell you to do what $T_1$ would recommend in a large world.