Moral Uncertainty
Maximizing Expected Choiceworthiness (MEC) seems reasonable on its face, but I have four worries.
(1) I’m not sure that the best reasons we have to maximize expected value in cases of empirical uncertainty generalize to the case of moral uncertainty. For example, Law of Large Numbers arguments don’t apply if the event that you’re uncertain about is by its nature unrepeatable.
(2) It’s a very strict constraint on a moral theory to say it must rate all possible actions one might take with a single real number. Is the goodness of donating to charity according to Christianity greater than or less than two utils? How does it compare to the goodness of cultivating the Aristotelian virtue of continence? I worry that intertheoretic value comparisons like these just aren’t meaningful.
(3) Even if I have sharp credences on empirical propositions (which I’m not sure I do), I definitely don’t have sharp credences on moral theories, so any practicable theory of choice under moral uncertainty has to interact somehow with fuzzy Bayesianism.
(4) MEC strongly biases you toward insatiable value systems if you live in a large world.
[1] Thanks to Duncan McClements for discussing this with me.
Suppose I entertain two moral theories $T_1$ and $T_2$. According to $T_1$, the goodness (choiceworthiness, if you like) of an act that makes $n$ patients well off is directly proportional to $n$. According to $T_2$, the goodness of such an act is instead some bounded non-decreasing function of $n$. Supposing there are an indefinitely large number of patients, and the physical laws of my world are such that my actions can influence an appreciable fraction of them, MEC will almost always tell me to follow $T_1$. Even if I put most of my credence on $T_2$, the expected choiceworthiness of any act will be dominated by its goodness under $T_1$ in the large-$n$ limit.
[2] This doesn’t change if, in addition to your normative uncertainty, you also have empirical uncertainty about the size and laws of your world. Unless you totally dismiss the hypothesis that the world is large, MEC will tell you to do what $T_1$ would recommend in a large world.
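To make the dominance concrete, here is a small numeric sketch (my own toy numbers, not the author's): $T_1$ values an act at $n$, $T_2$ at a function bounded above by 100, and even a 1% credence in $T_1$ swamps a 99% credence in $T_2$ once $n$ is large.

```python
import math

def t1(n):
    # T_1: choiceworthiness directly proportional to n (constant of proportionality 1)
    return n

def t2(n):
    # T_2: a bounded non-decreasing function of n; this particular
    # saturating curve (capped at 100) is an illustrative choice
    return 100 * (1 - math.exp(-n / 50))

def mec(n, credence_t1=0.01):
    # Expected choiceworthiness with only 1% credence in T_1
    return credence_t1 * t1(n) + (1 - credence_t1) * t2(n)

for n in [10, 1_000, 1_000_000]:
    print(n, round(mec(n), 1))
```

The $T_2$ term can never contribute more than 99 to the expected choiceworthiness, while the $T_1$ term grows without bound, so for large enough $n$ the MEC ranking of acts just is the $T_1$ ranking.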
Should this recommendation count against MEC? Maybe not, but it is a bit counterintuitive. Given our current scientific knowledge, it seems likely that we really do live in a large world, but it also seems acceptable to follow satiable moral theories, such as SFE and partialist ethics. Fully accepting MEC would take these moral theories off the table for most practical purposes.
Reading
- MacAskill, Bykvist, and Ord’s *Moral Uncertainty*
- Moorhouse’s review of *Moral Uncertainty*
- Gustafsson’s defense of My Favorite Theory
Last updated 29 November 2024