Original: https://www.goodthoughts.blog/p/shuffling-around-expected-value

I keep thinking about this one, so I'm noting it down! It's a really powerful idea, and I think it would straight-up obligate a lot of people to agree with the conclusion. A lot of people would be internally inconsistent, arguing in bad faith, hypocritical, or some combination of the three if they rejected it. However, I'm not sure it's fully a knockdown against *all* counterarguments. I think it hinges on the somewhat sneaky assertion:

> If your reason for rejecting a higher-EV action is _solely_ that it has a higher probability of saving no-one at all, then _you are failing to be guided by moral consideration for the individuals whose lives are at stake_. You’re almost certainly doing something more selfish, like anticipating how you will feel bad and regretful when the die picks out a square that happens to be empty.

The Carlsmith essay on [Expected Utility Maximization](https://www.utilitarianism.net/guest-essays/expected-utility-maximization/) that Chappell references doesn't quite address this issue either; it focuses on getting someone to accept 'worse and worse' bets (by the lights of the risk-averse stance), showing them how far they've come, and then the same shuffling / identity / veil-of-ignorance approach.

I think there are conceivable reasons someone "pulling the lever" might want the people to spread themselves out over the boxes evenly - i.e. to be risk-averse.

1. You don't expect to be back in this position. This seems like the most straightforward reason, and it connects with the rationality of being averse to risk of ruin in things like the stock market. IIRC, there's a Kahneman-Tversky experiment (or a similar idea) about offering people cigarettes: they often accept, contrary to their other preferences.
   That's because this is clearly an exceptional circumstance - maybe you *would* naturally turn it down if it was your mate Danny offering you one every time you clocked in at work!

   One of the arguments *for* EUM in and around Effective Altruism is that taking a hits-based approach produces higher EU overall - 'as produced by the community' - and so "we" would like individual agents to be agnostic as to whether *they* are the particular agent that *hits*, or one of many that miss. It might be preferable to everyone involved that any agent finding itself in this kind of position *is the kind of agent* that would be risk-neutral, since over *many* such situations - really, as $n \rightarrow \infty$ - EU is maximised. So this also maybe depends on...

2. One's stance on timeless decision theory?
3. Pascal's Mugging.

And if you buy that risk aversion is reasonable in some cases, then we're just left to haggle over what "premium" we should be willing to "pay" (e.g., could we reasonably prefer a 99% chance of 10 over a 1% chance of 1000, and so on?). Chappell covers this somewhat in the section "A practical proviso", but how does that square with e.g. [Astronomical Cake](https://www.goodthoughts.blog/p/astronomical-cake) - after all, [The Trolley Problem is No Guide to Practical Ethics](https://www.goodthoughts.blog/i/143371414/the-trolley-problem-is-no-guide-to-practical-ethics), right?
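To make the "premium" question and the $n \rightarrow \infty$ point concrete, here's a minimal sketch. The payoff numbers are just the illustrative ones from the 99%-of-10 vs 1%-of-1000 example above, not anything from Chappell's or Carlsmith's formal setup:

```python
import random

# Hypothetical payoffs matching the "premium" example above:
#   A: 99% chance of saving 10, otherwise 0
#   B: 1% chance of saving 1000, otherwise 0
ev_a = 0.99 * 10      # = 9.9
ev_b = 0.01 * 1000    # = 10.0 -> B is the higher-EV bet, but far riskier

# The n -> infinity point: an agent who repeatedly takes the higher-EV
# bet B sees its running average converge on ev_b (law of large numbers),
# which is why "the community" might want each agent to be risk-neutral.
random.seed(0)
n = 100_000
avg_b = sum(1000 if random.random() < 0.01 else 0 for _ in range(n)) / n
# avg_b lands close to ev_b = 10.0 for large n
```

The one-shot case is exactly where this defence of risk-neutrality is weakest: if you don't expect to face the bet repeatedly (reason 1 above), the convergence argument never gets going.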