Amid the latest Notre-Dame-related bout of criticism of EA, someone tweeted something along the lines of the following version of the Drowning Child thought experiment:
> You are walking along in your new expensive suit, and see a drowning child. You decide to preserve your suit and sell it for funds to save children from dying of malaria.
The author followed up with something somewhat self-congratulatory, then a backhanded compliment encouraging EAs to continue giving.
Richard Y Chappell had already taken an interesting angle on this, which seems to run along the following lines (I still need to read his full piece on this):
What if the person walking along has such vivid moral intuitions that they can actually see the (let's say three) other children dying of malaria alongside the drowning child?
The point here is that some amount (all?) of the (average person's?) intuitive negative reaction to the original formulation comes down to salience: the thing in front of the person has been made immediate to the audience, while the alternative has not. Put another way, it's a rejection of coldness; if you posit (and take seriously) an agent whose reaction _is not born of cold logic_, that changes (or should change) the intuition.
I think that's a strong response, and importantly correct. Still, something about the tweeted sell-the-suit version feels fishy to me, and the fact that it's essentially meant as a 'gotcha' adds to that. Is there some sleight of hand occurring? Is something being smuggled in?
Going back to Singer's original thought experiment, my understanding of its point is that common-sense intuition puts quite a high value on taking the action of saving the drowning child. There is a simple action available, with high confidence and high counterfactual impact, and even if it is quite costly (the price of replacing the shoes or suit or what have you), there is a moral imperative to take it. Then it turns out that you can actually take cheaper actions in your everyday life (in the West at least) that do essentially the same thing as saving the drowning child, provided you can discount physical distance (which is the rest of Singer's project).
Quibbling over the price of the suit, or over the correct valuation of the child's life within the thought experiment, I think somewhat misses the point, though these are real and uncomfortable implementation details out in the world, where there are genuine trade-offs to be made. The point is that the value to me of the dollars spent on the expensive suit is intuitively lower than the value of those dollars to many other people.
Let's take that as the core and look at the sell-the-suit version.
- Common-sense intuition says that the drowning child is worth the cost of the suit.
- The tweet author says: well, actually, what if the suit could be sold for enough to save >1 child?
- QED: if you're taking the principles seriously, you should not save the drowning child in front of you, and should instead sell the suit _after the fact_ (this is implicit!).
- Implication: this is monstrous and no one should/would do this; a contradiction exists; EA is wrong.
Ah-hah: what's essentially being smuggled in here is time. Let's take that seriously, then, and consider our agent at _all three_ time periods instead: t-1 (before the drowning child), t0 (at the drowning child) and t1 (after the drowning child). (There's a toy numerical sketch after the list below.)
- If you have the expensive suit at t0, you should not save the 1 DC in front of you, so that at t1 you can sell the suit and save >1 DC.
- So we know that at t1 we should be saving as many DC as possible.
- By backward induction, we should just not buy the suit at t-1, and give that money directly to saving >1 DC.
- Therefore, at t0 we won't be in the expensive suit, so we can safely save the DC in front of us, _and_ save the >1 DC elsewhere.
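To make the backward-induction step concrete, here's a minimal toy sketch in Python. The numbers are pure assumptions for illustration (a made-up `children_per_suit = 3` for how many deaths a suit's worth of money averts, per the tweet's ">1" premise); the only point is that the never-buy-the-suit policy dominates both suit-buying policies.

```python
# Toy model of the three-period argument. Everything here is an
# illustrative assumption, not part of the original thought experiment:
# a suit's worth of money (resold at t1, or never spent at t-1) funds
# `children_per_suit` malaria interventions, and jumping in at t0
# ruins the suit if you are wearing it.

children_per_suit = 3  # assumed >1, as the tweet stipulates

def children_saved(buy_suit: bool, save_dc: bool) -> int:
    """Count children saved under a policy (buy suit at t-1?, save DC at t0?)."""
    saved = 0
    if save_dc:
        saved += 1  # the drowning child at t0
    if buy_suit and save_dc:
        pass  # suit ruined at t0: nothing left to sell at t1
    else:
        saved += children_per_suit  # suit money goes to malaria interventions
    return saved

policies = {
    "buy suit at t-1, save DC at t0 (suit ruined)": (True, True),
    "buy suit at t-1, walk past at t0, sell at t1": (True, False),
    "never buy suit, donate at t-1, save DC at t0": (False, True),
}

for name, (buy, save) in policies.items():
    print(f"{name}: {children_saved(buy, save)} children saved")

# Output: 1, 3 and 4 respectively -- the no-suit policy dominates.
```

The specific numbers don't matter, only the ordering: as long as the suit is worth >1 saved child, never buying it beats both of the tweeted options.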
I claim that the sell-the-suit thought experiment is predicated on an incoherent agent. It relies on the agent only having the choice to sell the suit _after_ being confronted with the DC, which is a sort of Saul-to-Paul epiphany on the road to Damascus. In essence this is just the whole point of the original thought experiment again: you should be a certain type of moral agent, and it happens that that kind of moral agent wouldn't actually end up facing the dilemma to begin with, because the common-sense-morality answer reveals an inconsistency in how people operate day-to-day.
---
Revisiting this a little later:
My line of reasoning does not invalidate Singer's original, essentially because Singer doesn't posit an EA agent, while the sell-the-suit version does. My reading of Singer's is that the person in the nice suit was, in some sense, morally permitted to buy the suit in the first place, but _common-sense_ morality says that the drowning child is worth more and should be saved. But hang on: why shouldn't _this_ agent have sold the suit to begin with? Well, because of the inconsistency in common-sense morality here:
- Person buying suit spent $X. No child in eyesight could be saved for $X, so that's fine.
- Walking along, the person now has a child in eyesight who is worth >$X.
- Person should sacrifice suit.
Singer's whole point is that the contradiction lies in the _eyesight_ piece, not anywhere else. So our agent actually erred in buying the suit to begin with, and we show this by intuition-pumping them: presenting them with the drowning child. _Preserving the intuition pump_, rather than preserving any other aspect of the original setup, is basically the cheat, I think.
Overall I think some of the other aspects of the experiment can be challenged on empirical grounds: maybe saving the child generates PR worth more than the cost of the suit, so it's net-positive EV; maybe there's empirical uncertainty about exactly how many lives you save in expectation with $X, so it's a case of 'a bird in the hand is worth two in the bush'; various others. Obviously you can modify the thought experiment to preclude those, at which point there are bullets I would bite, but ultimately I don't feel like I got presented with a real bullet in the sell-your-suit formulation.
(I think I'm still not fully articulating something here...)