Discussion about this post

Silas Abrahamsen

Interesting theory! I missed responding on the first post, but I thought I'd add some thoughts.

First, it doesn't seem like we have to solve the monoculture problem in our population ethics. The core of the problem rather seems to be whether you're an "internalist" about welfare--i.e., whether someone's welfare depends only on their internal states, or whether it also depends on what's going on around them. If you are an internalist, then you can just tile the world with intrinsic duplicates of whoever is best off. But if not, that isn't guaranteed.

Consider, for instance, an objective list theory which says that relationships intrinsically count towards welfare, and which holds that relationships require some difference in the people related (you'd not have as valuable a relationship with a clone of yourself). In that case, we couldn't just tile the world with copies of happy people and be done with it, but we would have to have communities or something like that.

Now, at a large enough scale, you could find "units" that you could tile with. But that doesn't strike me as nearly as counterintuitive as the original worry! If we copied an upgraded version of the earth over and over again, that's not obviously worse than a more pluralistic world.

I don't think this is the same as denying separability. We can formulate separability as the claim that how much someone at a fixed level of welfare contributes to the total good doesn't depend on external circumstances. But this is a claim about whether the welfare itself depends on external circumstances. I find it very intuitive that how much a given level of welfare counts doesn't depend on external circumstances, but it's not nearly as obvious that the welfare itself doesn't.

(I'd also just register that I'm generally less comfortable with giving up abstract intuitions like separability, than with more concrete judgements like aversion to the repugnant conclusion or monoculture. Fwiw.)

Second, if the universe is infinitely large, and sufficiently random or varied, then presumably every realizable experience-type is instantiated infinitely many times. But on Saturationism, the marginal value of a type diminishes toward a limit as more instances occur. So it seems that every region of experience-space would already be fully saturated.

If that is right, then it seems that either the world already has maximal possible value, or at least that no finite local action can add value.

In either case, the view seems to collapse into a kind of evaluative paralysis in sufficiently rich infinite universes. You mention that the saturation view handles infinite ethics well, but I'm not sure I see how it handles this. Am I missing something?

Every theory of population ethics has difficulties with infinity. But this problem seems especially severe for Saturationism because the theory is non-separable: whether something good happens here depends intrinsically on what exists elsewhere in the universe. Under more separable views, we can still say “this local change is good,” even if global comparisons between infinite worlds are problematic. Saturationism appears to lose even that.

I'll note that I haven't read the longer writeup on the theory, so you may very well have good things to say to all of these worries.
