To sum up a bit, I shall call a "G-theory" of welfare any theory that (1) holds that what makes a state (of the world) good for a subject A is something other than its implying the occurrence of (certain sorts of) mental states in A, and (2) accepts the experience requirement.
There are at least two forms of the experience requirement:
1) the impact on A's well-being of some state of the world is entirely determined by features of the world A is conscious of;
2) all states of affairs that are good for a person A necessarily include both something valuable and A's awareness of it.
Here I shall be discussing the weaker form.
My point is that, even though all G-theories differ from Mental Statism (the view according to which the only things that are good for a person are his or her mental states), G-theories still exclude too many states from contributing intrinsically to welfare, and in this respect they remain very much in the spirit of Mental Statism.
Consider the following example, which I've read on Richard's blog.
Molly the Mathematician. Suppose Molly spent her whole life trying to prove a fiendishly difficult theorem. She finally thinks she's achieved it, and has some other mathematicians check her proof. Molly receives their answer, believes it wholeheartedly, then dies the next day. It is later discovered that she was told the wrong answer.
Which of the following scenarios is better for Molly?
1) The mathematicians (mistakenly) tell her the proof is flawed, when in fact it is correct.
2) The mathematicians (mistakenly) tell her the proof is correct, when in fact it is flawed.
I think that 1 is better for Molly, given her convictions and her values. Molly achieved something valuable, namely making an enduring contribution to human knowledge, both from her point of view and objectively. (Notice that in case 1 the proof's validity is eventually recognized, and therefore it contributes to human knowledge.)
Yet anyone who agrees with my intuitions about Molly cannot accept the Experience Requirement. The Experience Requirement implies that Molly's achievement of something valuable cannot make her life better, because her achieving the (valuable) aim she has striven for is not a fact of which she is aware.
Therefore, in a G-theory in which (let us suppose) achievement is intrinsically prudentially valuable but pleasure is not, 1 and 2 come out as indifferent: 2 has no value because it lacks the achievement, and 1 has no value because the achievement lies outside Molly's experience.
Or take this other example, which I also read on Richard's blog (who read it somewhere else):
Imagine a mad scientist kidnaps you and your family, and offers you the following two options:
(1) He will let your family live in a pleasant but secluded captivity, but you will be made to believe (e.g. through hypnosis, or whatever) that they were all tortured and killed.
(2) He will torture and kill your family, but you will be made to believe that they are safe and well in a pleasant but secluded captivity.
After making the choice, all recollection of the bargain will be erased from your memory.
In this case as well, any G-theory will force you to say that, if your choice should take into account only what is good for you, then 1 and 2 are either indifferent, or 2 is better than 1.
(A G-theory in which pleasure is intrinsically good, or in which certain effects of pleasure can be, would rank 2 over 1; one in which pleasure is not a good may rank the two as indifferent. But the point here is that no G-theory will ever rank 1 over 2.)
What do I think about such G-views?
Taking it for granted that a G-view is coherent from the logical point of view, it fails to be coherent in an ethical sense. A G-theory will allow things other than mental states to contribute intrinsically to welfare, or to what is good for a person. But the ethical evaluations of a person who endorses a G-theory sound like those of a value-dyslexic!
A rational egoist who is a G-theorist, whom we shall call Gian, should choose to take a drug that allows her to forget the harm inflicted by her children, because the evil contained in that state of affairs cannot be bad for her unless she is aware of it. Yet she would say that what has value for her is the fact that her children's lives go well or badly, not how she believes those lives to be. This is logically coherent. But since Gian claims that the disvalue of that state of the world for her life derives from the feature < my children's life going badly > and not at all from the feature < my having awareness of my children's life going badly >, her choice seems a very odd way to respect the value she recognizes!
Also, it seems strange that Gian, who would choose hypnosis in order to forget a fact that is potentially bad for her, would nonetheless not allow a hypnotist to make her believe in a fact that never took place, and would refuse to hook up to an Experience Machine!
This attitude seems to me to lack coherence, the sort of coherence we expect in an evaluator. We all know that the coherence of a theory, a view, an attitude, or a character requires something more than the mere absence of logical contradiction. Gian stops short of logical incoherence, but she looks like a value-dyslexic.
(A state similar to dyslexia, in the sense that she cannot manage to put together the deliverances of her "moral perception" in an orderly and integrated fashion.)
The attitude I have called dyslexic can be summarized by the two mottos:
1) "x makes my life go better intrinsically, but not if I don't know about it"
2) "x makes my life go worse intrinsically, but not if I don't know about it"
It seems to me that if we ever heard someone say such a thing, we would ask:
"what the hell do you mean when you say that X contributes to your welfare intrinsically?"
"Are you sure it is not the case that X makes your life go better/worse extrinsically, in proportion to its contribution to your experiences?"
The burden of answering the first of these questions lies on anyone who would argue in favor of such a theory.
Another way to show the oddity of a G-theory is by means of a slippery-slope argument, which shows that it can collapse into something very similar to Mental Statism. I'll deal with this in my next post.