A theory is a G-theory iff:
- states of the world outside the subject's mind can make a subject's life better, in the sense that the fact that they obtain explains why the subject is better off. They can be the ground of prudential value. (More formally: if X is a state of the world consisting of [the obtaining of P, where P is not a mental state of A; A's experience of P], and X is good/bad for A, then it can be in virtue of the obtaining of P that X is good/bad for A.)
- the theory endorses the Experience Requirement, namely: the impact on our well-being of some state of the world is entirely determined by features of the world the subject is conscious of. (More formally: if X is a state of the world, and X is good/bad for the subject A, and it is in virtue of P obtaining that X is good/bad for A (where P obtaining is not a mental state of A), then the state of the world X must include A's experience of P.)
I think that although G-theories can be logically coherent, they cannot represent an authentic ethical alternative between Mental Statism (the view according to which the only things that are good for A are A's mental states) and welfare "externalism" (the view according to which states of the world of which the subject is not aware can intrinsically make a subject better or worse off). I will argue that, whatever its details, a theory that qualifies as a G-theory boils down to a view that has, at most, the same ethical attractions as Mental Statism.
A defender of a G-theory will argue that one of the strongest arguments against Mental Statism is Nozick's Experience Machine argument. Since a G-theory may include goods that are not mental states (e.g. agency, a functioning body, real achievements), a G-theory can withstand the Experience Machine objection. (As I show at the end of this long post.)
First of all, like Mental Statism, the Experience Requirement imposes a strong constraint upon which things can be good for a subject and which cannot. For example, the Experience Requirement implies that accomplishing something in one's life does not intrinsically make one's life better (as I argue here). NOTE 1. But there is a bigger problem. The Experience Requirement licenses a practical "out of sight, out of mind" attitude that is responsible for the ethical convergence of G-theories and Mental Statism. Here is a SLIPPERY SLOPE argument for this claim.
Consider selective memory and biased perception (of the opinions of other people, of our strengths and weaknesses, etc.). Psychologists tell us that our minds regularly indulge in these processes, to a certain extent; and it may be a good thing for our well-being that they do. Selective memory and selective attention fall into the psychological category of "self-deception." I think "self-deception" is an apt term for them, because to my eye it is clear that selective attention or memory can be as deceiving as real delusions.
A G-theory implies that a subject ends up better off by forgetting or failing to perceive facts that are bad for him (provided the subject can be sure that the strategy will not backfire). This is because the Experience Requirement says that something cannot make us worse off if we are not aware of it. But if we push this strategy too far, the subject ends up in a world that is not so different from a delusion. Consider the following EXAMPLE:
- George suffers from great anxiety. One day he sees an advertisement for the "Experience Machine" program by the M.A.T.R.I.X. corporation. He understands that they may offer him an alternative to his painful life, namely a life connected to a virtual reality lacking situations that could trigger his anxiety. He finds himself hesitating at the prospect of connecting himself to the machine forever, so the experts at M.A.T.R.I.X. recommend a different product to him, the Attention Deflection Machine. The Attention Deflection Machine consists of a scanner and a sound synthesizer. The scanner examines the world around George, looking for situations that may trigger his anxiety. The synthesizer creates sounds that deflect George's attention away from the potential source of distress.
- George finds this device much more agreeable than the Experience Machine. After all, he will be able to carry on with his life almost as before, inhabiting his own body, perceiving it, and perceiving reality. The only difference is that this reality is not the whole reality; but who knows everything anyway?
- George goes home, and the machine enables him to ignore the fact that his cat is dying, until he finally forgets about it (the maid takes the cat's dead body away). He also stops thinking about the fact that he was recently fired and had to take a job that pays less than the previous one. He no longer notices the signs that the people in his new work environment don't like him. And he feels happier than before. It is not that he positively believes that his cat is healthy, that his new job is good compared to the old one, and that his colleagues love him. He just fails to realize that the opposite is true.
- Something similar happens with George's perception of his own body. Of course, he does not get the sensations he would get if he possessed a younger, stronger, and more beautiful body. Nor does he cease to feel his body and the painful or annoying sensations it produces. But he cannot grasp their existential meaning. For example, he does not realize that his body is aging, getting weaker and less beautiful, partly because he no longer looks at himself in mirrors. When pains are too strong, he treats the problem and forgets about it as soon as the pain is over.
In this example we have extreme forms of "selective attention," which from the point of view of G-theories are certainly good for George. (We may assume that George had no hope of improving his job situation, and that the cat could not be helped.) I think that many of the reasons one can have not to plug into the Experience Machine apply to the Attention Deflection Machine as well. The results of using the two machines are very similar: one way or another, you end up living in a dream.
My own position is that, just like the Experience Machine, the Attention Deflection Machine would intrinsically make a normal person's life worse for him, in that it fakes reality. In the case of the Experience Machine this is simply more evident. (Moreover, one may dislike in particular the idea of losing control of one's body. But this is not all that is wrong with the Experience Machine.)
(Of course, the Attention Deflection Machine can be in the overall interest of a person like George: in pathological cases, the advantage gained by depriving the person of full awareness of reality may compensate for this loss, as is the case with the drugs usually prescribed for such conditions.)
The ethical objections against G-views and Mental Statism are therefore analogous: both theories deny the prudential value of living in authentic contact with reality. I conclude that, even if Mental Statism and G-views differ formally, from the ethical point of view they are substantially the same. G-theories do not deserve to be distinguished from Mental Statism as a relevant alternative in the ethical domain.
1. Summing up what I argue there: the notion of accomplishment has to be reworked to such an extent in order to be coherent with the Experience Requirement that we end up talking about something completely different. Although I have not examined the issue in detail, I suspect that something similar is true of other "externalist" goods, such as truth or control.