Wittgenstein argued that one cannot have a private language. And by “private language”, I don’t think he meant a language that only one person knew. Tolkien invented an Elven language. (Actually, he invented two such languages.) Before he shared them with the others, he was the only one who knew them, the original speakers having died thousands of years ago...
— Roderick Long, “The Moral Standpoint” (video)
I was thinking today about how people misuse the so-called Golden Rule of ethics.
The Golden Rule states: don’t do to others what you don’t want done to yourself. But a question can be asked: why not? What would motivate me not to? As Roderick Long puts it, “what is the force of that statement?” Oftentimes, when atheist moralists try to explain this rule, they say: the more you do X, the more likely it becomes that X will be done to you. E.g., if you murder, you make society more murderous and increase the probability of someone murdering you.
Of course, that’s nonsense. Statistically, it may work out at the level of a small community, but in a large community, the fact that I once steal someone’s wallet will not visibly increase the probability of my own wallet being stolen. Furthermore, what if I commit my immoral act in a way that nobody knows about (and such that it will never leak out)? Or what if I am sure that the same thing I am doing will never be done to me? Just because a Roman senator used slaves didn’t mean he made it more likely that he himself would become a slave. (Also, interestingly enough, when Africans who had been slaves were freed or escaped from slavery, they would, when given a chance, enslave other Africans. That is how the nation of Liberia was founded: by former American slaves coming to Africa and enslaving Africans from other tribes.)
Plus, such a definition of morality goes against the common perception of what morality is and against the definition of absolute morality. Such a “morality” is nothing but self-preservation. What atheists usually answer is: “Well, there is no such thing as absolute morality.” Again, I think that answer is nonsensical. Morality has to be absolute. One could say that there is no morality at all, but people are afraid to come out and say it that plainly (partly because they do not instinctively believe it themselves).
I think that, on the emotional level, the Golden Rule works by allowing you to imagine what it would be like to be on the receiving end of your act. By feeling emotionally disgusted at that, you can also feel emotionally disgusted at doing the same act to others. But, in my opinion, emotions are not a very solid foundation for morality.
Just because you feel a certain emotion, does that mean everyone should feel it? And even if everyone but one person feels it, why should everyone judge that person for not feeling it? With emotions being subjective reactions, how can they be a basis for proclaiming something objectively moral or immoral? I dislike cockroaches. I find the idea of eating cockroaches disgusting. But do I find it immoral, kashrus and vegetarian issues aside?
Although, actually, let’s not put them aside. Many people feel (specifically feel) that it is immoral to eat animals. They are disgusted by the idea of a human being eating another living being when he has a choice to eat plants. But many others don’t find eating animals disgusting in any way. (I, by the way, find the idea of eating very intelligent animals such as dolphins, whales or apes disgusting.)
The same goes for experimentation on animals. Even when the animals are treated humanely (even by the strictest definition of that word), these people believe that killing animals to find a cure for cancer or schizophrenia (research into one of which I am partially involved in right now) is as immoral as killing people to find a cure for cancer. But others don’t feel that way; in fact, they feel that not killing animals to save human lives is immoral. Are the people who feel a certain emotion superior to those who don’t? How do we figure out which of the emotions is right? (Is there even such a thing as a “right” emotion?)
What if I said that I feel disgusted by the idea of robbing one group of people to help another group out? Many people certainly feel disgusted by this (even though they believe in private, voluntary charity). But many others feel disgusted by the government not robbing the rich to help the poor; in fact, some of them feel disgusted by the very idea that there are rich and poor at all, and feel that the rich should be made equally poor, as was done in Russia. Others find the idea of a bunch of thugs taking away the savings someone worked hard all his life to accumulate even more disturbing.
What are we to do with all these conflicting emotions? Let’s imagine a person incapable of feeling emotions, who agrees that he is deficient in this way but honestly tries to figure out what the moral thing to do is in each situation (let’s grant him just enough emotion to care). How is he to figure out what the right thing is? Should he take a poll?
I think the proper application of the Golden Rule is as follows. Suppose one already, for whatever reason, believes in the existence of absolute morality; i.e., he believes there is such a thing as good and evil. And not necessarily as a result of believing in G-d: libertarians, for example, claim to believe in absolute morality; they don’t deny the existence of G-d, but they don’t necessarily base their moral beliefs on G-d. (By the way, I think that, whatever one says, most people’s view of morality is still absolute, at least in the Western world. Of course, that could be due to the cultural influence of Christianity.) This idea also exists in Judaism: that regarding some (or all) issues of morality, people should be able to figure out what is moral and what is not without G-d telling us.
So, in that case, one could say: what is good for you is also good for me. (Not in the same instance. Meaning: if being healthy is a good thing for you, then being healthy is a good thing for me too; it is not that your being healthy automatically brings me good.) So a simple way to figure out whether something is good is to try it on yourself. If it is something you would define as bad for yourself, then it is also bad for someone else, unless you can demonstrate that there is an objective difference between you. (E.g., eating peanuts may be a bad thing for me if I am allergic to them, but not for you if you’re not. Of course, then you could abstract and say: if having an allergic reaction is bad for me, it’s also a bad thing for you.)
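The “try it on yourself” test above can be sketched as a tiny decision procedure. This is purely illustrative: the predicates `is_bad_for` and `objective_difference` are hypothetical placeholders I am introducing, not anything from the original argument; the point is only the structure of the inference.

```python
# Illustrative sketch (my own toy formalization, not a claim about the text):
# an act I would call bad for myself is presumed bad for you as well,
# unless an objectively relevant difference between us explains away the harm.

def golden_rule_verdict(act, me, you, is_bad_for, objective_difference):
    """Return True if the act should be judged bad when done to `you`."""
    if not is_bad_for(act, me):
        return False  # I don't consider it bad for myself, so this test is silent.
    if objective_difference(act, me, you):
        return False  # e.g., peanuts harm only the allergic person.
    return True  # bad for me, no relevant difference, so bad for you too.

# The peanut example from the text: the allergy is the objective difference.
allergic = {"allergic_to_peanuts": True}
not_allergic = {"allergic_to_peanuts": False}

def is_bad_for(act, person):
    return act == "eat peanuts" and person["allergic_to_peanuts"]

def objective_difference(act, a, b):
    return act == "eat peanuts" and a["allergic_to_peanuts"] != b["allergic_to_peanuts"]

print(golden_rule_verdict("eat peanuts", allergic, not_allergic,
                          is_bad_for, objective_difference))  # False: the allergy differs
```

The abstraction move mentioned in the parenthesis (lifting from “eating peanuts” to “having an allergic reaction”) would correspond to redescribing the act so that the objective difference disappears.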
Notice that in this case I am not using emotions to justify morality. I already know (from whatever source; again, I am not clear on this) that there is such a thing as good and evil, and that both good and evil can be absolute (or, as the philosophers would say, agent-neutral, as with non-private languages). Emotions merely help me identify some particular event or object as good or evil. It’s the same as being able to tell that fish has gone bad by smelling it. What if someone doesn’t have a good sense of smell? Well, he can still agree that there is such a thing as fish going bad; he just can’t use his nose to identify it happening. In the same way, a psychopath could still agree to the idea of good and evil; he just couldn’t use the Golden Rule for easy identification of what they were. (Although a psychopath probably knows when something is bad for him, so he might still use the Golden Rule in such a case. Of course, his problem may be that even if he knew something was evil, he just wouldn’t care.)
A number of problems can be pointed out. For instance, if I am running for political office, I wouldn’t like to lose the election, and neither would the person I am running against. So, should I just let him win? If you don’t like the example of political elections (a libertarian could argue that a majority oppressing a minority through political means is immoral, just as it would be immoral for Windows users to force Apple users to “come to the light” and start using PCs; why can’t there be multiple law systems in a society, just as there are multiple OSs?), you can use the example of competing in a market. I certainly wouldn’t want people to stop buying my product when a newer and better product is introduced to the market. So, if I can introduce a better product, which will reduce the amount of business for someone supplying an older product, is that immoral?
Of course, one could answer that the amount of good I do by supplying the product well outweighs the amount of bad, but this is already a utilitarian approach, which fails both for praxeological reasons and by the very definition of absolute morality (if killing one person benefits a million people, or if exterminating one particular ethnicity benefits all the others, should we do it?).
So, I suppose I am still thinking about it. One answer could be that the Golden Rule is not such a useful rule after all.
Alternatively, one could say (again quoting Roderick Long) that “if I consider my pursuit of my well being as legitimate for me, I have to view your pursuit of your well being as legitimate for you”, and that there is a difference between me pursuing my well-being without actively trying to harm you (which may happen if we are competing for the same scarce resources) and me actively trying to harm you.
Meaning, importantly, that both rules (about the legitimacy of my pursuit of well-being and of your pursuit of well-being) are true at the same time, and therefore cannot be placed in contradiction with each other. It is ethically permissible for me to pursue my well-being, but not in a way that harms your pursuit of yours.
How to deal with situations where the two seem to be in contradiction is what the libertarian view of ethics addresses by basing ethics on the concept of property rights: I don’t have a right to take what’s already yours, but I do have a right to take something which doesn’t belong to you, even though by doing so I am precluding you from using it yourself. Of course, this leads to a conflict of rights, but since one of us has to win (both of us cannot use the same object at the same time; i.e., the object is scarce), let it be the one who homesteaded the object first. But in cases where there is no scarcity (such as with intellectual “property”), it is immoral for me to use force to preclude you from using the non-scarce resource, since that goes against your legitimate right to pursue your well-being (which, in this case, is not in contradiction with my rights).
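The conflict-resolution rule above can likewise be sketched as a toy decision rule. This is a minimal sketch under two simplifying assumptions of my own: a resource is described only by whether it is scarce and by who (if anyone) homesteaded it first; the names `may_use`, `field`, and `song` are hypothetical.

```python
# Toy sketch of the property-rights rule described in the text:
# a scarce resource goes to whoever homesteaded it first; a non-scarce
# resource (the text's example: intellectual "property") excludes no one,
# so everyone may legitimately use it.

def may_use(resource, claimant):
    """Return True if `claimant` may legitimately use `resource`."""
    if not resource["scarce"]:
        return True  # non-scarce: no conflict of rights arises.
    return resource["first_homesteader"] == claimant  # scarce: first homesteader wins.

field = {"scarce": True, "first_homesteader": "alice"}   # only one person can farm it
song = {"scarce": False, "first_homesteader": "alice"}   # copying it deprives no one

print(may_use(field, "alice"))  # True: she homesteaded it first
print(may_use(field, "bob"))    # False: the field is scarce and already Alice's
print(may_use(song, "bob"))     # True: the song is non-scarce
```

The asymmetry between the field and the song is exactly the scarcity distinction the paragraph draws: only for the scarce object does the question “who got there first?” need to be asked at all.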