Cognitive dissonance is a theory of human motivation that
asserts that it is psychologically uncomfortable to hold contradictory
cognitions. The theory is that dissonance, being unpleasant, motivates a
person to change his cognition, attitude, or behavior. This theory was first
explored in detail by social psychologist Leon Festinger, who described it
this way:
Dissonance and consonance are relations among cognitions, that is, among opinions, beliefs, knowledge of the environment, and knowledge of one's own actions and feelings. Two opinions, or beliefs, or items of knowledge are dissonant with each other if they do not fit together; that is, if they are inconsistent, or if, considering only the particular two items, one does not follow from the other (Festinger 1956: 25).
He argued that there are three ways to deal with cognitive
dissonance. He did not consider these mutually exclusive.
- One may try to change one or more of the beliefs, opinions, or behaviors involved in the dissonance;
- One may try to acquire new information or beliefs that will increase the existing consonance and thus cause the total dissonance to be reduced; or,
- One may try to forget or reduce the importance of those cognitions that are in a dissonant relationship (Festinger 1956: 25-26).
For example, people who smoke know smoking is a bad habit.
Some rationalize their behavior by looking on the bright side:
They tell themselves that smoking helps keep the weight down and that there is a greater threat to health
from being overweight than from smoking. Others quit smoking.
Most of us are clever enough to come up with
ad hoc hypotheses or
rationalizations to save cherished notions. Why we can't apply this cleverness
more competently is not explained by noting that we are led to rationalize
because we are trying to reduce or eliminate cognitive dissonance. Different
people deal with psychological discomfort in different ways. Some ways are
clearly more reasonable than others. So, why do some people react to
dissonance with cognitive competence, while others respond with cognitive
incompetence?
Cognitive dissonance has been called "the mind controller's
best friend" (Levine 2003: 202). Yet, a cursory examination of cognitive
dissonance reveals that it is not the dissonance, but how people deal with
it, that would be of interest to someone trying to control others when the
evidence seems against them.
For example, Marian Keech (real name: Dorothy Martin) was the leader of a UFO cult in
the 1950s. She claimed to get messages from
extraterrestrials, known as The Guardians, through
automatic writing. Like the
Heaven's Gate folks
forty years later,
Keech and her followers, known as The Seekers or The Brotherhood of the
Seven Rays, were waiting to be picked up by flying saucers. In Keech's
prophecy, her group of eleven was to be saved just before the earth was to
be destroyed by a massive flood on December 21, 1954. When it became evident
that there would be no flood and the Guardians weren't stopping by to pick
them up, Keech
became elated. She said she'd just received a telepathic message from the Guardians saying that her group of believers had spread so much light with their unflagging faith that God had spared the world from the cataclysm (Levine 2003: 206).
More important, the Seekers didn't abandon her. Most became more
devoted after the failed prophecy. (Only two left the cult when the world
didn't end.) "Most disciples not only stayed but, having made that decision,
were now even more convinced than before that Keech had been right all
along.... Being wrong turned them into true believers" (ibid.). Some people will go to bizarre lengths to avoid inconsistency
between their cherished beliefs and the facts. But why do people interpret the
same evidence in contrary ways?
The Seekers would not have waited for the flying saucer if they thought it
might not come. So, when it didn't come, one would think that a competent thinker would have seen
this as falsifying Keech's claim that it would come. However, the incompetent thinkers
were rendered incompetent by their devotion to Keech. Their belief that a
flying saucer would pick them up was based on faith,
not evidence. Likewise, their belief that the failure of the prophecy
shouldn't count against their belief was another act of faith. With this kind
of irrational thinking, it may seem pointless to produce evidence to try to
persuade people of the error of their ways. Their belief is not based on
evidence, but on devotion to a person. That devotion can be so great that even
the most despicable behavior by one's prophet can be rationalized. There are
many examples of people so devoted to another that they will rationalize or
ignore extreme mental and physical abuse by their cult
leader (or spouse or boyfriend). If the basis for a person's belief is irrational faith grounded in devotion to a powerful personality, then the only option that person seems to have, when confronted with evidence that should undermine her faith, is to continue being irrational, unless her faith was not that strong to begin with. The interesting question, then, is not about
cognitive dissonance but about faith. What was it about Keech that led some
people to have faith in her and what was it about those people that made them
vulnerable to Keech? And what was different about the two who left the cult?
"Research shows that three characteristics are related to persuasiveness:
perceived authority, honesty, and likeability" (ibid. 31). Furthermore, if a
person is physically attractive, we tend to like that person and the more we
like a person the more we tend to trust him or her (ibid. 57). Research also
shows that "people are perceived as more credible when they make eye contact
and speak with confidence, no matter what they have to say" (ibid. 33).
According to Robert Levine, "studies have uncovered surprisingly little
commonality in the type of personality that joins cults: there's no single
cult-prone personality type" (ibid. 144). This fact surprised Levine. When he
began his investigation of cults he "shared the common stereotype that most
joiners were psychological misfits or religious fanatics" (ibid. 81). What he
found instead was that many cult members are attracted to what appears to be a
loving community. "One of the ironies of cults is that the craziest groups are
often composed of the most caring people" (ibid. 83). Levine says of cult leader Jim
Jones that he was "a supersalesman who exerted most every rule of persuasion"
(ibid. 213). He had authority, perceived honesty, and likeability. It is
likely the same could be said of Marian Keech. It also seems likely that many
cult followers have found a surrogate family and a surrogate mother or father
or both in the cult leader.
It should also be remembered that in most cases people have not arrived at
their irrational beliefs overnight. They have come to them over a period of
time with gradually escalated commitments (ibid. chapter 7). Nobody would join
a cult if the pitch were: "Follow me. Drink this poisoned-but-flavored
water and
commit suicide." Yet, not everybody in Jones's cult drank the poison, and two of Keech's followers quit the cult when the prophecy failed. How were they
different from the others? The explanation seems simple: their faith in their
leader was weak. According to Festinger, the two who left Keech, Kurt Freund and Arthur Bergen, were lightly committed to begin with (Festinger 1956: 208).
Even people who erroneously think their beliefs are scientific may come
by their notions gradually and their commitment may escalate to the point of
irrationality. Psychologist Ray Hyman provides a very interesting example of
cognitive dissonance and how one chiropractor dealt
with it.
Some years ago I participated in a test of applied kinesiology at Dr. Wallace Sampson's medical office in Mountain View, California. A team of chiropractors came to demonstrate the procedure. Several physician observers and the chiropractors had agreed that chiropractors would first be free to illustrate applied kinesiology in whatever manner they chose. Afterward, we would try some double-blind tests of their claims.
The chiropractors presented as their major example a demonstration they believed showed that the human body could respond to the difference between glucose (a "bad" sugar) and fructose (a "good" sugar). The differential sensitivity was a truism among "alternative healers," though there was no scientific warrant for it. The chiropractors had volunteers lie on their backs and raise one arm vertically. They then would put a drop of glucose (in a solution of water) on the volunteer's tongue. The chiropractor then tried to push the volunteer's upraised arm down to a horizontal position while the volunteer tried to resist. In almost every case, the volunteer could not resist. The chiropractors stated the volunteer's body recognized glucose as a "bad" sugar. After the volunteer's mouth was rinsed out and a drop of fructose was placed on the tongue, the volunteer, in just about every test, resisted movement to the horizontal position. The body had recognized fructose as a "good" sugar.
After lunch a nurse brought us a large number of test tubes, each one coded with a secret number so that we could not tell from the tubes which contained fructose and which contained glucose. The nurse then left the room so that no one in the room during the subsequent testing would consciously know which tubes contained glucose and which fructose. The arm tests were repeated, but this time they were double-blind -- neither the volunteer, the chiropractors, nor the onlookers was aware of whether the solution being applied to the volunteer's tongue was glucose or fructose. As in the morning session, sometimes the volunteers were able to resist and other times they were not. We recorded the code number of the solution on each trial. Then the nurse returned with the key to the code. When we determined which trials involved glucose and which involved fructose, there was no connection between ability to resist and whether the volunteer was given the "good" or the "bad" sugar.
When these results were announced, the head chiropractor turned to me and said, "You see, that is why we never do double-blind testing anymore. It never works!" At first I thought he was joking. It turned out he was quite serious. Since he "knew" that applied kinesiology works, and the best scientific method shows that it does not work, then -- in his mind -- there must be something wrong with the scientific method. (Hyman 1999)
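To make the unblinding step concrete, here is a minimal sketch, in Python, of the kind of tally the observers could have run once the nurse returned with the code key. The trial counts are invented for illustration (Hyman does not report raw numbers in the passage above); the point is only what "no connection" looks like once the codes are matched to outcomes.

```python
# Hypothetical tally of the double-blind trials Hyman describes.
# The counts are invented for illustration; the quoted passage does
# not report how many trials were run or their raw outcomes.
glucose = {"resisted": 7, "yielded": 8}    # "bad" sugar trials
fructose = {"resisted": 8, "yielded": 7}   # "good" sugar trials

def resist_rate(tally):
    """Fraction of trials on which the volunteer resisted the push."""
    return tally["resisted"] / (tally["resisted"] + tally["yielded"])

print(f"glucose resist rate:  {resist_rate(glucose):.2f}")   # 0.47
print(f"fructose resist rate: {resist_rate(fructose):.2f}")  # 0.53
# Roughly equal rates across the two sugars are what "no connection
# between ability to resist and which sugar was given" means: the
# outcome is independent of the substance on the tongue.
```

Under the applied-kinesiology claim, the glucose trials should have been nearly all "yielded" and the fructose trials nearly all "resisted"; the double-blind data showed nothing of the kind.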
What distinguishes the chiropractor's rationalization from the cult
member's is that the latter is based on pure faith and devotion to a guru or
prophet, whereas the former is based on evidence from experience. Neither belief can be falsified, because the believers won't let it be falsified: nothing is allowed to count against it. Those who base their beliefs on experience
and what they take to be empirical or scientific evidence (e.g., astrologers, palm
readers, mediums,
psychics, the intelligent design
folks, and the chiropractor) make a pretense of being willing to test
their beliefs. They only bother to submit to a test of their ideas to get
proof for others. That is why we refer to their beliefs as
pseudosciences. We do not refer to the beliefs
of cult members as pseudoscientific, but as faith-based irrationality.
There is scant evidence that the
chiropractors Wally Sampson and Ray Hyman tested take the
stand they do in order to relieve cognitive dissonance. They didn't just reject the results of a single test; they rejected scientific testing altogether in favor of what they think they
know from personal experience. Why? Because they consider personal
experience
superior to double-blind controlled experiments. Why? To avoid
having to deal with cognitive dissonance? What evidence is there that
these chiropractors were made the least bit uneasy by holding a
belief that
conflicts with the rest of the scientific community? If a person is
made
psychologically uncomfortable by contradictory cognitions, shouldn't
there
be some way to measure this discomfort, such as a rise in the level
of
cortisol or other stress hormones? Has anyone defending cognitive
dissonance
ever measured stress hormones being aroused by dissonant beliefs or
relieved
by rationalization? The chiropractors' misguided belief
is probably not due to worrying about their self-image or removing
discomfort. It is more likely due to their being arrogant and
incompetent
thinkers, convinced by their experience that they "know" what's
going on,
and probably assisted by
communal reinforcement from the like-minded arrogant and incompetent
thinkers they work with and are trained by. They've seen with their own eyes how applied kinesiology works. They've demonstrated it many times. If anything makes them
uncomfortable it might be that they can't understand how the world can be so
full of idiots who can't see with their own eyes what they see!
To return to Festinger's own
example, what is gained by saying that the two who left the cult had a
light commitment to begin with? How is commitment measured? Do those who
see the light and change their mind when the evidence contradicts their belief have a light belief? If we apply Occam's razor to the theory of
cognitive dissonance, is there anything left after we explain how anyone
deals with beliefs that conflict with the evidence by the more familiar
concepts of changing one's mind in light of new evidence, rationalization, self-deception, irrational faith,
confirmation bias, overestimation of one's
intelligence and abilities, and the like?
I don't think so. We shouldn't forget that some people, when
confronted with strong evidence against cherished beliefs, give up
their cherished beliefs, e.g., the "distinguished stratigraphy
professor" at Columbia University, praised by Stephen Jay Gould, who had
initially ridiculed the theory of drifting continents but "spent his last years joyously redoing his life's work" (Ever Since Darwin, W.W. Norton & Company, 1979: 160).
Can we really
explain why Sylvia Browne or the members of the military junta in Myanmar
can sleep at night (assuming they do!) by appealing to the "theory of cognitive dissonance"?
There are people who know what they are doing is wrong and don't care. Even
a simple case that is often brought up by the defenders of the theory of
cognitive dissonance — the case of the smoker who continues his habit of
smoking even though he knows smoking is unhealthy — doesn't measure up. What
is so cognitively uncomfortable about knowing that smoking is unhealthy and
doing it anyway?
There are people who
know what they are doing is wrong, but they have such contempt for the rest
of us that it doesn't make them the slightest bit uncomfortable conning us.
What evidence is there that people who do bad things or believe what they
should know is false are concerned about their self-image? Do mafia hit men
have to deal with cognitive dissonance so they can sleep at night? I'd like
to see the empirical study on that one.
If cognitive dissonance were a problem, it would show up at the level
of methods used to evaluate beliefs. Yet, many people seem to have no
discomfort using science, logic, and reason to establish one set of beliefs,
while using desire, feelings, faith, emotional attachment to a charismatic
leader, and the like to establish another set of beliefs.
On the other hand, who am I to disagree
with more than a half-century of scholarship in the social sciences that
has firmly established the concept of cognitive dissonance? As the
authors of the Wikipedia article on the topic
write: "It is one of the most influential and extensively studied
theories in social psychology." I don't deny that the concept has been
influential. Nor do I deny that it has been extensively studied. What I
see, however, when I look at the kinds of studies used to support the
validity of the concept is a lot of confirmation bias and something akin to the psi assumption
in parapsychology. The general form of the studies in support of
cognitive dissonance goes like this: we predict that x will happen if we
do y; if x happens when we do y it is because of cognitive dissonance; x
happened when we did y, so cognitive dissonance is confirmed. What
I don't see is any attempt to formulate a test of the hypothesis that
could falsify the claim that cognitive dissonance causes anything.
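One way to see the problem is to lay the inference out formally. The confirmation pattern just described has the invalid form known as affirming the consequent, whereas a genuine falsification test would use the valid form, modus tollens:

    If cognitive dissonance is at work, then x will happen when we do y.
    x happened when we did y.
    Therefore, cognitive dissonance is at work. (affirming the consequent: invalid)

    If cognitive dissonance is at work, then x will happen when we do y.
    x did not happen when we did y.
    Therefore, cognitive dissonance is not at work. (modus tollens: valid)

Studies built entirely on the first pattern can pile up confirmations forever without ever putting the hypothesis at risk.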
Researchers even go so far as to claim evidence for cognitive dissonance by finding activity (using fMRI) in the dorsal anterior cingulate cortex and the anterior insula, in a test that simply postulated that cognitive dissonance was occurring whenever those parts of the brain showed activity.
This reasoning seems circular at best. It begs the question. Of the
innumerable possible explanations for seeing what was seen in the fMRIs,
why should we assume they indicated cognitive dissonance?
Festinger and Carlsmith claimed to have found evidence for cognitive dissonance in their 1959 study "Cognitive Consequences of Forced Compliance." Their database consisted of data collected on 71 male students in the introductory psychology course at Stanford University who were "required to spend a certain number of hours as subjects (Ss) in experiments." (The data for 60 of the students was used in the final calculations, 20 subjects in each of three groups. In other words, this was a very small study from which no grand conclusions should have been drawn.) They spent an hour doing some boring, tedious task like turning pegs a quarter turn repeatedly. It was assumed that doing something pointless for an hour would generate a strong negative attitude regarding the task. Unless you have special neural wiring, it seems reasonable to assume that you would be bored by the task, but whether you would develop a strong negative attitude toward it seems questionable. After all, you are in a psych class, you're trying to learn something, and participation in an experiment is a course requirement. Anyway, after completing the boring task for an hour, some of the subjects were asked to talk to someone who was introduced as another subject in the experiment but was actually an actor, and to try to persuade him that the task was interesting and engaging. Some subjects were paid $20; some were paid $1. (Today, you might get 4 pints of beer for $20; in 1959 you could probably get 100 pints of beer for $20. In other words, to most college students in 1959, $20 would have represented a small windfall. Consider, however, that these were Stanford students in 1959, many of whom may not have found much difference between $1 and $20.) One group of subjects was used as a control; these subjects weren't asked to talk to anybody about the task.
At the end of the study, the subjects were asked to rate "how enjoyable" the boring tasks were on a scale of -5 to +5. The average rating for the 20 students in the control group was -0.45; the average for those paid $20 was -0.05; and the average for those paid $1 was +1.35.
This was explained by Festinger and Carlsmith as evidence for cognitive dissonance. The researchers theorized that people experienced dissonance between the conflicting cognitions "I told someone that the task was interesting" and "I actually found it boring." When paid only $1, students were forced to internalize the attitude they were induced to express, because they had no other justification. Those in the $20 condition, however, had an obvious external justification for their behavior, and thus experienced less dissonance. The difference in results might also have been a fluke. The eleven students whose data was not included were rejected for a variety of reasons, but none of them was rejected for being an outlier. With a small group of only 20 students being averaged, a couple of outliers would skew the average. I'm not saying that is what happened in the $1 group, but just a couple of high ratings could account for its average being higher than those of the other two groups. On the other hand, the difference in ratings might be due to something besides cognitive dissonance. Maybe it was due to psychic influence from a paranormal lab across the country. Unlikely, sure, but the authors are just assuming the different ratings can be explained by what they were trying to establish. I don't know why the $1 group rated the boring task as significantly more enjoyable than the other two groups did, but I'm not convinced it had anything to do with cognitive dissonance.
Consider also that when the subjects were asked how much they learned on a scale of 0-10, the groups rated themselves about equally at about 3. If the $1 group had rated their learning at 5, would that have been taken as evidence of cognitive dissonance? The stat I find the most interesting, however, is the one regarding whether the subjects would participate in a similar experiment in the future. None of the groups was very enthusiastic about doing so, but the $1 group was significantly more willing than the other two groups. On a scale of -5 to +5, the $1 group averaged +1.2, while the control and $20 groups averaged -0.62 and -0.25, respectively. Again, an outlier or two in the $1 group might be the main reason for the difference in averages. Or there might be some other reason. With such a small sample, it would seem reasonable to suspect that there might be some other difference between the $1 group and the others that has nothing to do with cognitive dissonance. In any case, even if this study were redone with the same results using 600 subjects, I would still question whether the differences should be explained by cognitive dissonance. Paying people a little bit of money to do a trivial task and then lie about it to someone else might not require any justification in the context of a psychology experiment at Stanford University. After all, it's just an experiment. Paying people a lot of money may have created less incentive by making the task less enjoyable. A token payment may have created the illusion that the subjects were making an important contribution to science.
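To see how little it takes, with only 20 subjects in a group, for one or two outliers to produce a mean like +1.35, here is a minimal sketch; the individual ratings below are invented, since the 1959 paper reports only the group means.

```python
# Invented ratings on the -5 to +5 enjoyment scale. These are NOT the
# Festinger-Carlsmith data; they only illustrate how a couple of
# enthusiastic outliers can move the mean of a 20-subject group.
mostly_neutral = [0] * 18 + [5, 5]           # 18 neutral subjects + 2 enthusiasts
print(sum(mostly_neutral) / len(mostly_neutral))    # 0.5

mostly_mild = [1] * 16 + [0] * 2 + [5] * 2   # mostly mild ratings + 2 enthusiasts
print(sum(mostly_mild) / len(mostly_mild))          # 1.3, near the reported +1.35

without_enthusiasts = [1] * 16 + [0] * 2     # same group minus the two outliers
print(sum(without_enthusiasts) / len(without_enthusiasts))  # ~0.89
```

Drop the two top ratings and the group mean falls by about a third, which is the sense in which just a couple of high ratings could account for the higher average.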
Also, as we learn more about the
fundamental tendency of human behavior to be irrational much of the
time, is there really a need for a theory like cognitive dissonance to
explain why human beings are influenced to do or believe the things they
do? I assume most Christians believe that 1 + 1 + 1 = 3, yet many of
them believe that Abraham's god is one being but three persons. They
also believe that the divine nature transcends anything in the natural
world and is incompatible with human nature, yet many believe that Jesus
was both a god and a man. Finally, Catholics know that if something has all
the properties of bread or wine, it would be absurd to say either is a
duck or a train; yet, they believe that some bread and some wine look
like bread and wine but are actually the body, blood, soul, and divinity
of Jesus. None of these folks seem the least bit bothered
psychologically by these contradictory beliefs.
Finally, there must be many survivors of
the 9.0 earthquake and consequent tsunami that devastated Japan on
March 11, 2011, who believed in the basic goodness of a god or of nature
before that date. What predictions about the beliefs of these people
does cognitive dissonance make? And how would a social scientist tease apart the discomfort they must feel that is due to what happened to them, their loved ones, and their neighbors from the discomfort that is due to cognitive dissonance? Would an fMRI help separate various forms of
psychological discomfort? Am I criticizing hundreds of social scientists
because I am made psychologically uncomfortable by their theory since
it conflicts with what I believe to be true? Am I relieving my cognitive
dissonance by rejecting the concept of cognitive dissonance? And was I
kind to my father not because I loved him but because of the cognitive
dissonance I felt due to an Oedipus complex? Would an fMRI settle the
question?