Monday, April 30, 2012

experimenter effect

The experimenter effect is a bias in a scientific study of human or animal subjects due to the experimenter giving cues or signals to subjects that affect their performance or response. The cues may be subtle, unconscious, nonverbal cues, such as muscular tension or gestures. They may be vocal cues, such as tone of voice or a slight difference in the oral instructions given to control and experimental groups. They may be suggestions provided by informing the subjects of what to expect during the study. Research has demonstrated that the expectations and biases of an experimenter can be communicated to experimental subjects in subtle, unintentional ways, and that these cues can significantly affect the outcome of the experiment (Rosenthal 1998). Something similar happens in non-experimental settings. For example, when questioning witnesses or victims of crimes, investigators must be careful not to unwittingly suggest answers by nodding approvingly or praising some answers but not others. In a fair, unbiased interview, the questioner shouldn't praise or show disapproval of any answer.

In an earlier post on the priming effect, I mentioned a study by Doyen et al. that attempted to replicate earlier work by Bargh et al., which had found that "participants for whom an elderly stereotype was primed walked more slowly down the hallway when leaving the experiment than did control participants, consistent with the content of that stereotype." In their study, Doyen et al. led half the experimenters "to think that participants would walk slower when primed congruently and the other half was led to expect the opposite." Only the subjects instructed by experimenters who expected the aging-related words to slow participants down showed the "walking speed effect."

Since most of us don't do experiments, my concern in this blog entry isn't to help experimenters design less biased experiments, but to help those of us who read accounts of those experiments either in scientific journals or in media accounts. What should we look for to determine whether experimenter bias has significantly affected the outcome of a study? And, when evaluating a journalist's account of a scientific study, are any hints of experimenter bias given?

Recently, I attended a conference sponsored by two skeptics' groups. The first speaker at the conference talked about the neurology of religious experiences. She brought up the work of Michael Persinger, a cognitive neuroscience researcher at Laurentian University in Sudbury, Ontario, Canada. She claimed (and so have many others) that Persinger has induced strange feelings--such as the "feeling of a presence" and other feelings sometimes described as "mystical" or "spiritual"--by sending low-level magnetic pulses to the temporal lobes. He has his subjects put on a device dubbed "the god helmet" while they sit alone in a darkened, silent room for 30-60 minutes. Persinger has been conducting these experiments for at least fifteen years. He has tons of data and many published papers in peer-reviewed journals. However, I knew that Richard Dawkins had put on the god helmet and sat in the makeshift sensory deprivation chamber without feeling the presence of anything unusual except for the helmet on his head. Dawkins and others have speculated that Persinger's subjects are having experiences induced not by magnetic pulses to the temporal lobes but by the power of suggestion and expectation, and by a desire for a weird experience. The subjects know what the experiment is about; they long to experience something "spiritual," or Persinger suggests what they will experience and then, thanks to the suggestion, they do.

When I suggested that we should be skeptical of Persinger's work because it may be suggestion, not magnetic pulses to the temporal lobes, that is inducing the strange feelings, Dr. Sarah Strand proceeded to lay out the evidence in support of Persinger and against the alternative explanation. Unfortunately, the only evidence she supplied came from Persinger himself and his own analysis of his data. Persinger would not be the most disinterested, unbiased party in such an evaluation. Rather than describe experiments where subjects had no idea what to expect, where some were clearly expecting to experience something weird but Persinger gave them no magnetic pulses at all, or a host of other kinds of experiments that would have ruled out experimenter bias and shown that it was the magnetic pulses that were causing the weird feelings, we were told that Persinger had done some sort of reanalysis of the data he's collected over the years.

Furthermore, what would have ruled out experimenter bias is reference to double-blind, randomized controlled studies done in other labs by other researchers--studies that excluded suggestion or some aspect of the quasi-sensory-deprivation-chamber experience as a cause and isolated the magnetic pulses as the most significant factor in inducing such things as the "feeling of a presence." Unfortunately, Dr. Strand didn't cite any other studies. Why? Because they don't exist.

The only other researchers who have tried to replicate Persinger's work are Pehr Granqvist of Uppsala University and his team. In a double-blind, controlled study with 90 participants, they found that the magnetic pulses had no discernible effect. They did find, however, that many subjects in both groups claimed to have had strong religious experiences during the sessions. "Two out of the three participants in the Swedish study that reported strong spiritual experiences during the study belonged to the control group, as did 11 out of the 22 who reported subtle experiences." Persinger argued that the replication failed because the magnetic pulses had not been strong enough or given over a long enough period, which seems absurd given that so many subjects in both the control and experimental groups reported strong or subtle effects.

If other labs with no special interest in the outcome continue to find what Granqvist's team found, then Michael Persinger has been deluding himself and others for fifteen years. He would not be the first Ph.D. to have done so. Nor would he be the first to have tainted his experiments with unintentional bias. (If the reader is wondering why Persinger would think stimulating the temporal lobes would induce a "spiritual" experience, it is probably because there have been many reports of people with temporal lobe epilepsy experiencing such things as "oneness with everything."* For more on "spiritual" experiences associated with temporal lobe epilepsy, see V.S. Ramachandran, Phantoms in the Brain, 1998.)

We should also note that it is common for some scientists and journalists to falsely and unjustly accuse other scientists of experimenter bias when those scientists' experiments contradict the accuser's beliefs. It does seem to be a fact that parapsychologists who are skeptical usually get negative results in their psi studies, while believers in psi often get positive results. One important exception is Susan Blackmore, who, though a true believer, continually got negative results and left parapsychology because of it. She's turned her attention to other matters, including trying to figure out why people believe in psi when the evidence for it is so flimsy. In any case, a skeptic (Richard Wiseman) and a true believer (Marilyn Schlitz) explored the experimenter effect while doing a joint study on the staring effect. In "Experimenter effects and the remote detection of staring," they describe their attempt:
Both authors of the present paper previously attempted to replicate this staring effect. The first author (R. W.) is a skeptic regarding the claims of parapsychology who wished to discover whether he could replicate the effect in his own laboratory. The second author (M. S.) is a psi proponent who has previously carried out many parapsychological studies, frequently obtaining positive findings. The staring experiments carried out by R. W. showed no evidence of psychic functioning (Wiseman & Smith, 1994; Wiseman, Smith, Freedman, Wasserman, & Hurst, 1995). M. S.'s study, on the other hand, yielded significant results (Schlitz & LaBerge, 1997).
Even though the authors designed the experiments together, the skeptic got negative results and the psi proponent got positive results. They offer several possible explanations for the difference in their results, and I encourage the reader to review them. One explanation they don't seem to consider is that the difference could have been a fluke. More joint experiments by skeptics and psi proponents might resolve the issue, but there is so much hostility between parapsychologists and skeptics that cooperation like that of Schlitz and Wiseman is rare. In any case, the charge of experimenter bias should be ignored, whether made by a skeptic or a psi proponent, unless it is backed up by specific evidence that bias has likely occurred. Claims that the beliefs of skeptics and psi proponents affect the telepathic or precognitive abilities of subjects are pure speculation and beg the question.
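How plausible is the fluke explanation? A minimal simulation can give a feel for the numbers. The sketch below assumes a simplified staring study--each session is a yes/no detection that succeeds at the 50% chance rate (i.e., no psi at all), each experimenter runs 100 sessions, and a result counts as "significant" at one-tailed p < 0.05. None of these numbers comes from the actual Wiseman-Schlitz protocol; they are stand-ins for illustration.

```python
import math
import random

def p_value(hits, n, p=0.5):
    """Exact one-tailed binomial p-value: P(X >= hits) under pure chance."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(hits, n + 1))

def significant(n_sessions=100):
    """Run one hypothetical staring experiment with no real effect:
    every detection is a fair coin flip. True if it looks 'significant'."""
    hits = sum(random.random() < 0.5 for _ in range(n_sessions))
    return p_value(hits, n_sessions) < 0.05

random.seed(1)
pairs = 10_000
splits = sum(significant() != significant() for _ in range(pairs))
print(f"Pairs of null experiments that disagree: {splits / pairs:.1%}")
```

With these made-up parameters, roughly 8% of experimenter pairs disagree even though neither is measuring anything real--uncommon, but hardly miraculous, which is why the fluke explanation can't be dismissed without further replications.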

Monday, April 23, 2012

halo effect

The halo effect refers to a bias whereby the perception of a positive trait in a person or product positively influences further judgments about other traits of that person, or about other products from the same manufacturer. One of the more common halo effects is the judgment that a good-looking person is intelligent and amiable.

There is also a reverse halo effect, whereby perception of a negative or undesirable trait in individuals, brands, or other things influences further negative judgments about the traits of that individual, brand, etc. If a person "looks evil" or "looks guilty," you may judge anything he says or does with suspicion; eventually you may feel confident that you have confirmed your first impression with solid evidence when, in fact, your evidence is completely tainted and conditioned by your first impression. The hope that the halo effect will influence a judge or jury is one reason some criminal lawyers like their clients to be clean-shaven and neatly dressed when they appear at trial.

The phrase was coined by psychologist Edward Thorndike in 1920 to describe the way commanding officers rated their soldiers. He found that officers usually judged their men as being either good or bad "right across the board. There was little mixing of traits; few people were said to be good in one respect but bad in another."* The old saying that first impressions make lasting impressions is at the heart of the halo effect. If a soldier made a good (or bad) first impression on his commanding officer, that impression would influence the officer's judgment of future behavior. It is very unlikely that, in any group of soldiers, every one of them would be totally good or totally bad at everything, but the evaluations seemed to indicate that this was the case. More likely, the earlier perceptions either positively or negatively affected the later perceptions and judgments.

The halo effect seems to ride on the coattails of confirmation bias: once we've made a judgment about positive or negative traits, that judgment influences future perceptions so that they confirm our initial judgment.

Some researchers have found evidence that student evaluations of their college instructors are formed and remain stable after only a few minutes or hours in class. If a student evaluated a teacher highly early in the course, he or she was likely to rank the teacher highly at the end of the course. Unfortunately for those teachers who made bad first impressions, their performance over the course of the term would be largely irrelevant to how their students perceived them. Some might think this shows how wonderful intuition is: students can perceive how good a teacher is within minutes or hours of meeting her. On the other hand, the halo effect may be at work here. Also, the fact that the evaluations are similar at the beginning and end of the semester might indicate that there is something seriously wrong with the typical evaluation form. It may be measuring little more than likeability and the halo effect.

In The Halo Effect: ... and the Eight Other Business Delusions That Deceive Managers (Free Press 2009), Phil Rosenzweig writes:
Much of our thinking about company performance is shaped by the halo effect … when a company is growing and profitable, we tend to infer that it has a brilliant strategy, a visionary CEO, motivated people, and a vibrant culture. When performance falters, we’re quick to say the strategy was misguided, the CEO became arrogant, the people were complacent, and the culture stodgy … At first, all of this may seem like harmless journalistic hyperbole, but when researchers gather data that are contaminated by the halo effect – including not only press accounts but interviews with managers – the findings are suspect. That is the principal flaw in the research of Jim Collins’s Good to Great, Collins and Porras’s Built to Last, and many other studies going back to Peters and Waterman’s In Search of Excellence. They claim to have identified the drivers of company performance, but they have mainly shown the way that high performers are described.
The fact is, despite the success of books like Good to Great, it is not logically justifiable to infer the goodness (or badness) of a company's strategy, values, or leadership from the company's success or failure. The reason is obvious: many companies go from good to bad without changing their strategy or their leadership. But there is a natural bias to attribute good qualities to management and leadership when a company is successful and bad qualities when it is failing.

When an athletic team succeeds or fails, the coach is often assumed to be a genius or an idiot, even though the same coach doing the same things year in and year out has some successful years and some failures. Many people are quick to credit the coach when the team wins, conveniently forgetting that the same coach led the team to a fifth-place finish last year. The success or failure of the team leads many people to perceive corresponding qualities in the coach's strategy, plan, preparation, work ethic, etc. One finds similar comments from TV golf announcers about various golf coaches. If a coach has several players who win tournaments or play well during a relatively short period of time, the coach is a genius. When his players fire him or stop winning, the same coach's methods are archaic and outdated, not suitable for the modern game.

Ronald Reagan was sometimes called the Teflon President, but he might more accurately have been called the Halo Effect President. The man had very few qualities that qualified him to be the leader of the free world, but he was the most likeable man on the planet. He exuded confidence and steadfastness as the good guy in the white hat. He was the John Wayne of politics. He could also be self-effacing when it suited him, and his ability to deliver a story or a joke was second to none. He had little difficulty giving the impression that he knew what he was talking about when he told us that trees cause more pollution than automobiles or that government is the cause of all our problems. The public had little problem excusing the fact that while he was president, Oliver North was selling arms to Iran and using the money to fund a group of terrorists...excuse me...freedom fighters in Nicaragua, unbeknownst to Congress or the American people. Reagan had a great sense of humor, and you felt you could trust him with your only offspring and that he wouldn't cheat you at cards. But his reputation for being a great leader is mainly conditioned by his personality, his great speech writers, and the "lamestream media." When people long for a leader like Reagan to reappear, what they are really longing for is an actor who could put anyone at ease with his perceived authority, honesty, confident manner, wry smile, and incomparable sense of humor.

Barack Obama benefited tremendously from the halo effect in his 2008 election campaign for president. His powerful speaking ability and the fact that he's highly educated and appears physically fit (though I understand he's a smoker) led many people to assume he possesses many other fine qualities that would qualify him to be leader of the free world. The fact is that Mr. Obama had very little experience that would qualify him to be president of the United States. He was helped significantly, of course, in his election bid when his opponent, John McCain, chose Sarah Palin for a running mate, thereby undermining one of the main themes of his campaign: country first. There is little doubt that Sen. McCain has the right stuff to be president, but if there was anyone less qualified to be president than Mr. Obama, it was surely Ms. Palin. (She wasn't running for president, of course, but if McCain died in office, she would have become leader of the free world...a frightening thought, indeed.)

Anyway, it appears the Founders didn't think it would be too hard to be president since the only job qualifications are being born here at least thirty-five years before becoming president and living here for fourteen years. That's right. Most adults in this country meet the minimum qualifications to be president. That's comforting.

Monday, April 16, 2012

continued influence effect

"A radical environmentalist, a socialist, and an illegal alien walk into a bar. The bartender says: 'What you having, Mr. President?'"--Sam Aanestad, Republican candidate for Congress, warming up the crowd at a Tea Party Patriots debate
The ‘continued influence effect’ is short for ‘the continued influence of misinformation.’ The term refers to the way false claims enter memory and continue to influence beliefs even after they have been corrected. Unfortunately, many people do not understand how memory works. Worse, they have little interest in the science of memory. If a false claim fits with beliefs that more or less define a person's worldview and has a strong emotional component, that person will instinctively accept the false claim rather than investigate it as a critical thinker would. For example, a few people—whose motives we need not explore here—were able to manipulate the mass media to make a story out of the claims that Barack Obama was born in Kenya and is a Muslim. If these claims were true, Mr. Obama would not be eligible to be president of the United States. Actually, only the first claim--that he was born in Kenya--would make him ineligible. Despite the presentation of overwhelming evidence that President Barack Obama is a Christian and was born in Hawaii, many Americans continue to believe otherwise. On March 12, 2012, the results of a poll in Alabama and Mississippi found that among Republican voters about half still believe he is a Muslim. The importance of emotion and worldview in sustaining this erroneous belief is indicated by the fact that about 25% of those who think Obama is a Muslim also believe that interracial marriage should be illegal. (Obama's father was a black Kenyan; his mother was a white American.) One year ago, a national poll found that one-fourth of all Americans think President Obama was not born in the United States. Among Republicans and Tea Party supporters, 45% believe he was born in another country. It is very difficult to be fair and balanced in evaluating new information when one has a strong emotional attachment to beliefs that conflict with the new information.

Some people believed Vice-president Dick Cheney when he claimed: “There is no doubt that Saddam Hussein now has weapons of mass destruction; there is no doubt that he is amassing them to use against our friends, against our allies, and against us.” The good news is that many people changed their minds when provided with good evidence contrary to what Cheney had claimed. Several years after Cheney made his false claim and the evidence for it remained near zero, the percentage of Americans who accepted the falsehood about Saddam and weapons of mass destruction dropped from 36 to 26 percent. Still, one out of four Americans believing something false is not something to be proud of.

It should be obvious that most of us are not critical of claims that fit well with our prejudices and emotion-laden beliefs. Still, you would think that we would give up believing something once the evidence shows that we're wrong, especially since most of us are encouraged in childhood to be truthful and honest. The science indicates otherwise. See, for example, my previous post on the backfire effect. Even without the science, most of us know from experience that some nuts are nearly impossible to crack. One of the more obvious examples is religion. Last year, a Gallup poll found that 3 in 10 Americans take the Bible to be the literal word of the god of Abraham. Another 49 percent say the Bible is inspired by a god but should not be taken literally. When you are taught something from childhood that is continually reinforced by your family and other communities, it is very difficult to be fair and balanced in evaluating evidence that conflicts with those teachings. On the other hand, in areas where emotion is less dominant, when people are faced with overwhelming evidence contrary to what they believe, they correct their errors. This is what happens in science again and again, unlike what has occurred with fundamentalist religious believers.

It has long been known that false information can influence memory. Recent studies have found that correcting false information often has little effect on changing beliefs. Discredited information continues to influence reasoning and understanding even after it has been corrected. The backfire and continued influence effects should be disheartening to those who think that the first step in arguing with people who base their beliefs on misinformation is to get them to see what the facts are. Correcting errors is pointless when dealing with people who attribute their own beliefs to principled, unprejudiced inquiry while attributing the beliefs of those who disagree with them to bias and ulterior motives. But even if a person admits that those who disagree with him have integrity and are really seeking the truth, you are probably wasting your time providing data and facts if the claim you are trying to correct challenges his gut feelings and core beliefs.

Critical thinkers want errors corrected. At the very least, getting the facts right might prevent some faulty inferences and keep one from behaving in ways that could prove harmful. Is there any hope that those who tend to stick to their beliefs--no matter what the evidence--can change? Yes, there's some hope, but it is very slight. A study by Ullrich Ecker, Stephan Lewandowsky, and David Tang found that giving subjects detailed information about the continued influence effect reduced their reliance on outdated information but did not eliminate it. They also found that reminding people that facts are not always properly checked before information is published in the media didn't have much effect on reducing the continued influence of misinformation. Holly M. Johnson and Colleen M. Seifert have argued that providing a plausible causal alternative, rather than simply negating misinformation, mitigates the continued influence effect ("Sources of the continued influence effect: When misinformation in memory affects later inferences"). They may be right for some beliefs, but I have not found that providing a causal alternative to astrologers, acupuncturists, homeopaths, parapsychologists, or defenders of applied kinesiology, for example, has had much effect on true believers. Political beliefs, religious beliefs, and conspiratorial beliefs seem impenetrable to facts that contradict them. Changes in these beliefs seem more likely to occur outside of direct confrontation with opponents.

One obviously important area where reliance on misinformation can be harmful is in the courtroom. Jurors’ reasoning is influenced by misinformation. Just warning them that something they’ve been presented with is false won’t necessarily prevent the false information from affecting their thinking. We like to think that such warnings would prevent our memories from being distorted, but the way memory ordinarily works is that it often instinctively draws on misinformation—even misinformation that we know is wrong because we’ve had it corrected. In addition to the Ecker et al. study and the Johnson and Seifert study, another important study has found evidence for the continued influence of corrected misinformation: Brendan Nyhan and Jason Reifler's “When Corrections Fail: The persistence of political misperceptions.”

So, is there anything that might change the minds of those who believe Obama is a Muslim born outside of the United States or that Saddam Hussein was behind 9/11 and had weapons of mass destruction before George W. Bush ordered an invasion of Iraq? I would argue that if a person is driven by emotions, especially fear, you probably have little chance of changing his mind. If, however, a person is not driven by emotion and is flexible and open to new information, you have a good chance of changing that person’s mind by providing accurate information backed up with reliable sources. What percentage of so-called birthers are not driven by emotion and are comfortable with changing their minds as new information becomes available? Figure that out and you will know the probable odds of your success at persuading a birther to change his mind. I’d say your odds of success are about the same as those of getting someone who considers “abortion the ultimate child abuse” to engage in a rational discussion of the moral and legal issues regarding abortion.

Finally, further complicating matters are suggestions that some people's personalities and brains are structured in ways that make them nearly impenetrable to data that conflicts with what in their hearts they know to be true. Their belief armor, perhaps, makes them impervious to change unless one appeals directly to their emotions and gut feelings without challenging the core beliefs that define who they are. Data runs off the backs of some people who are moved to tears by an emotional story that is merely anecdotal. Authoritarian personalities, certain of what they know, are driven by fear of "liberals" conspiring to take away their freedom and establish an atheistic, socialist state; they are not going to be very open-minded when it comes to things like Obama's citizenship or abortion. The belief armor is strengthened by the tendency of many people to seek out sources of misinformation, perhaps deluding themselves into thinking that they are truth seekers. But, hey, Stephen Colbert figured this out a long time ago.


Monday, April 9, 2012

clustering illusion

In 2003, the mother of a child with leukemia in the Sacramento area thought it odd that there were several other people with cancer in her neighborhood. She did a survey and found what appeared to be an excessive number of cancers in the area. I can understand the woman’s desire to find something to blame for her child’s illness and I can also understand how the average person might be led to think that there must be something in the local environment that is causing the cancers. However, several things should be considered.

The boundaries of the woman's survey area extended to wherever she arbitrarily decided to draw them. She included all kinds of cancers in her survey, not just leukemia. She included people who had lived in the area for various amounts of time, ranging from having been born there to having moved into the area in adulthood.

Epidemiologists tried to explain that when you looked at the numbers, they weren't that unusual and didn't warrant investigation into an environmental cause. The data showed no difference between the leukemia rate in her area and the rate in the rest of the Sacramento region. The Sacramento Bee took up her cause, and eventually an official government analysis of the water supply found no environmental toxins in the water. Still, the Bee persisted and identified tungsten found in tree rings as the probable culprit, even though tungsten is not a known human carcinogen and the connection between tungsten in trees and cancer in humans is speculative.

What the Bee and its experts were calling "a cancer cluster" is considered by epidemiologists to be an example of the clustering illusion: the intuition that random events occurring in clusters are not really random. To some, the occurrence of a number of cancers in a defined space cries out for a causal explanation in terms of some unknown environmental hazard. To others, familiar with the data and knowledgeable about proper statistical analysis, the same number of cancers occurring within the same defined space is expected by the laws of chance.

The Centers for Disease Control investigated 108 cancer clusters between 1961 and 1990. None could be linked with environmental causes. It may seem improbable, but given the number of neighborhoods in a state as large as California, the chances are better than even that at least one of them will show a statistically significant cluster of cancer cases purely by chance.
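The arithmetic behind that claim is just the statistics of multiple comparisons, and a toy simulation shows how it works. The numbers below (5,000 neighborhoods, an average of 20 expected cancer cases in each) are invented for illustration, not actual California figures; the point is that even when every neighborhood shares exactly the same underlying cancer rate, a statewide search will still turn up dozens of "significant" clusters.

```python
import math
import random

random.seed(7)

def poisson_sample(lam):
    """Draw from a Poisson distribution (Knuth's algorithm)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

n_hoods = 5_000    # hypothetical number of neighborhoods surveyed
expected = 20.0    # hypothetical expected cancer cases per neighborhood

# Smallest case count rare enough to be flagged as a "cluster" (p < 0.01).
cutoff = next(k for k in range(1000) if poisson_tail(k, expected) < 0.01)

counts = [poisson_sample(expected) for _ in range(n_hoods)]
clusters = sum(c >= cutoff for c in counts)
print(f"Cluster cutoff: {cutoff}+ cases; 'clusters' found: {clusters}")
```

Even with a fairly strict p < 0.01 test per neighborhood, this identical-rate world yields on the order of 30-50 "statistically significant" clusters, none of which has an environmental cause. That is why epidemiologists are unimpressed by a cluster found after the fact by searching.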

The clustering illusion is due to selective thinking based on an intuitive but false assumption regarding statistical odds. For example, it strikes most people as unexpected if heads comes up four times in a row during a series of coin flips. However, in a series of 20 flips, there is about a 50% chance of getting four heads in a row. In any short run of coin flips, a wide variety of streaks and patterns is expected, including some that seem highly improbable.
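If that 50% figure seems hard to believe, it's easy to check with a quick simulation (a sketch of my own, not anyone's published code):

```python
import random

random.seed(0)

def run_of_heads(n_flips=20, run_length=4):
    """Flip a fair coin n_flips times; return True if heads ever
    comes up run_length times in a row."""
    streak = 0
    for _ in range(n_flips):
        if random.random() < 0.5:   # heads
            streak += 1
            if streak == run_length:
                return True
        else:
            streak = 0
    return False

trials = 100_000
hits = sum(run_of_heads() for _ in range(trials))
print(f"Chance of 4+ heads in a row in 20 flips: {hits / trials:.2f}")
```

The estimate comes out near 0.48, matching the exact combinatorial answer: whether 20 flips contain a run of four heads is itself very nearly a coin flip.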

Sometimes a subject in an ESP experiment or a dowser might be correct at a higher-than-chance rate over a limited period of time. Such results, however, do not show that something other than chance is at work; runs like these are predicted by the laws of chance. Rather than being signs of non-randomness, they are actually signs of randomness. ESP researchers are especially prone to take streaks of "hits" by their subjects as evidence that psychic power varies from time to time.

A classic study on the clustering illusion demonstrates just how hardheaded we are when it comes to facing facts that don't support our beliefs. The study, done by Thomas Gilovich and colleagues, centered on the belief in the "hot hand" in basketball. It is commonly believed by basketball players, coaches, and fans that players have "hot streaks" and "cold streaks." A detailed analysis of the Philadelphia 76ers during the 1980-81 season failed to show that players hit or missed shots in clusters any more than would be expected by chance. Gilovich et al. also analyzed free throws by the Boston Celtics over two seasons and found that when a player made his first shot, he made the second shot 75% of the time, and when he missed the first shot, he made the second shot 75% of the time. Basketball players do shoot in streaks, but within the bounds of chance. It is an illusion that players are "hot" or "cold." When presented with this evidence, believers in the "hot hand" are likely to reject it because they "know better" from experience.
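To see why streaky-looking shooting is exactly what chance predicts, consider this sketch of a purely "unstreaky" shooter: a hypothetical player whose every free throw is an independent 75% proposition, loosely mirroring the Celtics' overall rate. Nothing in the simulation makes him hot or cold, yet his record is full of impressive runs.

```python
import random

random.seed(3)

P_MAKE = 0.75    # fixed make probability, applied independently to each shot
shots = [random.random() < P_MAKE for _ in range(1_000)]

# Conditional make rates: a real "hot hand" would make the first number
# noticeably larger than the second. Independence keeps both near 75%.
after_make = [b for a, b in zip(shots, shots[1:]) if a]
after_miss = [b for a, b in zip(shots, shots[1:]) if not a]
print(f"Make rate after a make: {sum(after_make) / len(after_make):.2f}")
print(f"Make rate after a miss: {sum(after_miss) / len(after_miss):.2f}")

# Longest run of consecutive makes: streaks appear without any "heat."
longest = streak = 0
for made in shots:
    streak = streak + 1 if made else 0
    longest = max(longest, streak)
print(f"Longest streak: {longest} makes in a row")
```

Over 1,000 simulated shots the two conditional rates come out nearly identical, just as Gilovich et al. found in the real data, while the longest streak typically runs around 20 consecutive makes--the kind of run fans would swear could only come from a hot hand.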

Monday, April 2, 2012

intentionality bias

Intentionality bias refers to the tendency to see intentions in the movements of both animate and inanimate objects. This bias serves us well in most interactions with purposive agents, such as other humans, but even then we often see intentionality or purposiveness where there is none. A drunk bumps into us at the bar and spills his drink on our back. We're sure he did it on purpose, though it may well have been an accident. 

In ambiguous situations, some people might view an act as unintentional, while others see it as intentional. Your sister helps clear the dishes after dinner and drops a cherished serving dish you brought back from a foreign country, shattering it into a dozen shards. Everyone else accepts her apology for the accident. You're sure she did it on purpose to get back at you for some slight she may have felt during the course of the mealtime conversation.

Most adults who have learned the basics of science are likely to see processes in the natural world--such as thunderstorms, earthquakes, or the eruption of volcanoes--in mechanistic terms. We don't think any intentional agents bring about such events. We learned in school, however, that many of our ancient ancestors perceived the natural world as full of "spirits" or invisible intentional agents. It seems likely that intentionality bias emerged with the evolution of the earliest humans. Several studies on intentionality bias in children indicate that a natural way of perceiving and making sense out of the natural world is to see intentional agents behind the movements of many things that adult scientists attribute solely to mechanistic forces. Intentionality bias in children has led Justin L. Barrett, a psychologist at Fuller Theological Seminary, to claim that we are "born believers" in religious claims and that "religion comes nearly as naturally to us as language." (For more on Barrett and his book Born Believers, see The Daily Beast.)

I would argue that all we're justified in inferring from the natural bias toward perceiving intentional agents behind the movements of both animate and inanimate objects is that it is natural to think anthropomorphically about natural events and that it's natural to think that others like us have intentions like we do. I think it is a long way from the intentionality bias to religion of any kind, though clearly some religions have taken advantage of this bias in promoting their beliefs. I don't think it is inevitable, though, that seeing agents behind weather patterns or geologic events must lead to personifying those agents into beings like ourselves, only better. Attributing good looks, immortality, perfect health, and magical powers to gods may have been the next step for many early human societies, but was it inevitable that that step be taken? Once that step was taken, was it inevitable that humans would start trying to control these agents with bribes of virgins, burnt meat, and sizzling rice soup (ok, I made up the latter, but different cultures offered up different things to their gods based on local tastes)? I don't think so, nor do I think it was inevitable that monotheism or belief in an agent who is bodiless but powerful enough to create the universe by an act of will would emerge and supplant polytheism. There are some logical gaps in moving from seeing intentions in a thunderstorm to seeing a "pure spirit" whose intentions created the entire universe, and to building worship houses to appease and honor this supreme being. I think it is a long way from seeing purpose in an earthquake to seeing earthquakes as direct acts by a supreme being to harm creatures that exist only because the being wills it. There is no necessary connection between seeing agents everywhere in nature and seeing everything in nature and human society occurring only because some invisible being with extreme powers wills it. Also, the leap to claiming that some of these invisible agents communicate with humans in dreams or visions, revealing the gods' intentions, wasn't inevitable. Nor would I say that intentionality bias has led inevitably to claims that one's life and ambitions are part of some god's plan. But without belief in intentionality, no gods would likely have been created by humans. (For those who like their logic straight: intentionality is a necessary but not a sufficient condition for belief in gods.)

Clearly, the intentionality bias is stronger in some people than in others. Combine it with the human need for significance and meaning, and at one extreme you have people who see everything as purposive. Nothing happens by accident. Even accidents have a meaning. An invisible supreme being not only watches over everything that happens but doesn't let anything happen except according to plan. At the other extreme are those who see no intentionality at all in the natural world and who see all human acts, even intentional acts, as determined by causes. Many people see no need for gods, spirits, or any other invisible intentional agents to explain the natural world. In the social world, there seem to be some people who find it very difficult to "read the minds" of other humans, but who have little difficulty in "reading the minds" of cattle or other animals. Temple Grandin, who was diagnosed as autistic as a child and who says that today she would be diagnosed with Asperger syndrome, comes to mind. She attributes at least part of her ability to empathize more with cattle than with humans to the fact that she thinks in pictures. There has been some research suggesting that a deficit in the intentionality bias is related to autism and Asperger syndrome.

The tendency of early humans and modern children to see intentional agents behind mechanistic processes may be an expression of an essential adaptation for social creatures such as humans. The inability to perceive the intentions of others is a major hindrance to social development. It is obvious that the intentionality bias is a necessary condition for perceiving the intentions of others. If you can't perceive intentionality, you certainly can't perceive the specific intentions of others.

The advantages of intentionality bias to social beings are many. Cooperative collective action may be fine for ants and termites functioning mechanistically and without conscious regard for the intentions of their comrades in building or carrying supplies. But the scope of their actions is severely limited by the inability to perceive purposiveness in each other's behaviors. Instinctive reactions to others, unaided by a natural tendency to perceive their intentions, would have kept our species from evolving into the creatures that built pyramids, aqueducts, skyscrapers, and, yes, cathedrals. The inability to determine whether another animal's intentions are benign or malevolent would be a great disadvantage to any mammal.

Some have linked mirror neurons to the intentionality bias. Mirror neurons discharge both when a person (or monkey) executes a motor act and when the person (or monkey) observes another individual performing the same or a similar motor act. The following abstract might help clarify the idea that mirror neurons are linked to intentionality bias:
Our social life rests to a large extent on our ability to understand the intentions of others. What are the bases of this ability? A very influential view is that we understand the intentions of others because we are able to represent them as having mental states. Without this meta-representational (mind-reading) ability their behavior would be meaningless to us. Over the past few years this view has been challenged by neurophysiological findings and, in particular, by the discovery of mirror neurons. The functional properties of these neurons indicate that intentional understanding is based primarily on a mechanism that directly matches the sensory representation of the observed actions with one's own motor representation of those same actions. These findings reveal how deeply motor and intentional components of action are intertwined, suggesting that both can be fully comprehended only starting from a motor approach to intentionality.

The tendency to infer intentionality in the behavior of others has been the subject of much study. Several experiments with both children and adults have shown that both have little difficulty seeing the movement of computer-generated colored shapes as intentional. I see several benefits stemming from such research. One is to determine how strong the intentionality bias is in people; another is to help those who suffer socially because they do not have a strong intentionality bias. A third is to establish just how natural and strong the tendency to find causal relationships, even misguided intentional relationships, is in human beings. Intuitively, an adult might see one triangle as "chasing" a circle, but on reflection most adults recognize that chasing requires intentionality and triangles aren't intentional agents. Intuitively, one might perceive invisible agents guiding natural processes or watching over what happens to you, but on reflection adults should recognize the implausibility of these agents actually existing. After all, if you found yourself unable to recognize intentionality in others, the last place you'd think to look to rectify this problem would be for intentional agents controlling the situation.