Monday, June 25, 2012

positive-outcome bias

Positive-outcome bias is a type of publication bias: the tendency for journals and the mass media to publish articles about scientific studies that have a positive statistical outcome. (You might consider a study finding a strong correlation between not smoking cigarettes and not getting lung cancer to have a negative outcome in everyday terms, but a study that finds a significant statistical correlation--that is, a correlation not likely due to chance--between not smoking and not getting lung cancer is a positive-outcome study.) Studies that find nothing of statistical significance or of possible causal consequence often don't get published. Because of various types of publication bias, the scientific community and the general public are often presented with a skewed and biased view of the findings of scientific researchers.

Researchers who find nothing of statistical significance in small studies usually do not present their findings at scientific meetings, nor do they submit their work to journals. Such behavior is known as the file-drawer effect, since these studies get filed rather than submitted for publication. Large studies--whether observational or controlled--will usually get published unless they have some obvious methodological flaw. Small studies are more susceptible to statistical flukes than large studies, other things being equal. The most common statistical formula used in the social sciences and medical studies considers a statistic significant if there is only a 5% chance of its being a fluke. Statistical significance does not mean that a statistic is important; it means that according to some formula the statistic is not likely due to chance, i.e., not likely a fluke. The smaller the sample in a study, the greater the chance of finding statistical significance when a larger study would find nothing of statistical significance. Also, the smaller the study, the greater the chance of missing a correlation that a larger study would find at some level of statistical significance. The latter situation occurs more frequently when the true correlation is small. Again, statistical significance does not mean important. It may be true that your study had a sample of 18,000 and found a statistically significant difference in heart attacks between subjects taking a dummy pill and subjects taking rosuvastatin (to lower cholesterol), but that doesn't mean the difference is important. (The difference was 0.2 events per 100 person-years. What the side effects of taking the statin over a period of years might be is unknown, but they might outweigh the small benefit of taking it.)
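Both points can be made concrete with a rough simulation. The sketch below is a toy example, not a reanalysis of any real trial; the two-sample t-test, the group sizes, and the effect sizes are my own assumptions, chosen only to illustrate the pattern. Part (a) shows that even when there is no real effect at all, roughly one small study in twenty still comes out "statistically significant" at the 5% level; part (b) shows that a difference far too small to matter becomes statistically significant once the sample is large enough.

```python
# A toy simulation (not a reanalysis of any real trial): two-group comparisons
# with a two-sample t-test and the usual 5% significance threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# (a) 1,000 small studies drawn from the SAME population: no true effect,
#     yet roughly 1 in 20 comes out "statistically significant."
flukes = 0
for _ in range(1000):
    a = rng.normal(0.0, 1.0, size=20)    # 20 subjects per arm
    b = rng.normal(0.0, 1.0, size=20)
    _, p = stats.ttest_ind(a, b)
    flukes += p < 0.05
print(f"'Significant' null studies: {flukes / 1000:.1%}")    # about 5%

# (b) a real but trivially small difference between the groups:
#     invisible in a small study, "significant" in a huge one.
for n in (50, 50_000):
    a = rng.normal(0.00, 1.0, size=n)
    b = rng.normal(0.03, 1.0, size=n)    # tiny, unimportant difference
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>6}: p = {p:.3f}")
```

The particular numbers don't matter; the pattern does. Statistical significance tracks sample size at least as much as it tracks importance.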

Publishing large studies that have positive but unimportant outcomes is one way that scientific journals can bias the information the mass media filters for our consumption. Another way is to publish small studies with positive outcomes and expect journalists and the general public to recognize that nothing much should be made out of a single small study.

But even when scientists do submit work with negative findings--such as not finding any evidence for precognition--their work is often rejected simply because it is not positive. A recent example of this bias occurred with the Journal of Personality and Social Psychology, a journal of the American Psychological Association (APA). In 2011, this journal published work by parapsychologist Daryl Bem [Vol 100(3), Mar 2011, 407-425] that purported to find positive evidence in support of precognition ("Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect"). When scientists Stuart J. Ritchie, Richard Wiseman, and Christopher C. French submitted a paper that replicated the best of Bem's work but which resulted in no evidence for precognition, the journal refused to consider the research paper for publication. (A study is considered a 'replication' of another study if it replicates the methods of that study, regardless of the results found in the new study.) They were told that the journal does not publish replications. We will never know what the journal would have done had the negative study been submitted first, but my guess is that it would not have been published because its results harmonize with what most psychologists take for granted: there is no precognition.  (The failed replication was published online at PLoS ONE and is called "Failing the Future: Three Unsuccessful Attempts to Replicate Bem's ‘Retroactive Facilitation of Recall’ Effect.")

Mass media articles about scientific work often mislead the public because they do not report on negative-outcome studies. (Again, I remind the reader that a negative-outcome study is one that finds nothing of statistical significance.) Worse, many of the studies covered by the mass media are small studies that should not be generalized from. The most outrageous recent example of the mass media turning a small study into a major catastrophe is the Andrew Wakefield report on 12 children. Wakefield claimed he found a connection between the MMR vaccine and developmental disorders, and members of the anti-vaccination movement used his report to incite a panic regarding vaccines and autism. Because of the concern raised over vaccines and developmental disorders, several large studies were conducted, and they all failed to find evidence of a correlation between vaccines and developmental disorders. These studies were published and widely publicized by the mass media. Nevertheless, the damage had been done, and the subsequent reports in both scientific journals and the mass media have done little to quell the panic. Also, rather than change their minds about vaccines, the anti-vaccinationists have found many reasons to reject the studies that show their position is wrongheaded, thereby exemplifying the backfire effect.

One of the ways in which positive-outcome bias skews our understanding of the results of scientific research is in how it affects systematic reviews, such as those done by the Cochrane Collaboration. This large group of academics from around the world tries to examine all the scientific studies that have been done on a particular medical treatment, conventional or unconventional. Different studies are given different values depending on how they were designed, how large they were, etc. The group tries to determine in an unbiased way what the best evidence is for any particular treatment. But their work can be very misleading because often negative studies don't get published, which they admit:
Systematic reviews aim to find and assess for inclusion all high quality studies addressing the question of the review. But finding all studies is not always possible and we have no way of knowing what we have missed. Does it matter if we miss some of the studies? It will certainly matter if the studies we have failed to find differ systematically from the ones we have found. Not only will we have less information available than if we had all the studies, but we might come up with the wrong answer if the studies we have are unrepresentative of all those that have been done.
We have good reason to be concerned about this, as many researchers have shown that those studies with significant, positive, results are easier to find than those with non-significant or 'negative' results. The subsequent over-representation of positive studies in systematic reviews may mean that our reviews are biased toward a positive result.
The value of scientific studies is often measured by how many scientists make reference to the study. Citation bias, an inevitable consequence of positive-outcome bias, magnifies the skewing problem. The Cochrane Collaboration uses the funnel plot to estimate the significance of positive-outcome bias in a systematic review. "It assumes that the largest studies will be near the average, and small studies will be spread on both sides of the average. Variation from this assumption can indicate publication bias." Of course, if large studies with negative results are stuck in the file drawer, the funnel plot will be misleading. This is not likely to happen as long as the studies are methodologically sound. But some of these large studies may mislead us into thinking that some small difference, though statistically significant, is important when it isn't.
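To see why the funnel plot works, here is a minimal sketch using simulated studies; the effect size, the study sizes, and the crude "file-drawer" rule below are all invented, and this illustrates the idea rather than the Cochrane Collaboration's own procedure. With unbiased publication the points form a symmetric funnel that narrows toward the true effect as studies get larger; when small studies with unimpressive results stay in the file drawer, one side of the funnel is visibly thinned out.

```python
# A minimal sketch of the idea behind a funnel plot, using simulated studies.
# The effect size, study sizes, and "file-drawer" rule below are invented.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
true_effect = 0.2
sizes = rng.integers(20, 2000, size=300)                         # assorted study sizes
estimates = true_effect + rng.normal(0, 1, size=300) / np.sqrt(sizes)

# Crude file-drawer effect: small studies with unimpressive estimates go unpublished.
published = (sizes > 500) | (estimates > true_effect)

plt.scatter(estimates[published], sizes[published], s=12)
plt.axvline(true_effect, linestyle="--", label="true effect")
plt.xlabel("estimated effect")
plt.ylabel("study size")
plt.legend()
plt.title("Asymmetric funnel: small 'negative' studies missing")
plt.show()
```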

A study that tried to measure positive-outcome bias in peer review at a medical journal found small but significant differences in the ways reviewers evaluated positive and negative studies.




Monday, June 18, 2012

selection bias

Selection bias is the selection of a sample that is likely to favor one proposition over another. For example, by asking only those who have bought Chevrolets which car is better, Chevy or Ford, you will most likely get overwhelming support for Chevy. By asking only those who have made a full recovery after back surgery whether back surgery is a good option for people with back problems, you are likely to get overwhelming support for the surgery. If you ask only those whose back problems continued after the surgery, you are likely to get an overwhelming response against the surgery.
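A toy simulation makes the Chevy/Ford example concrete. The ownership and loyalty numbers below are invented purely for illustration; the point is only that polling one group of owners produces a lopsided answer that the whole population does not support.

```python
# Toy illustration of selection bias: the numbers are invented.
import random

random.seed(0)

# Build a population split evenly between Chevy and Ford owners,
# where 90% of each group prefers the brand they bought.
population = [{"owns": "Chevy", "prefers": "Chevy" if random.random() < 0.9 else "Ford"}
              for _ in range(5000)]
population += [{"owns": "Ford", "prefers": "Ford" if random.random() < 0.9 else "Chevy"}
               for _ in range(5000)]

def pct_prefer_chevy(sample):
    return 100 * sum(p["prefers"] == "Chevy" for p in sample) / len(sample)

print(f"Whole population:  {pct_prefer_chevy(population):.0f}% prefer Chevy")    # ~50%
chevy_owners = [p for p in population if p["owns"] == "Chevy"]
print(f"Chevy owners only: {pct_prefer_chevy(chevy_owners):.0f}% prefer Chevy")  # ~90%
```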

Selection bias partly explains why there are reports of many satisfied customers who go to psychics, tarot card readers, palm readers, faith healers, acupuncturists, homeopaths, and others who provide bogus treatments such as mistletoe for cancer. The unsatisfied customers are either not asked for their opinion, they're too embarrassed to give it, or they're dead.

Edzard Ernst, M.D., who was trained in various non-conventional medical therapies, provides an example of selection bias that occurred while he was studying the therapeutic effects of mistletoe injections on cancer patients. He was told that the effect would be a lessening of suffering.

Whenever I gave mistletoe injections, the results seemed encouraging. But young doctors are easily impressed, and I was no exception. What I didn't appreciate then was a relatively simple phenomenon: the hospital where I worked was well known for its approach across Germany; patients went there because they wanted this type of treatment. They were desperate and had very high expectations - and expectations can often move mountains, particularly in relation to subjective experience and symptoms. We call this "selection bias". It can give the impression that a therapy causes a positive health outcome even when it has no positive action of its own.

Patients receiving questionable treatments or clients seeking advice from questionable soothsayers are highly motivated to be helped and to have the healer or reader succeed. Such people are often extremely generous in their efforts to personally validate the words, images, or advice of the reader/healer. Some will even assent to claims they know are false, as one of Gary Schwartz's subjects did with a medium who got the subject to agree that her husband was dead when in fact he was still alive. Schwartz engaged in selection bias again when, in papers published in the Journal of the Society for Psychical Research supporting the hypothesis of survival of consciousness after death, he omitted much of his data. In his book, The Afterlife Experiments, he describes numerous subjects in his experiments who are conspicuously not mentioned in the published papers on those experiments. Rupert Sheldrake showed selection bias when he omitted 40% of his data in a study claiming to provide statistical evidence for the psychic abilities of a parrot.

The best way to avoid selection bias regarding questionable treatments and various divination techniques is to randomize samples, use control groups, and run double-blind experiments. The best way to reduce selection bias by scientists is to expose its occurrence and publicly chastise offenders.

Skeptics and parapsychologists have accused each other of selection bias in determining which studies to include in the ganzfeld meta-analysis. Skeptic Ray Hyman did the first meta-analysis of 42 ganzfeld experiments and found no evidence of ESP. Parapsychologist Charles Honorton, on the other hand, found evidence of "anomalous information transfer." In 1994, Daryl Bem and Honorton published the results of a meta-analysis of 28 ganzfeld studies and once again found evidence for anomalous information transfer. In 1999, Julie Milton and Richard Wiseman published their own meta-analysis of ganzfeld studies and concluded that "the ganzfeld technique does not at present offer a replicable method for producing ESP in the laboratory." Much of the disagreement in analysis centered on what criteria to use in deciding which studies to select for the meta-analysis.

selection bias in polls and surveys

Researchers can bias the results of polls and surveys through the way they select subjects for their study. Selecting subjects from a non-representative section of a population is a common way to bias a sample, and using samples that are too small to be representative is another frequent error. As Daniel Kahneman observes:
Misconceptions of chance are not limited to naive subjects. A study of the statistical intuitions of experienced research psychologists revealed a lingering belief in what may be called the “law of small numbers,” according to which even small samples are highly representative of the populations from which they are drawn. The responses of these investigators reflected the expectation that a valid hypothesis about a population will be represented by a statistically significant result in a sample with little regard for its size. As a consequence, the researchers put too much faith in the results of small samples and grossly overestimated the replicability of such results. In the actual conduct of research, this bias leads to the selection of samples of inadequate size and to overinterpretation of findings. Kahneman, Daniel (2011-10-25). Thinking, Fast and Slow (pp. 422-423). Macmillan. Kindle Edition.
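A few lines of simulation show what the "law of small numbers" error looks like in practice. The 30% figure below is invented; the point is only that the same question, put to small samples, gives wildly different answers from sample to sample, while large samples stay close to the true value.

```python
# Illustration of how unreliable small samples are; the true rate is invented.
import random

random.seed(0)
TRUE_RATE = 0.30   # pretend 30% of the population holds some opinion

for n in (10, 100, 10_000):
    estimates = []
    for _ in range(5):                      # five repeated polls of size n
        sample = [random.random() < TRUE_RATE for _ in range(n)]
        estimates.append(sum(sample) / n)
    print(f"n = {n:>6}: " + ", ".join(f"{e:.0%}" for e in estimates))
# The n = 10 polls swing widely; the n = 10,000 polls all land near 30%.
```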
Alfred C. Kinsey's famous studies on sexual behavior in the 1950s have been repeatedly cited as the basis for the claim that 10% of the population is gay. This statistic has been widely cited in both the mass media and scientific publications, though it is based on a biased selection of samples. Numerous studies have been done since Kinsey's work was published, and these later studies put the percentage of adults who describe themselves as exclusively gay much lower than the 10% figure. Some have found the rate to be between one and two percent.* It should be noted, however, that "survey research methodologies often result in underreporting of stigmatized behaviors."*

Kinsey gathered his data, in part, by distributing questionnaires to prisoners and to people who attended his lectures on sexuality, neither of which was likely to be a good cross section of Americans (Carroll 2005: 140). For his studies on male sexuality, "he interviewed only white men, and these respondents were disproportionately from lower socioeconomic classes."*

A study that considered attraction to the same sex in measuring homosexuality found "8.7, 7.9, and 8.5% of males and 11.1, 8.6, and 11.7% of females in the United States, the United Kingdom, and France, respectively, report some homosexual attraction but no homosexual behavior since age 15."*

In 1994, sociologist Edward Laumann headed a team of sociologists that studied U.S. sexual behavior. They interviewed a representative sample of the U.S. population between the ages of 18 and 59. Laumann found that over a five-year period, 4.1 percent of U.S. men and 2.2 percent of U.S. women had sex with someone of their own sex. If the time period is extended to include their entire lives, these totals increase to 7.1 percent of the men and 3.8 percent of the women.*

Paul and Kirk Cameron reported in 1998: "The 1994 University of Chicago 'definitive' survey of adults estimated prevalence of homosexuality among males at 2.8% and among females at 1.4%. Corrected for the exclusion of those over the age of 59 years, the estimates should be 2.3% and 1.2%."* A study in Britain in 2000 found that about 2.6% of men and women reported having had a same-sex partner within the previous five years and 8.4% of the men and 9.7% of the women reported having had at least one sexual experience with a member of the same sex.*

One wonders, however, whether anything approaching unbiased data is possible for determining what percentage of any human population is homosexual. Given the long history of religious prohibition of homosexuality and the widespread revulsion toward homosexual behavior that has often led to torment and persecution, it is likely that researchers in this area will be motivated by something other than a genuine search for the truth. Results will differ depending on how one defines 'gay,' 'lesbian,' and 'homosexual'. Methods of gathering data samples will vary widely, and the participants in such studies may not be highly motivated to reveal much about their sex lives.

There is some irony in the fact that the Kinsey studies are cited as the source of the statistic that 10% of the population is gay. As Michael Shermer notes, Kinsey made it clear that he did not believe human males "represent two discrete populations, heterosexual and homosexual." Kinsey maintained that "it is a fundamental of taxonomy that nature rarely deals with discrete categories. Only the human mind invents categories and tries to force facts into separate pigeon-holes" (Shermer 2005: 246). Nature has a bias toward variation. The idea that people should fall into neat categories such as 'gay' and 'straight', or even 'male' and 'female', is not consistent with the lessons of evolution. Any study that creates such false dichotomies will be misleading.

Monday, June 11, 2012

post hoc fallacy

An astrologer asks "Do you think it was a coincidence that the tsunami hit Japan just a few days after the moon was closer to the Earth than it has been in years?" He thinks that the moon somehow caused the tsunami. Why? I don't know, but it is true that the one came after the other. (An astronomer asks, who cares what an astrologer thinks about the moon and earthquakes?)

A dowser finds water or a golf ball after using his dowsing rod. He claims the rod led him to the find. Did it? I doubt it, but it is true that one thing came after the other.

A gambler blows on the dice before he rolls them. They come up a winner. He thinks his blowing on the dice affected the outcome of the roll. Did it? Probably not, but it is true that one thing came after the other.

A woman claims that a vaccine caused her child's autism. Why? I don't know, but it is true that the diagnosis came after the shot.

A man claims that his knee pain diminished significantly after receiving acupuncture treatment. What caused his pain to lessen? I don't know, but he thinks it was the acupuncture.

A woman's headache went away after taking a homeopathic potion for headaches. Why did her headache go away? I don't know, but she thinks it was due to the homeopathic pill. It is true, though, that the one came after the other.

Desiree Jennings was a young cheerleader when she became the poster child for the anti-vaccination movement based on her claim that a flu shot caused her dystonia. Her evidence? She started showing symptoms ten days after she got the shot. She once had her own website (www.desireejennings.com) where she wrote:
On August 23, 2009, I received a seasonal flu vaccine at a local grocery store that drastically, and potentially irreversibly, altered my future. In a matter of a few short weeks I lost the ability to walk, talk normally, and focus on more than one stimuli [sic] at a time. Whenever I eat I know, without fail, that my body will soon go into uncontrollable convulsions coupled with periods of blacking out.
Each day is a battle to control the symptoms triggered by the flu vaccine and a reminder that my life will never be the same. I set up this site to tell my story and warn people of the neurological side effects that can result from vaccinations; especially knowing that in the majority of cases, these stories are seldom heard outside of immediate families and friends.
I hope everyone that reads my story will heed my warning and think very carefully, including seeking out consultations with your family doctor, before making the decision to receive a vaccination.
Jennings claims that about ten days after she received the seasonal flu vaccine, she developed a severe respiratory illness that required hospitalization. Shortly after that she had difficulty speaking and walking, with involuntary muscle contractions and contortions. Her symptoms were relieved, she claimed, by walking backward or by running.

There is no known way that the flu (which is what probably hospitalized her) or a flu vaccine could cause dystonia and there is not a single case in the medical literature of such a thing ever happening. Still, there is always a first time, I suppose. But getting bogged down in that discussion is a red herring because it is very unlikely that Jennings suffered from dystonia, much less that the flu vaccine caused her symptoms.

The post hoc ergo propter hoc (after this therefore because of this) fallacy is based on the mistaken notion that because one thing happens after another, the first event was a cause of the second event. Post hoc reasoning is the basis for many superstitions and erroneous beliefs. The examples of poor causal reasoning listed above were each probably combined with preconceived ideas about such things as a causal connection between astronomical events and tsunamis, dowsing and finding things, superstitious actions and outcomes on dice or cards, vaccines and autism or other disorders, acupuncture and pain relief, and homeopathy and headaches.

Post hoc reasoning is one of the most common cognitive biases and one of the more difficult to overcome because the personal experience of immediacy seems to intuitively justify the making of a causal connection. After all, when you hit your finger with a hammer or bump your head on a kitchen cabinet door, you know what caused the pain! When you watch somebody else do the same thing, you're not surprised that you don't feel any pain.

As noted above, if people already believe in a causal connection between two unrelated things, it is natural for them to confirm their bias by seeing a sequence of events as an example of that belief in action. Contrary to what some people might think, drawing hasty conclusions about causal connections is not a sign of stupidity or idiocy. It is natural and the norm, which is why it is so difficult to overcome.

It may be true that your engine blew up two days after you lent your car to your brother-in-law, but it isn't necessarily the case that he did anything to the car that caused the engine to fail.

It may be true that you aced your physics test after forgetting to shave, but you would be foolish to think that not shaving had anything to do with your score on the test.

Just because one thing happens after another does not mean that there is any causal connection between them. On the other hand, it is not a coincidence that your car won't start after you filled the gas tank with water. Sometimes when one thing happens after another, it's because the first thing caused the second. What justifies making a causal connection is knowledge. That knowledge can come from experience or from experiments. When a doctor prescribes a medication for a urinary tract infection, she bases her treatment on knowledge. When the patient gets better and thinks the medicine helped in her recovery, she is not committing the post hoc fallacy because her connection between the two is justified. Unfortunately, many people think they have knowledge (about dowsing, vaccines, astrology, etc.) when all they really have is misinformation.

Scientists have developed various ways to test causal claims. For example, many people believe that vaccines cause autism. Yet, study after study has not found what should be found if vaccines cause autism. Vaccinated children should have a significantly higher rate of autism than non-vaccinated children, but they don't. Nor do dowsers find water at a greater than chance rate when tested under controlled conditions.
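Here is a minimal sketch of what such tests look like in code. The counts are hypothetical, chosen only to illustrate the logic, and the chi-squared and binomial tests are standard choices rather than the methods of any particular study. If vaccines caused autism, the vaccinated group's rate should be clearly higher and the first test would flag the difference; if a dowser really found water, he should beat the chance rate under controlled conditions.

```python
# Hypothetical counts, chosen only to illustrate the logic of such tests.
from scipy import stats

# Vaccinated vs. unvaccinated children: [cases of autism, no autism]
vaccinated   = [150, 99_850]   # 0.15% rate (invented)
unvaccinated = [ 15,  9_985]   # 0.15% rate (invented)
chi2, p, dof, expected = stats.chi2_contingency([vaccinated, unvaccinated])
print(f"vaccine/autism: p = {p:.2f}")        # large p-value: no detectable difference

# A dowser given 100 trials with a 1-in-10 chance of guessing right each time.
hits = 13                                    # invented result
result = stats.binomtest(hits, n=100, p=0.10, alternative="greater")
print(f"dowsing: p = {result.pvalue:.2f}")   # large p-value: consistent with chance
```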

post hoc parables

Andy's story: For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to an acupuncturist and got some relief but it didn't last. A friend recommended the alkaline diet. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried aromatherapy, dolphin therapy, and therapeutic touch. Still, I suffered. I finally got relief after six years from a chiropractor. You are an idiot for criticizing chiropractors. Chiropractic was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Betty's story: For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended the alkaline diet. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried aromatherapy, dolphin therapy, and therapeutic touch. Still, I suffered. I finally got relief after six years from an acupuncturist. You are an idiot for criticizing acupuncture. Acupuncture was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Chuck's story: For years I suffered a debilitating pain in my neck. I couldn't work and even the slightest activity (like brushing my teeth) was painful. My science-based medical doctor sent me to a psychiatrist who prescribed pills. They didn't do any good. I went to a chiropractor and got some relief but it didn't last. A friend recommended the alkaline diet. At first I thought this was the answer, but again it didn't last. Another friend thought her prayer group could cure me. I went to several sessions and had hands laid all over me but to no avail. I tried acupuncture, dolphin therapy, and therapeutic touch. Still, I suffered. I finally got relief after six years from an aromatherapist. You are an idiot for criticizing aromatherapy. Aromatherapy was the only thing that relieved me of my pain. I am now able to work and brush my teeth with minimal pain.

Monday, June 4, 2012

communal reinforcement

"We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers."--Daniel Kahneman

Communal reinforcement is the process by which a claim becomes a strong belief through repeated assertion by members of a community. The process is independent of whether the claim has been properly researched or is supported by empirical data significant enough to warrant belief by reasonable people. Often, the mass media contribute to the process by uncritically supporting the claims. More often, however, the mass media provide tacit support for untested and unsupported claims by saying nothing skeptical about even the most outlandish of claims, such as that a ballroom dance instructor or a telephone operator hears clips from another dimension that are messages from ghosts.

Communal reinforcement explains how entire nations can pass on ineffable gibberish (aka religious claims about virgin births, godmen, miracles, and the like) from generation to generation. It also explains how testimonials reinforced by other testimonials within the community of therapists, psychologists, theologians, politicians, talk show hosts, etc., can supplant and be more powerful than scientific studies or accurate gathering of data by disinterested parties. When communal reinforcement joins forces with the tendency to defer to authority, the result can be deadly. Recall the history of quack "cures" or "harmless" paints laced with radioactive material that were popular in the early part of the last century. Recall also the history of the belief in and treatment of witches, as well as the belief in demonic possession and exorcism.

Communal reinforcement explains, in part, why about half of all American adults deny evolution occurred and believe that Abraham's god created the universe in six days,* that he made the first man and woman out of clay, and that a snake talked the woman into disobeying an order from Abraham's god thereby causing all our problems. Every cult leader knows the value of communal reinforcement combined with isolating cult members from contrary ideas.

If you find yourself continually praising people who agree with you and who are more articulate than you are at expressing your hatreds and criticizing and ridiculing those who disagree with you, you may be addicted to communal reinforcement, which can be a mood enhancer for people who are too lazy or brainwashed to think for themselves.