tag:blogger.com,1999:blog-87785065457807067932024-03-29T00:33:48.772-07:00Unnatural Acts that can improve your thinkingA follow-up to the book "Unnatural Acts: Critical Thinking, Skepticism and Science Exposed!" by Robert Todd Carroll, creator of The Skeptic's Dictionary. The blog will offer irregular postings about cognitive biases, logical fallacies, and illusions.Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.comBlogger62125tag:blogger.com,1999:blog-8778506545780706793.post-57993653281417673032013-11-15T06:33:00.001-08:002013-11-26T18:13:02.220-08:00The Critical Thinker's Dictionary<i>The Critical Thinker's Dictionary: Biases, Fallacies, and Illusions and what you can do about them</i> is now available from Amazon, Kobo, and Barnes & Noble as an e-book and from Lulu as a paperback. Click <a href="http://www.skepdic.com/news/newsletter1211.html" target="_blank">here</a> for more information about ordering.<i style="mso-bidi-font-style: normal;"><span style="mso-fareast-font-family: Calibri;"> </span></i><br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://skepdic.com/graphics/ctdcovericon.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://skepdic.com/graphics/ctdcovericon.jpg" width="126" /></a></div>
<br />
<br />
<i style="mso-bidi-font-style: normal;"><span style="mso-fareast-font-family: Calibri;">The
Critical Thinker’s Dictionary</span></i><span style="mso-fareast-font-family: Calibri;"> grew out of a suggestion made by Harriet Hall, M.D., in a review of
my book <i style="mso-bidi-font-style: normal;">Unnatural Acts: Critical
Thinking, Skepticism, and Science Exposed!</i> </span><span style="mso-fareast-font-family: Calibri;">That book concluded with a chapter that advised the reader to study 59 cognitive biases, fallacies, and illusions that I briefly described. This blog was set up with the goal of posting expansions on those descriptions. So, every Monday for 59 weeks I tackled one of the biases, illusions, or fallacies and posted it here. Those posts have been rewritten and a few more topics have been added to produce </span><span style="mso-fareast-font-family: Calibri;"><i style="mso-bidi-font-style: normal;">The
Critical Thinker’s Dictionary</i>. </span><br />
<div class="MsoNormal" style="margin-bottom: 10.0pt; text-indent: 0in;">
<span style="mso-fareast-font-family: Calibri;">A guiding principle of <i>Unnatural Acts </i>and <i>The Critical Thinker's Dictionary</i> is that critical thinking does not come naturally. Not only must
we work at becoming critical thinkers, but doing so goes against our nature.
Evolution has provided our species with a magnificent brain, capable of
extraordinary things like self-consciousness, memory, facial recognition, and
thousands of other “miracles.” But we evolved to think quickly, a necessity in
the environments our species found itself in during most of its 100,000-year
history. There are times in our modern world when quick thinking is needed,
but there are also many times when we should slow things down. Sometimes we are
better off if, instead of relying on our instinctive, natural way of thinking
about things, we take some time to do some research, to reflect, and to discuss
before making a judgment. </span></div>
<div align="left" class="MsoNormal" style="margin-bottom: 10.0pt; text-align: left; text-indent: 0in;">
<span style="mso-fareast-font-family: Calibri;"> </span>'Know Thyself' advised the ancient Greek sages at a time when
philosophers defined us as rational animals. Rationality was thought of
as an ideal largely achievable by controlling the emotions and avoiding
logical fallacies. Today, we know better. Biology and neuroscience have
exposed the brain as a great deceiver. Unconscious biases drive us to
believe and do things that the conscious mind explains in self-serving
stories, making us appear more rational to ourselves than we really are.
Modern science has taught us that rationality involves much more than
just controlling the emotions and avoiding fallacies. Today’s rational
animal—what we call the critical thinker—must understand the unconscious
biases that are directing many of our most important judgments and
decisions. <i>The Critical Thinker’s Dictionary</i> explores the insights of
ancient and modern philosophers along with the latest findings in such
fields as neuroscience and behavioral economics to lay out the many
obstacles and snares that await anyone committed to a rational life. <i>The
Critical Thinker’s Dictionary</i> isn’t a collection of dry definitions,
but a colorful, three-dimensional portrait of the major obstacles to
critical thinking and what we can do to overcome them.</div>
Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com103tag:blogger.com,1999:blog-8778506545780706793.post-67955071902452174912013-02-04T06:00:00.000-08:002013-02-04T19:07:21.226-08:00the wisdom of not thinking too muchThis will be the last blog post for <i>Unnatural Acts that can improve your thinking</i>. Instead of introducing another cognitive bias or logical fallacy, this final post will be devoted to considering when wisdom requires that we stop thinking altogether or that we stop gathering data to reflect on.<br />
<a name='more'></a><br />
<br />
We know that we have evolved to make quick decisions and that following our instincts has served the species pretty well, at least in terms of survival. The other entries in this blog have focused on the cognitive short cuts and logical fallacies that often accompany thinking that comes naturally. The focus has been on the importance of reflective thinking for making good judgments and coming to decisions that we won't regret. But there are times when a person will do better to stop thinking, stop reflecting, and to simply act. Not everyone arrives at this stage where the wise thing to do is to put critical thinking aside. Those who do have spent many years gaining knowledge, expertise, or performing ability. Their training, practice, and the skillful development of their talents have eliminated the need for reflection in order to do the right thing or make the right call. When it comes time to sing that aria before an adoring audience or swing at a 98 mph fastball in front of 50,000 baseball fanatics, thinking about what you are doing will hinder rather than help you succeed. When you have analyzed a problem to death in chemistry or physics, sometimes the best thing to do is to stop thinking about the problem and divert your attention to something else. There is no guarantee, but sometimes unconscious processes will provide you with the solution out of the blue. When an unexpected situation arises for which none of your years of training or experience has prepared you, following your instincts may be your best policy. All of these situations presuppose that you are extremely knowledgeable, have many years of experience, or have reached a performance level recognized as the highest level in your field. <a href="http://en.wikipedia.org/wiki/Herbert_Simon">Herbert Simon</a>, Nobel Prize winner in economics, put it this way: for the true expert, "intuition is nothing more than recognition." 
For the true expert, the situation provides cues and the cues give "the expert access to information stored in memory, and the information provides the answer" (quoted in Daniel Kahneman, <a href="http://www.amazon.com/exec/obidos/ISBN=0374275637/roberttoddcarrolA/"><i>Thinking, Fast and Slow</i></a>, p. 11).<br />
<br />
Experts in fields where reliable predictions occur with some regularity--such as physics, math, and chemistry--should be looked at differently than experts who make predictions in low-validity fields where long-term predictions are just guesswork because of the complexity of the system they are trying to master. Political and economic experts, for example, actually do worse than dart-throwing monkeys when it comes to making long-term predictions. (See <a href="http://en.wikipedia.org/wiki/Philip_Tetlock">Philip Tetlock</a>, <a href="http://www.amazon.com/exec/obidos/ISBN=0691128715/roberttoddcarrolA/"><i>Expert Political Judgment: How Good Is It? How Can We Know?</i></a> [2005]. Tetlock is a psychologist at the University of Pennsylvania who studied expert predictions over a twenty-year period.) The intuition of such experts is about as reliable as the intuition of the "average citizen" when asked to make long-term predictions about politics or the economy. It should go without saying that having high subjective confidence in one's knowledge or intuition is not a good sign of being accurate or wise.<br />
<br />
People who are ignorant and have no experience and little talent but who follow their instincts are as likely to make a bad decision as to stumble upon a good one. But people who have vast amounts of knowledge, experience, or performing history should do little or no thinking while acting and should trust their instincts when working in their field of expertise. Outside their fields of expertise, of course, experts and talented artists are as vulnerable to the snares and lures of uncritical thinking as the rest of us.<br />
<br />
There are also times when each of us should stop gathering more information to help us make a decision or judgment. Information overload can hinder our ability to make good judgments at times. Often we are better off making a decision by considering only a few obviously
important factors rather than by introducing as many pertinent items as we can come up with. The more variables we bring into play, the greater our chances of giving more weight to minor items and less weight to important items. This point was made clear by Daniel Kahneman and Amos Tversky in experiments that showed giving people more information about a subject led them to poorer decisions. One example has become a classic. Subjects are told that Linda is "thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations." Then they are asked which of several statements they thought would be true of Linda. In test after test, subjects thought it more likely that Linda was a feminist bank teller than that she was a bank teller. There is a fundamental logical error here, which Kahneman and Tversky called the <i>conjunction fallacy</i>. (A conjunction is the joining of two statements with words like 'and' or 'but'.) It should be obvious that the probability of both conjuncts being true can never be greater than the probability of a single conjunct being true (Daniel Kahneman, <a href="http://www.amazon.com/exec/obidos/ISBN=0374275637/roberttoddcarrolA/"><i>Thinking, Fast and Slow</i></a>, p. 156). These conjunction error studies have been replicated by Christopher Hsee and John List with different scenarios presented to test subjects but with results identical to Kahneman and Tversky's.<br />
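The conjunction rule the subjects violate can be checked mechanically. The sketch below is mine, not from Kahneman and Tversky's studies, and the tiny population is invented purely for illustration; it simply counts outcomes to confirm that "feminist bank teller" can never be more probable than "bank teller":

```python
# Toy check of the conjunction rule: P(A and B) can never exceed P(A).
# Each hypothetical person is a pair: (is_bank_teller, is_feminist).
population = [
    (True, True), (True, False), (False, True),
    (False, False), (False, True), (True, True),
]

def prob(predicate):
    """Fraction of the population satisfying the predicate."""
    return sum(1 for person in population if predicate(person)) / len(population)

p_teller = prob(lambda p: p[0])
p_teller_and_feminist = prob(lambda p: p[0] and p[1])

# "Feminist bank teller" cannot be more probable than "bank teller".
assert p_teller_and_feminist <= p_teller
print(p_teller, round(p_teller_and_feminist, 3))  # 0.5 0.333
```

Whatever made-up population you substitute, the assertion holds, because everyone who satisfies the conjunction also satisfies each conjunct alone.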
<br />
Gathering more and more information can give one the <a href="http://59ways.blogspot.com/search/label/illusion%20of%20understanding">illusion of understanding</a>. American psychologist and philosopher <a href="http://en.wikipedia.org/wiki/Paul_Meehl">Paul Meehl</a> compared the predictions of trained counselors with those of a simple algorithm that used just two or three variables and found that the simple algorithm was significantly more accurate than the experts. A typical test might involve trying to predict the grade point average for various freshmen at the end of the school year. A simple formula that looked only at high school GPA and the results of one standardized college entrance test was compared with the predictions of counselors who had interviewed each student for 45 minutes and also had access to the results of several standardized tests and a four-page personal statement from each student. In that study, the simple algorithm outperformed 79 percent of the experts. American economist <a href="http://en.wikipedia.org/wiki/Orley_Ashenfelter">Orley Ashenfelter</a> did a similar experiment involving predicting prices for fine Bordeaux wines. He pitted the experts against a simple formula that considered only weather data: the average temperature over the summer growing season, the amount of rain at harvest-time, and the total rainfall during the previous winter. Ashenfelter's formula outperformed the world-renowned experts. (Ashenfelter's work is discussed in Daniel Kahneman, <a href="http://www.amazon.com/exec/obidos/ISBN=0374275637/roberttoddcarrolA/"><i>Thinking, Fast and Slow</i></a>, p. 224ff.)<br />
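The kind of "simple formula" Meehl and Ashenfelter used can be sketched as a fixed linear rule over a couple of variables. The weights and the example student below are invented for illustration and are not Meehl's actual coefficients; the point is only the shape of such a rule, which never gets tired, biased, or distracted by extra information:

```python
# Sketch of a Meehl-style fixed prediction rule (illustrative weights only).
def predict_college_gpa(hs_gpa: float, test_percentile: float) -> float:
    """Fixed linear rule: high-school GPA plus one entrance-test score."""
    intercept, w_gpa, w_test = 0.2, 0.6, 0.01  # invented for illustration
    return intercept + w_gpa * hs_gpa + w_test * test_percentile

# Hypothetical student: 3.5 high-school GPA, 80th-percentile test score.
print(round(predict_college_gpa(3.5, 80), 2))  # 3.1
```

The counselors in the study had interviews, test batteries, and personal statements to draw on; the rule had two numbers, and it still won.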
<br />
In matters of personal taste, the less information the better. Just drink the wine, taste the jam, let your feelings tell you which print you prefer. Don't be influenced by how much the wine costs. Don't get hung up on the various qualities one might list to distinguish different jams. Don't get too many details about the various prints you have to choose from. If the one you like is affordable to you, buy it no matter what your friends or the critics say.<br />
<br />
In decisions that are more or less trivial in the big picture--this would include everything from buying a new pen to deciding where to go on vacation or what new couch to buy--the less information the better. We've all heard the expression "paralysis by analysis." When a decision is a minor one, the wisest path is often to focus on two or three important points, rather than drum up a list of every pro and con you can think of and then apply your list to dozens of possible choices.<br />
<br />
In decisions that are monumental, such as the decision to send troops to fight in a foreign country or to take a loved one off life support, one should get as much information as possible from trustworthy sources who aren't likely to be biased. In such cases, we should consult both those who are likely to agree with us and those who are likely to disagree with us. Important decisions require diversity of input. In the end, the evidence may seem to weigh equally for going to war and not going to war or for taking a loved one off life support and keeping a loved one on life support. You may have no choice but to rely on your gut feeling at that point. (Cf. William James's "<a href="http://www.gutenberg.org/files/26659/26659-h/26659-h.htm">The Will to Believe</a>"). The only other alternative I can see is to take a vote among one's advisers or family members (or whatever group is relevant to the decision-making process) and go with whatever the majority thinks.<br />
<br />
So, while wisdom requires devotion to critical thinking, it also requires knowing when to turn off critical thinking and rely on intuition, gut feeling, instinct, or whatever you choose to call that non-reflective preference percolating in our ever-fascinating brains.Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com304tag:blogger.com,1999:blog-8778506545780706793.post-32692481128427331332013-01-28T06:00:00.000-08:002013-01-29T08:03:47.010-08:00change blindnessChange blindness is the failure to detect non-trivial changes in the visual field. The failure to see things changing right before your eyes may seem like a design fault, but it is actually a sign of evolutionary efficiency.<br />
<br />
Examples may be seen by clicking <a href="http://www.youtube.com/watch?v=Qb-gT6vDrmU">here</a>, <a href="http://www.youtube.com/watch?v=voAntzB7EwE&eurl=">here</a>, <a href="http://www.youtube.com/watch?v=1nL5ulsWMYc&list=UUoUA-CpKaFCCV2Uz__qNJZw&index=2">here</a>, <a href="http://www.pbs.org/wgbh/nova/body/change-blindness.html">here</a>, and <a href="http://nivea.psycho.univ-paris5.fr/Mudsplash/Nature_Supp_Inf/Movies/Movie_List.html">here</a>.<br />
<br />
The term 'change blindness' was introduced by <a href="http://www.psych.ubc.ca/~rensink/index.html">Ronald Rensink</a> in 1997, although research in this area had been going on for many years. Experiments have shown that dramatic changes in the visual field often go unnoticed whether they are brought in gradually, flickered in and out, or abruptly brought in and out at various time intervals. The implication seems to be that the brain requires few details for our visual representations; the brain doesn't store dozens of details to which it can compare changes (Simons and Levin 1998). The brain is not a video recorder; it does not constantly process all the sense data available to it but is inattentive to much of that data, at least on a conscious level.<br />
<a name='more'></a><br />
<br />
Change detection in films is notoriously poor when the change occurs during a cut or pan, as demonstrated by the <a href="http://www.youtube.com/watch?v=voAntzB7EwE&eurl=%22">color-changing card trick video </a>and a number of other videos where a different actor appears after a cut, without the change being noticed by most viewers. Some experiments have shown that a person may be talking to someone (behind a counter, for example) who leaves (bends down behind the counter or exits the room) and is replaced by a different person, without the change being noticed. <br />
<br />
Apparently, change blindness is due to the efficient nature of our evolved visual processing system, but it also opens the door to being deceived, much to the delight of magicians and sleight-of-hand con artists.<br />
<br />
<b>sources</b><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0307459659/roberttoddcarrolA/">Chabris, Christopher and Daniel Simons. 2010. <i>The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us</i>. Crown.</a> See also <a href="http://www.skepdic.com/refuge/theinvisiblegorilla.html">my review</a> of this book.<br />
<br />
<a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1270068">Davis, Deborah et al. 2007. ‘Unconscious Transference’ Can Be an Instance of ‘Change Blindness’</a><br />
<br />
<a href="http://www.psych.ubc.ca/~rensink/publications/download/PsychSci97-RR.pdf">Rensink, Ronald A., J. Kevin O'Regan, and James J. Clark. (1997). To see or not to see: the need for attention to perceive changes in scenes, <i>Psychological Science</i> 8 (5): 368-373.</a><br />
<br />
<a href="http://psych.unl.edu/mdodd/Psy498/simonslevin.pdf">Simons, Daniel J. & Daniel T. Levin. (1998). Failure to detect changes to people during a real-world interaction, <i>Psychonomic Bulletin and Review</i> 5: 644-649.</a><br />
<br />Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com77tag:blogger.com,1999:blog-8778506545780706793.post-28995564130246640632013-01-21T06:00:00.000-08:002013-01-21T17:14:01.358-08:00bias blind spotThe <i>bias blind spot</i> was described by Princeton University psychologist Emily Pronin and her colleagues (2002) as the tendency to perceive cognitive and motivational biases much more in others than in oneself. The bias blind spot is a <i>metabias</i> since it refers to a pattern of inaccurate judgment in reasoning <i>about </i>cognitive biases. <br />
<a name='more'></a><br />
<br />
In one study, Pronin et al. (2002) found that people tend to rate themselves as less subject to biases than others. In another study, “participants … who showed better-than-average bias” were instructed in how biases operate at the unconscious level. Nevertheless, 63% insisted that their self-assessments were accurate and objective. <i>Better-than-average bias</i> is the <a href="http://en.wikipedia.org/wiki/Illusory_superiority">tendency of individuals to rank themselves as better than average on just about anything you ask them</a>.
For example, 74% of all managers think they are better than average at
managing. (As Dilbert’s boss noted: this means that 26% of managers
don’t know that they’re better than average. To which Dilbert replied: <i>you’re all in the top 110%</i>. See <a href="http://www.dilbert.com/2013-01-18/">Scott Adams's cartoon for January 18, 2013</a>.)<br />
<br />
Participants in another study “reported their peer’s self-serving attributions regarding test performance to be biased but their own similarly self-serving attributions to be free of bias.”<br />
<br />
In another study, Pronin and Matthew Kugler (2006) argue that the bias blind spot “involves the value that people place, and believe they should place, on introspective information (relative to behavioral information) when assessing bias in themselves versus others. Participants considered introspective information more than behavioral information for assessing bias in themselves, but not others.” A consequence of the bias blind spot is that people tend to think their own beliefs are accurate and their sources trustworthy, but those who hold different views are biased and their sources are not trustworthy (Ehrlinger et al. 2005). <br />
<br />
Finally, a study by Richard West et al. (2012) found that “being free of the bias blind spot does not help a person avoid the actual classic cognitive biases.” They also found that higher cognitive ability did not correlate with a smaller bias blind spot. (Note: West et al. did <i>not </i>find that “smarter people are more vulnerable to these thinking errors,” as Jonah Lehrer claimed in a <i>New Yorker</i> article [“Why Smart People Are Stupid,” June 12, 2012]. In fact, the opposite is true: Most cognitive biases are negatively correlated with cognitive sophistication, as West et al. note in their article.) <br />
<br />
<b>Sources </b><br />
<br />
Ehrlinger, Joyce, Thomas Gilovich and Lee Ross. 2005. Peering Into the Bias Blind Spot: People’s Assessments of Bias in Themselves and Others. <i>Personality and Social Psychology Bulletin</i>. Vol. 31, No. 5, 680-692.<br />
<br />
<a href="http://cbdr.cmu.edu/seminar/pronin.pdf">Pronin, E. & Matthew B. Kugler. 2006. Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot. <i>Journal of Experimental Social Psychology.</i> Vol 43, Issue 4, 565-578. </a><br />
<br />
Pronin, E., Gilovich, T., & Ross, L. 2004. Objectivity in the eye of the beholder: divergent perceptions of bias in self versus others. <i>Psychological Review</i>. 111, 781–799. <br />
<br />
Pronin, E., Lin, D. Y., & Ross, L. 2002. The bias blind spot: Perceptions of bias in self versus others. <i>Personality and Social Psychology Bulletin</i>. 28, 369-381. <br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0805091254/roberttoddcarrolA/">Shermer, Michael. 2011. <i>The Believing Brain: From Ghosts and Gods to Politics and Conspiracies---How We Construct Beliefs and Reinforce Them as Truths</i>. Times Books.</a> <br />
<br />
West, Richard, Russell Meserve and Keith Stanovich. 2012. Cognitive sophistication does not attenuate the bias blind spot. <i>Journal of Personality and Social Psychology</i>. September; 103(3): 506-19. <br />
<br />
Wilson, T. D., Centerbar, D. B., & Brekke, N. 2002. Mental contamination and the debiasing problem. In<a href="http://www.amazon.com/exec/obidos/ISBN=0521796792/roberttoddcarrolA/"> D. Griffin & T. Gilovich (Eds.), <i>Heuristics and biases: The psychology of intuitive judgment</i> </a>(pp. 185–200). New York: Cambridge. Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com124tag:blogger.com,1999:blog-8778506545780706793.post-70502050378073262592013-01-14T08:03:00.003-08:002013-01-15T18:45:10.966-08:00suppressed evidenceA cogent argument presents <i>all</i> the relevant evidence. An argument that omits relevant evidence appears stronger and more cogent than it is.<br />
<br />
The <i>fallacy of suppressed evidence</i> occurs when an arguer intentionally omits relevant data. This is a difficult fallacy to detect because we often have no way of knowing that we haven't been told the whole truth. <br />
<br />
Many advertisements commit this fallacy. Ads inform us of a product's dangers only if required to do so by law. Ads never state that a competitor's product is equally good. The <a href="http://www.publicintegrity.org/2012/07/08/9293/black-lung-surges-back-coal-country">coal</a><sup> [</sup><a href="http://www.courier-journal.com/cjextra/dust/frame_cheat.html">*</a><sup>]</sup>, <a href="http://skepdic.com/essays/amazinggrace.html">asbestos</a><sup> [</sup><a href="http://www.mesothelioma.co/asbestos/asbestos-industry-cover-up/altered-medical-research.aspx">*</a><sup>]</sup>, <a href="http://nuclearhistory.wordpress.com/2012/11/12/evidentiary-sources-which-demonstrate-the-suppression-of-medical-records-by-nuclear-authorities-from-the-dawn-of-the-nuclear-age-until-the-current-era/">nuclear</a><sup> [</sup><a href="http://www.spacemart.com/reports/Nobel_Laureate_may_have_suppressed_evidence_on_radiation_effects_in_1946_999.html">*</a><sup>]</sup>, and <a href="http://www.nytimes.com/1994/04/29/us/scientists-say-cigarette-company-suppressed-findings-on-nicotine.html?pagewanted=all&src=pm">tobacco</a> <sup>[</sup><a href="http://www.encognitive.com/node/1677">*</a><sup>]</sup> industries have knowingly suppressed evidence regarding the health of their employees or the health hazards of their industries and products.<br />
<a name='more'></a><br />
<br />
Occasionally scientists will suppress evidence, making a study seem more significant than it is. In the December 1998 issue of <i>The Western Journal of Medicine</i> scientists Fred Sicher, Elisabeth Targ, Dan Moore II, and Helene S. Smith published "A Randomized Double-Blind Study of the Effect of Distant Healing [DH] in a Population With Advanced AIDS--Report of a Small Scale Study." (See my article on the <a href="http://skepdic.com/sichertarg.html">Sicher-Targ distance healing report</a> for more details.) The authors do not mention, <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1305403/pdf/westjmed00327-0028.pdf">nor has <i>The Western Journal of Medicine</i> ever acknowledged, that the study was originally designed and funded to determine one specific effect: <i>death</i>. </a>The 1998 study was designed to be a follow-up to a 1995 study of 20 patients with AIDS, ten of whom were prayed for by psychic healers. Four of the patients died, a result consistent with chance, but all four were in the control group, a result that seemed anomalous enough to these scientists to warrant further study. I don't know whether evidence was suppressed or whether the scientists doing the study were simply incompetent, but the four patients who died were the four oldest in the study. The 1995 study did not control for age when it assigned the patients to either the control or the healing prayer group. Any controlled study on mortality that does not control for age is by definition not a properly designed study.<br />
<br />
The follow-up study, however, did suppress evidence, yet it is "widely acknowledged as the most scientifically rigorous attempt ever to discover if prayer can heal" (Bronson, <a href="http://www.wired.com/wired/archive/10.12/prayer.html?pg=1">"A
Prayer Before Dying</a>," 2002). The standard format for scientific reports is to begin with an abstract that summarizes the contents of the report. The abstract for the Sicher report notes that controls were done for age, number of AIDS-defining illnesses, and cell count. Patients were randomly assigned to the control or healing prayer groups. The study followed the patients for six months. "At 6 months, a blind medical chart review found that treatment subjects acquired significantly fewer new AIDS-defining illnesses (0.1 versus 0.6 per patient, P = 0.04), had lower illness severity (severity score 0.8 versus 2.65, P = 0.03), and required significantly fewer doctor visits (9.2 versus 13.0, P = 0.01), fewer hospitalizations (0.15 versus 0.6, P = 0.04), and fewer days of hospitalization (0.5 versus 3.4, P = 0.04)." These numbers are very impressive. They indicate that the measured differences were not likely due to chance. Whether they were due to healing prayer (HP) is another matter, but the scientists concluded their abstract with the claim: "These data support the possibility of a DH effect in AIDS and suggest the value of further research." Two years later the team, led by Elisabeth Targ, was granted $1.5 million of our tax dollars from the <a href="http://www.skepdic.com/NCCAM.html">National Institutes of Health Center for Complementary Medicine</a> to do further research on the healing effects of prayer.<br />
<br />
What the Sicher study didn't reveal was that the original study had not been designed to do any of these measurements they report as significant. Of course, any researcher who didn't report significant findings just because the original study hadn't set out to investigate them would be remiss. The standard format of a scientific report allows such findings to be noted in the abstract or in the Discussion section of the report. It would have been appropriate for the Sicher report to have noted in the Discussion section that since only one patient died during their study, it appears that the new drugs being given AIDS patients as part of their standard therapy (triple-drug anti-retroviral therapy) were having a significant effect on longevity. They might even have suggested that their finding warranted further research into the effectiveness of the new drug therapy. However, the Sicher report abstract doesn't even mention that only one of their subjects died during the study, indicating that they didn't recognize a truly significant research finding. It may also indicate that <i>the scientists didn't want to call attention to the fact that their original study was designed to study the effect of healing prayer on the mortality rate of AIDS patients</i>. Since only one patient died, perhaps they felt that they had nothing to report.<br />
<br />
It was only after they mined the data once the study was completed that they came up with the suggestive and impressive statistics that they present in their published report. The <a href="http://skepdic.com/texas.html">Texas sharpshooter fallacy</a> seems to have been committed here. Under certain conditions, mining the data would be perfectly acceptable. For example, if your original study was designed to study the effectiveness of a drug on blood pressure but you find after the data is in that the experimental group had no significant decrease in blood pressure but did have a significant increase in HDL (the "good" cholesterol), you would be remiss not to mention this. You would be guilty of deception, however, if you wrote your paper as if your original design was to study the effects of the drug on cholesterol and made no mention of blood pressure. <br />
<br />
It would have been entirely appropriate for the Sicher group to have noted in the Discussion section of their report that they had discovered something interesting in their statistics: <i>Hospital stays and doctor visits were lower for the HP group</i>. It was inappropriate to write the report as if that was one of the effects the study was designed to measure when this effect was neither looked for nor discovered until Moore, the statistician for the study, began crunching numbers looking for something of statistical significance after the study was completed. That was the most significant stat he could come up with. Again, crunching numbers and data mining after a study is completed is appropriate; not mentioning that you rewrote your paper to make it look like it had been designed to crunch those numbers isn't.<br />
<br />
It would have been appropriate in the Discussion section of their report to have speculated as to the reason for the statistically significant differences in hospitalizations and days of hospitalization. They could have speculated that prayer made all the difference and, if they were competent, they would have also noted that insurance coverage could have made all the difference as well. "Patients with health insurance tend to stay in hospitals longer than uninsured ones" (Bronson 2002). The researchers should have checked this out and reported their findings. Instead, they took a list of 23 illnesses associated with AIDS and had Sicher go back over each of the forty patients' medical charts to collect data on those illnesses as best he could. This was done after Sicher knew which group, prayer or control, each patient had been randomly assigned to. The fact that the names were blacked out, so he could not immediately tell whose record he was reading, does not seem sufficient to justify allowing him to review the data. There were only 40 patients in the study and he was familiar with each of them. It would have been better had an independent party, someone not involved in the study, gone over the medical charts. Sicher is "an ardent believer in distant healing" and he had put up $7,500 for the pilot study on prayer and mortality. His impartiality was clearly compromised. So was the double-blind quality of the study.<br />
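The multiple-comparisons trap behind this kind of after-the-fact data mining is easy to quantify. A minimal sketch, assuming (my assumption, for illustration) that the 23 AIDS-associated illnesses were independent outcomes each tested at the conventional p &lt; 0.05 threshold:

```python
# If you mine a completed study across many outcome measures,
# "significant" results turn up by chance alone. With 23 outcomes
# tested at alpha = 0.05, the chance that at least one crosses the
# threshold purely by luck is 1 - (1 - alpha)^23.
n_outcomes = 23
alpha = 0.05
p_spurious = 1 - (1 - alpha) ** n_outcomes
print(f"Chance of at least one spurious 'hit': {p_spurious:.1%}")  # about 69%
```

Roughly a two-in-three chance of finding *something* significant by luck alone, which is exactly why undisclosed post-hoc number crunching is so misleading.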
<br />
Thus, there was quite a bit of significant and relevant evidence suppressed in the Sicher study that, had it been revealed, might have diminished its reputation as the best designed study ever on prayer and healing. Instead of being held up as a model of promising research in the field of spiritual science, this study might have ended up in the trash heap where it belongs.<br />
<br />
<div style="text-align: center;">
<b>another example of suppressed evidence</b></div>
<br />
In an effort to encourage reporters to be more critical of President Barack <a href="http://en.wikipedia.org/wiki/American_Recovery_and_Reinvestment_Act_of_2009"> Obama's economic stimulus package</a>, Don Stewart, a spokesman for Senate Republican leader Mitch McConnell of Kentucky, urged them to gauge the wastefulness of the package by getting "out your calculators" and dividing the amount of money being spent by the number of jobs created or saved. Doing so produces the ridiculous figure of nearly a quarter of a million dollars per job. (The White House had estimated that $160 billion in stimulus money was spent and that 650,000 jobs were created or preserved.) Fortunately, many reporters didn't take the bait.<br />
<br />
<a href="http://newsbusters.org/blogs/tom-blumer/2009/04/29/aps-calvin-woodward-puts-out-astonishing-fact-check-obama"> Calvin Woodward of the Associated Press</a>, for example, responded by writing an article on some of the things that Stewart was not considering. When you consider <i>all </i>the relevant evidence, the notion that Obama is spending about $250,000 per job can be seen for the distortion that it is. Woodward notes:<br />
<blockquote class="tr_bq">
The calculations ignore the fact that the money doesn't all go directly to job holders; it also pays for materials and supplies. <br />
<br />
The contracts being made will fuel work for months or years. Jobs begun with stimulus money will probably stimulate more jobs in the future, e.g., a construction project may only require a few engineers to get going, but the work force may swell "as ground is broken and building accelerates." <br />
<br />
The stimulus package approved by Congress includes money for "research, training, plant equipment, extended unemployment benefits, credit assistance for businesses and more."</blockquote>
Editors at <a href="http://washingtonexaminer.com/article/43317#.UPJA6WfNkYA"><i>The Washington Examiner</i></a>, however, didn't do any critical thinking and wrote:<br />
<blockquote class="tr_bq">
Even if we take at face value the White House claim that it created or saved all these jobs with approximately $150 billion of the economic stimulus money, a little simple math shows the taxpayers aren’t getting any bargains here: $150 billion divided by 650,000 jobs equals $230,000 per job saved or created. Instead of taking all that time required to write the 1,588-page stimulus bill, Congress could have passed a one-pager saying the first 650,000 jobless persons to report for work at the White House will receive a voucher worth $230,000 redeemable at the university, community college or trade school of their choice. That would have been enough for a degree plus a hefty down payment on a mortgage.</blockquote>
<a href="http://mediamatters.org/blog/2011/08/22/dead-horse-wash-examiner-falsely-claims-stimulu/181756">MediaMatters for America took the <i>Examiner</i></a> to task for their "misleading cost-per-job stimulus math." The simplistic math doesn't capture the complexity of the effects of the stimulus package.<br />
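The "calculator" arithmetic itself is trivially reproducible; the distortion lies in what the division leaves out, not in the division. A quick check, using only the figures quoted above:

```python
# The naive cost-per-job division reporters were urged to perform.
# White House figures: $160 billion spent, 650,000 jobs created or saved.
# The Examiner used $150 billion for the same 650,000 jobs.
white_house = 160e9 / 650_000
examiner = 150e9 / 650_000
print(round(white_house))  # 246154 -- "nearly a quarter of a million"
print(round(examiner))     # 230769 -- the Examiner's "$230,000 per job"
# Neither number accounts for materials, supplies, multi-year
# contracts, or downstream hiring -- the evidence being suppressed.
```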
<br />
Finally, it is an unfortunate fact that some prosecutors subvert the criminal justice system by not disclosing exculpatory evidence. As a result, many languish in prison unlawfully and can only hope that the suppressed evidence will be exposed and their <a href="http://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=1371&context=fac_artchop">convictions overturned by courts</a>. Police fabrication often goes hand in hand with suppressed evidence in such cases, e.g., the case of the <a href="http://en.wikipedia.org/wiki/Birmingham_Six">Birmingham Six</a> and the case of the <a href="http://en.wikipedia.org/wiki/Central_Park_Jogger_case">Central Park Five</a>.<br />
<br />
<div style="text-align: center;">
<b>false charge of suppressed evidence</b></div>
<br />
Cranks often make a false charge of suppressed evidence to support claims that their alternative view of science, history, or current affairs is justified. For example, Michael Cremo and Richard Thompson in <i>Forbidden Archeology </i>(1993) claim that scientists have been suppressing evidence of all kinds rather than give up the standard model for the age of the human species.<br />
<blockquote class="tr_bq">
The evidence includes a nail found in Devonian sandstone, metallic tubes found in Cretaceous chalk, a gold thread found in Carboniferous stone, a small Carboniferous gold chain found in a lump of coal, a Carboniferous iron cup from a chunk of coal, a Cambrian ‘shoe print’, a metallic vase from Precambrian rock, and Precambrian grooved metallic spheres from South Africa.<a href="http://www.ancient-hebrew.org/ancientman/1009.html">*</a></blockquote>
Unfortunately, none of this evidence can be examined because, according to the authors, it has been suppressed; we have to rely on reports of its existence from a variety of sources who are long dead.<br />
<br />
The <a href="http://www.checktheevidence.co.uk/cms/index.php?option=com_content&task=view&id=182&Itemid=60">oil industry has long been accused of suppressing evidence</a> regarding various free energy devices and technology. (The link here mentions a <i>60 Minutes</i> piece on cold fusion in its defense. Please see <a href="http://skepdic.com/skeptimedia/skeptimedia42.html">my review</a> of Scott Pelley's pampering of Martin Fleischmann.) Wikipedia has a separate entry for the conspiracy theory "<a href="http://en.wikipedia.org/wiki/Free_energy_suppression">free energy suppression</a>."<br />
<br />
NASA and the U.S. government have been falsely accused of suppressing evidence regarding <a href="http://www.skepdic.com/roswell.html">alien visitations</a>, the <a href="http://www.skepdic.com/apollo.html">Apollo moon landing</a>, and who knows what else. Conspiracy theorists frequently accuse the U.S. government and Big Pharma of suppressing evidence on a number of things including the <a href="http://www.skepdic.com/911conspiracy.html">conspiracy behind 9/11</a> and the <a href="http://www.skepdic.com/antivaccination.html">use of vaccinations to harm us</a>. This should not surprise us since both <a href="http://www.prisonplanet.com/us-governments-human-experimentation-apology-theyre-only-sorry-they-were-caught.html">government</a> <sup>[<a href="http://www.greenewave.com/forget-osama-bin-laden-remember-pat-tillman-its-all-lies/">1</a>, <a href="http://www.commondreams.org/views03/0905-09.htm">2</a>]</sup> and <a href="http://www.amazon.com/exec/obidos/ISBN=0865478007/roberttoddcarrolA/">Big Pharma</a> have suppressed evidence many times in the past. The government and Big Pharma won't tell us that <a href="http://www.newsmaxhealth.com/headline_health/flu_shot_Alzheimers/2011/12/18/423456.html">the flu shot promotes Alzheimer's</a>, according to conspiracist <a href="http://www.skepdic.com/blaylock.html">Russell Blaylock, M.D.</a> Former dentist <a href="http://www.skepdic.com/horowitz.html">Leonard Horowitz</a> warns us that the evidence has been suppressed that proves that the AIDS and Ebola epidemics were intentionally caused by the U.S. 
government and that the H1N1 vaccine causes sterility. The master of self-serving nonsense, <a href="http://www.skepdic.com/trudeau.html">Kevin Trudeau</a>, has been telling the world for years that "they" have been suppressing evidence for "natural" cures, good diets, and how to get out of debt.<br />
<br />
To prove that evidence has been suppressed, one must do more than offer suggestions, implications, and claims from others who can't be cross-examined; one must produce evidence of the suppression itself. That has not been done by one of the weirdest conspiracy theory/alternative medicine cranks I've ever come across: <a href="http://www.quantummansite.com/catalog/">QuantumMAN</a>, which claims to be the "world's first downloadable medicine." These characters, who go by names such as <a href="http://ces13.mapyourshow.com/5_0/exhibitor_details.cfm?exhid=T0011249">J S Van Cleave, Michael H. Uehara, and Nicholas Brandon Zynda</a>, also call their operation <a href="http://extraterrestrialtechnology.net/">Extraterrestrial Technology</a>. They recently appeared at the <a href="http://tinyurl.com/by2xprh">International Consumer Electronics Show in Las Vegas</a>, touting quantum computing based on extraterrestrial technology that allows medicine to be digitized, downloaded to your cell phone, and magically teleported to your body. The world of science-based medicine, it seems, has been duping us for centuries with its chemical-based approach.<br />
<br />
These folks use the slogan "Treat disease with data not drugs." They claim, contrary to all known science, that quantum physics allows them to use a special quantum computer to transfer data directly to your phone, where it is somehow magically digitized and uploaded to your body exactly where it's needed. Never mind that such geniuses should be able to eliminate the physical device (smart phone, computer, or tablet) as an intermediary. Anyway, they claim that <a href="http://www.quantummansite.com/catalog/detox.php">chemical-based treatment systems are not compatible with human physiology.</a> How do they know this? "The universe, including the human body and conditions that afflict it, all operates [sic] according to the principles of quantum physics. Chemical based treatment systems do not operate according to those principles and, as such, are not compatible with the human host." (Nice contradiction, don't you think? On the one hand <i>everything</i> operates according to the principles of quantum physics but chemical treatments <i>don't</i> operate according to the principles of quantum physics.) If these claims were true--which they are not--it would mean that there has been a vast conspiracy involving the entire scientific community for several centuries to suppress not only scientific evidence but logical principles as well.<br />
<br />
These quantum/extraterrestrial folks claim they have a humanitarian research group called <a href="http://www.quantummansite.com/catalog/zag.php">ZAG</a> (Zürich Alpine Group): <br />
<blockquote class="tr_bq">
ZAG understands that quantum problems require a quantum solution and has found a way to transfer bioinformation from its quantum computer via quantum teleportation to the brain, [and?] also [has made?] a quantum computer, [sic] to reprogram the brain to effect positive medical changes within the body and mind. These technological advancements have thus given birth to the world's first downloadable medicines. </blockquote>
They've kept their work quiet because Big Pharma is lurking in the wings ready to steal their secrets. Anyway, none of these downloadable "medicines" are free (including the one that supposedly gives protection against <a href="http://www.quantummansite.com/catalog/malariasafe.php">malaria</a>) and I would bet that the claim that all profit is going to charity is as true as the rest of the claims on their websites.<br />
<br />
If you're still paying attention, you should know that your pet can be protected, too:<br />
<blockquote class="tr_bq">
Use
your cell phone (your pc, laptop or tablet) to instantly diagnose and
medically treat your pet at home with guaranteed results with a radical
new technology of extraterrestrial origin. Using pure data, QuantumVET
Tricorder Plus treats by programming the brain of the species with
biodirectives.</blockquote>
It should be noted that what the quantum/extraterrestrial folks claim about effecting physiological changes by a quantum computer via quantum teleporting is false because the mass of biological molecules and the speed with which they communicate via ion channels and mechanically are several orders of magnitude too large for quantum effects to matter. See <a href="http://www.amazon.com/exec/obidos/ISBN=1573920223/roberttoddcarrolA/"><i>The Unconscious Quantum</i> by Victor Stenger</a> and <a href="http://www.amazon.com/exec/obidos/ISBN=0393078035/roberttoddcarrolA/"><i>The Spark of Life</i> by Frances Ashcroft</a>.<br />
<br />
<div style="text-align: center;">
<b>anecdotal evidence (testimonials)</b></div>
<br />
Testimonials and anecdotes are used to support claims in many fields. Advertisers often rely on testimonials to persuade consumers of the effectiveness or value of their products or services. Others use anecdotes to drive home the horror of some alleged activity or the danger of widely-used electronic devices like cell phones. In the mid-90s, there were many people, some in law enforcement, claiming that <a href="http://www.skepdic.com/satanrit.html">Satanists were abducting and abusing children</a> on a massive scale. The anecdotes involved vivid descriptions of horrible sexual abuse, even murder of innocent children. The anecdotes were quite convincing, especially when they were repeated on nationally televised programs with popular hosts like <a href="http://www.religioustolerance.org/geraldo.htm">Geraldo Rivera</a>. A four-year study in the early 1990s found the allegations of satanic ritual abuse to be without merit. Researchers investigated more than 12,000 accusations and surveyed more
than 11,000 psychiatric, social service, and law enforcement personnel.
The researchers could find <a href="http://skepdic.com/satanrit.html">no unequivocal evidence for a single case of satanic cult ritual abuse.</a><br />
<br />
There have also been scares fueled by anecdotes regarding such disparate items as silicone breast implants, cell phones, and vaccinations. In the 1990s many women blamed their cancers and other diseases on breast implants. Talk show hosts like <a href="http://skepdic.com/refuge/funk2.html#implants">Oprah Winfrey and Jenny Jones</a>
presented groups of women who were suffering from cancer or some
other serious disease and who had been diagnosed after they'd had breast
implants. The stories tugged at the heartstrings and brought tears to many sensitive eyes, but there was no scientific evidence of a causal connection between the implants and any disease. That fact did not prevent lawyers from extorting $4.25 billion from implant
manufacturers. Marcia Angell, former executive
editor of the <i>New England Journal of Medicine</i>, <a href="http://www.skepdic.com/refuge/funk36.html#bust"> brought the wrath of feminist hell upon herself in 1992</a>
when she wrote an editorial challenging the Food and Drug
Administration's decision to ban the manufacture of silicone breast
implants. The scientific evidence wasn't there to justify the ban. She
eventually wrote a book describing the fiasco: <a href="http://www.amazon.com/exec/obidos/ISBN=0393316726/roberttoddcarrolA/"> <i>Science on Trial: The Clash of Medical Evidence and the Law in the Breast Implant Case</i>.</a> The scientific evidence is now in. The implants don't cause cancer or other diseases, and <a href="http://news.bbc.co.uk/2/hi/americas/6160432.stm">the FDA has lifted its ban.</a> When the data were collected, they showed that women with silicone breast implants did not suffer cancer or any other disease at a significantly higher rate than women who had not had implants.<br />
<br />
The public fear that cellphones might be causing brain tumors was first aroused not by scientists but by a talk show host. On January 23, 1993, Larry King's guest was David Reynard, who announced that he and his wife Susan had sued NEC and GTE on the grounds that the cellphone David gave Susan caused his wife's brain tumor. There was nothing to back up the claim but junk science and the fact that the tumor appeared near where she held the phone to her ear. She was diagnosed seven months after receiving her phone and died a few months after filing the suit. The suit was dismissed in 1995. A dozen similar lawsuits followed; all were dismissed (<a href="http://www.technologyreview.com/business/12224/">1</a>, <a href="http://jnci.oxfordjournals.org/cgi/content/full/93/3/166">2</a>, <a href="http://journal.media-culture.org.au/0106/cell.php">3</a>). For those who think scientists and industries don't take anecdotes seriously, consider this: soon after Susan's lawsuit was dismissed the cellphone industry committed $25 million for safety studies. Many studies have been conducted over the past fifteen years and <a href="http://skepdic.com/skeptimedia/skeptimedia66.html">so far no evidence of a causal link between cellphones and brain cancer has been found.</a><br />
<br />
The vaccination rate has dropped significantly in many parts of the world. In my northern California university town, 40% of the kindergartners at the Davis Waldorf school are unvaccinated. In the Sacramento area, there has been a 34% increase in "personal-belief exemptions" from state-required vaccinations for kindergartners over the past four years. Statewide, the increase in waivers has been 37% over the same period. The greatest decline in vaccination rates has occurred among the wealthier and more educated segments of society (<a href="http://www.sacbee.com/2013/01/06/5094734/child-vaccination-rates-fall.html#storylink=misearch#storylink=cpy">"Child vaccine rates fall," <i>Sacramento Bee</i>, 1/6/2013</a>), not because of scientific evidence that vaccines are harmful but mainly because of fear caused in large part by <a href="http://www.skepdic.com/antivaccination.html">anecdotes of children getting autism and other neurological disorders from vaccinations</a>. Oprah Winfrey, for example, <a href="http://www.skepdic.com/skeptimedia/skeptimedia44.html">responded to a systematic letter campaign from parents of kids with autism</a> by featuring on her show actress Jenny McCarthy and others who shared their stories of children being vaccinated and then diagnosed with autism. This kind of <a href="http://59ways.blogspot.com/2012/06/post-hoc-fallacy.html">post hoc reasoning</a> is common among those who believe anecdotes are more trustworthy than scientific studies. <a href="http://www.skepdic.com/antivaccination.html">Scientific studies have repeatedly found no causal connection between vaccines and autism or serious neurological disorders. </a>The benefits to members of society from universal vaccination against communicable diseases such as measles, mumps, polio, and diphtheria far outweigh any potential harm that might happen to somebody somewhere under some circumstances.<br />
<br />
The fear of vaccines has led to outbreaks of measles and deaths of infants from whooping cough, events that should not be happening in this day and age. In Japan, when the vaccination rate for pertussis (whooping cough) dropped 70% from 1974 to 1976, the number of cases of pertussis went from 393 to more than 13,000 and the number of deaths from pertussis went from 0 to 41.<a href="http://www.cdc.gov/vaccines/vac-gen/why.htm">*</a><br />
<br />
Testimonials and vivid anecdotes are unreliable for various reasons. Stories are prone to contamination by beliefs, later experiences, feedback, selective attention to details, and so on. Most stories get distorted in the telling and the retelling. Events get exaggerated. Time sequences get confused. Details get muddled. <a href="http://www.skepdic.com/memory.html">Memories</a> are imperfect and selective; they are often filled in after the fact. People misinterpret their experiences and are biased and selective in what interpretations they include and exclude from consideration. Experiences are conditioned by biases, memories, and beliefs, so people's perceptions might not be accurate. Most people aren't expecting to be deceived, so they may not be aware of deceptions that others might engage in. Some people make up stories. Some stories are delusions. Sometimes events are inappropriately deemed improbable when they might not be that improbable after all. In short, anecdotes are inherently problematic and are usually impossible to test for accuracy.<br />
<br />
Some fields rely almost exclusively on anecdotal evidence and testimonials, e.g., alternative medicine, the paranormal, the supernatural, and the pseudoscientific. Stories of personal experience with acupuncture, mediums, ghosts of relatives, or free energy machines have little scientific value. Sincere and vivid accounts of one’s encounter with an <a href="http://www.skepdic.com/angels.html"> angel</a> or the Virgin Mary, an alien, a <a href="http://www.skepdic.com/bigfoot.html">Bigfoot</a>, a child claiming to have lived before, purple <a href="http://www.skepdic.com/auras.html">auras</a> around dying patients, a miraculous <a href="http://www.skepdic.com/dowsing.html">dowser</a>, a <a href="http://www.skepdic.com/levitat.html">levitating</a> guru, or a <a href="http://www.skepdic.com/psurgery.html">psychic surgeon</a> are of little value in establishing the reasonableness of believing in such matters. If others cannot experience the same thing under the same conditions, then there will be no way to verify the experience. If there is no way to test the claim made, then there will be no way to tell if the experience was interpreted correctly. If others can experience the same thing, then it is possible to make a test of the testimonial and determine whether the claim based on it is worthy of belief. As parapsychologist <a href="http://skepdic.com/tart.html">Charles Tart</a> once said after reporting an anecdote of a possibly paranormal event: “Let’s take this into the laboratory, where we can know exactly what conditions were. We don’t have to hear a story told years later and hope that it was accurate.” Another parapsychologist, Dean Radin, also noted that anecdotes aren't good proof of the paranormal because memory “is much more fallible than most people think” and eyewitness testimony “is easily distorted” (<a href="http://www.skepdic.com/refuge/radin1.html">Radin 1997: 32</a>).<br />
<br />
Testimonials are of little use to science because <a href="http://www.skepdic.com/selectiv.html">selective thinking</a> and <a href="http://www.skepdic.com/selfdeception.html">self-deception</a> can't be controlled for or mitigated in randomly experienced events as they must be in scientific observations and experiments. Most <a href="http://www.skepdic.com/psychic.html">psychics</a> and dowsers, for example, do not even realize that they need to do <a href="http://59ways.blogspot.com/2012/12/control-group-study.html">controlled tests</a> of their powers to rule out the possibility that they are deceiving themselves. They are satisfied that their experiences provide them with enough positive feedback to justify the belief in their paranormal abilities. It is common for psychics, dowsers, and their admirers to remember their apparent successes and ignore or underplay their failures. Controlled tests can also determine whether other factors such as cheating might be involved. <br />
<br />
If testimonials are scientifically worthless, why are they so popular and why are they so convincing? There are several reasons. Testimonials are often vivid and detailed, making the coincidental seem meaningful and giving a causal interpretation more credibility than it deserves. They are often made by enthusiastic people who seem trustworthy and honest and who seem to lack any reason to deceive us. Sometimes a testimonial is given soon after an experience while one’s mood is
still elevated from the desire for a positive outcome. The experience
and the testimonial it elicits are given more significance than they
deserve and are of little
value in establishing the probability of the claims they are put forth
to support. Testimonials are often made by people with a semblance of authority, such as those who wear a uniform or hold a Ph.D. or M.D. degree. Testimonials are often made by popular figures given a bully pulpit on widely viewed television programs. To some extent, testimonials are believable because people want to believe them. Testimonials accompanied by claims of government or Big Pharma conspiracies to stifle a new cancer cure or free energy device are popular among a certain class of people. <br />
<br />
Testimonials and anecdotes are used to support claims in many fields, including medical science. Giving due
consideration to such testimonials is considered wise, not foolish. A
physician will use the testimonies of his or her patients to draw
conclusions about certain medications or procedures. For example, a
physician will take anecdotal evidence from a patient about a reaction
to a new medication and use that information in deciding to adjust the
prescribed dosage or to change the medication. This is quite reasonable.
But the physician cannot be selective in listening to testimony,
listening only to those claims that fit his or her own prejudices. To do
so is to risk harming one’s patients. Nor should the rest of us be
selective when listening to testimonials regarding some experience. <br />
<div style="text-align: left;">
<span style="font-size: small;"><br /></span></div>
<span style="font-size: small;"><b><span style="font-family: Arial;">sources</span></b></span>
<br />
<div style="text-align: left;">
<span style="font-size: small;"><br /></span></div>
<a href="http://59ways.blogspot.com/p/about-book.html">Carroll, Robert Todd. 2012. <i>Unnatural Acts: Critical Thinking, Skepticism, and Science Exposed!</i> James Randi Educational Foundation.</a><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0155016253/roberttoddcarrolA/">Giere, Ronald. 1998. <i>Understanding Scientific Reasoning</i>, 4th ed. Holt, Rinehart and Winston.</a><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0534524702/roberttoddcarrolA/">Kahane, Howard. 1997. <i>Logic and Contemporary Rhetoric: The Use of Reason in Everyday Life</i>, 8th edition. Wadsworth.</a><br />
<br />
<div style="text-align: center;">
<b>attribution biases</b></div>
<br />
Human behavior can be understood as issuing from "internal" factors or personal characteristics--such as motives, intentions, or personality traits--and from "external" factors--such as the physical or social environment and other factors deemed out of one's personal control. Self-serving creatures that we are, we tend to attribute our own successes to our intelligence, knowledge, skill, perseverance, and other positive personal traits. Our failures are blamed on bad luck, sabotage by others, a lost lucky charm, and other such things. These <i>attribution biases</i> are referred to as the <i>dispositional attribution bias</i> and the <i>situational attribution bias</i>. They are applied in reverse when we try to explain the actions of others. Others succeed because they're lucky or have connections and they fail because they're stupid, wicked, or lazy.<br />
<br />
We may tend to attribute the behaviors of others to their intentions
because it is cognitively easier to do so. We often have no idea about
the situational factors that might influence another person or cause
them to do what they do. We can usually easily imagine, however, a
personal motive or personality trait that could account for most human
actions. We usually have little difficulty in seeing when situational factors are
at play in affecting our own behavior. In fact, people tend to
over-emphasize the role of the situation in their own behaviors
and under-emphasize the role of their own personal motives or
personality traits. Social psychologists refer to this tendency as the <i>actor-observer bias.</i><br />
<br />
One lesson here is that we should be careful when interpreting the
behavior of others. What might appear to be laziness, dishonesty, or
stupidity might be better explained by situational factors of which we
are ignorant. Another lesson is that we might be giving ourselves more
credit for our actions than we deserve. The situation may have driven us
more than we admit. Maybe we "just did what anybody would do in that
situation" or maybe we were just lucky. We may want to follow the classical Greek maxim "know thyself," but modern neuroscience has awakened us to the fact that much of our thinking goes on at the unconscious level and we often don't know what is really motivating us to do what we do or think what we think.<br />
<br />
Something similar to the self-serving attribution of positive traits to explain our own behavior and negative traits to explain the behavior of others occurs with regard to beliefs. Michael Shermer and Frank Sulloway identified a kind of
attribution error while doing a survey on why people believe in a god.
They found that most people attributed their own belief in a god to rational
inference or personal experience ("the universe is so well designed," "I experience god daily") while the majority attributed the belief of others to
emotional need ("the belief comforts them," "believing makes it easier to face death"). The <i>intellectual attribution bias</i> finds a rational basis for one's own beliefs, while the <i>emotional attribution bias</i>
finds an emotional basis for the beliefs of others. There is also an implicit value judgment here: having a rational motive is superior to having an emotional one.<br />
<br />
Shermer (<a href="http://www.amazon.com/exec/obidos/ISBN=0805091254/roberttoddcarrolA/">2011</a>)
claims these biases are also found in political beliefs. On gun control,
for example, both liberals and conservatives think their own positions
are rationally based. Liberals see their opponents' beliefs as due to their
heartlessness and emotional attachment to weapons; conservatives see
liberals' beliefs as due to their bleeding heart soft-headedness. For example: <i>Only sane people think a person does not need a hidden weapon. Only
people who have low self esteem or need something to make
them feel grown up need to hide his/her weapon. </i>And the reply: <i>Why is it non-gunners all seem to feel that carrying a gun is an ego booster or an act only a paranoid person would do? </i>Or, <i>liberals cry for gun control every time somebody's killed with a gun; their gut tells them gun control will make the world a safer place. Right. And pigs can fly.</i><br />
<br />
<a href="http://www.radford.edu/%7Ejaspelme/443/spring-2007/Articles/Jones_n_Harris_1967.pdf">Edward E. Jones and Victor Harris</a>
(1967), building on the work of Austrian psychologist <a href="http://en.wikipedia.org/wiki/Fritz_Heider">Fritz Heider</a>
(1958), called the tendency of people to attribute another person's
behavior to personal characteristics--even when the person's behavior is
most likely the result of situational demand--<i>correspondence bias</i>. Social psychologist <a href="http://en.wikipedia.org/wiki/Lee_Ross">Lee Ross</a> coined the expression "fundamental attribution error" to describe the tendency to see the behavior of others in terms of personal characteristics rather than considering that the situation they are in may have been more significant in determining their actions.<br />
<br />
Ross is also known for his work with Robert Vallone and Mark Lepper and
their discovery that people with strong biases toward an issue perceive
media
coverage as biased against their opinions even when the bias cannot be
attributed to bias in the media report. They discovered this by
presenting the same news reports to people with strong, but opposing,
biases and finding that both sides considered the media reports biased
against their side and biased in favor of the other side. They called
this the <i><a href="http://en.wikipedia.org/wiki/Hostile_media_effect">hostile media effect</a>. </i>Something similar happens in team sporting events: fans for both teams see referee bias against their side and in favor of the other side. We might call this the <i>hostile referee effect.</i> <br />
<br />
<b>Sources </b><br />
<br />
Heider, Fritz. 1958. <i>The Psychology of Interpersonal Relations</i>. Wiley.<br />
<br />
Jones, Edward E. and Victor A. Harris. 1967. The Attribution of Attitudes. <i>Journal of Experimental Social Psychology</i>. 3, 1-24.<br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0805091254/roberttoddcarrolA/">Shermer, Michael. 2011. <i>The Believing Brain: From Ghosts and Gods to Politics and Conspiracies---How We Construct Beliefs and Reinforce Them as Truths</i>. Times Books.</a><br />
<br />
Vallone, Robert P., Lee Ross and Mark R. Lepper. 1985. The hostile media phenomenon: Biased Perception and Perceptions of Media Bias in Coverage of the "Beirut Massacre." <i>Journal of Personality and Social Psychology</i>, 49, 577-585.Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com196tag:blogger.com,1999:blog-8778506545780706793.post-33715748376841957122012-12-24T05:00:00.000-08:002012-12-24T17:11:23.522-08:00control group studyA <i>control group study</i> uses a control group to compare to an experimental group in a test of a causal hypothesis. The control and experimental groups must be identical in all relevant ways except for the introduction of a suspected causal agent into the experimental group. If the suspected causal agent is actually a causal factor of some event, then logic dictates that that event should manifest itself more significantly in the experimental than in the control group. For example, if 'C' causes 'E', when we introduce 'C' into the experimental group but not into the control group, we should find 'E' occurring in the experimental group at a significantly greater rate than in the control group. Significance is measured by relation to chance: if an event is not likely due to chance, then its occurrence is <i>statistically </i>significant. Being statistically significant is not the same as being <i>important</i>. It means, again, that the results are not likely due to chance. That’s all it means. <br />
<br />
A double-blind test is a control group test where neither the evaluator nor the subject knows which items are controls. A randomized test is one that randomly assigns items to the control and the experimental groups. Whenever possible, a control group study should randomly assign members to the control and experimental groups. This reduces the chance of biasing the study. <br />
<br />
The purpose of controls, double-blind, and randomized testing is to reduce error, self-deception, and bias. An example should clarify the necessity of these safeguards. <br />
<a name='more'></a><br />
<br />
<div style="text-align: center;">
<b>testing the DKL LifeGuard </b></div>
<br />
The DKL LifeGuard Model 2 from DielectroKinetic Laboratories can allegedly detect a living human being by receiving a signal from the heartbeat at distances of up to 20 meters through any material. So say the manufacturers of the device. Sandia Labs tested the LifeGuard 2 using a double-blind, randomized method of testing. Sandia is a national security laboratory operated for the U.S. Department of Energy by the Sandia Corporation, a Lockheed Martin Co. The causal hypothesis they tested could be worded as follows: <i>the human heartbeat causes a directional signal to activate in the Lifeguard, thereby allowing the user of the LifeGuard to find a hidden human being (the target) up to 20 meters away, regardless of what objects might be between the LifeGuard and the target. </i><br />
<br />
The testing procedure was quite simple: five large plastic packing crates were set up in a line at 30-foot intervals. The test operator, using the DKL LifeGuard Model 2, tried to detect in which of the five crates a human being was hiding. Whether a crate would be empty or contain a person for each of the twenty-five trials was determined by random assignment. This is to avoid using a pattern that might be detected by the subject. <br />
<br />
Tests showed that the device performed no better than expected from random chance. The test operator was a DKL representative. The only time the test operator did well in detecting his targets was when he had prior knowledge of the target's location. The LifeGuard was successful ten out of ten times when the operator knew where the target was. It may seem ludicrous to test the device by telling the operator where the objects are, but it establishes a baseline and affirms that the device is working. Only when the operator agrees that his device is working should the test proceed to the second stage, the double-blind test. The operator will not be as likely to come up with an <a href="http://59ways.blogspot.com/2011/12/ad-hoc-hypothesis_26.html">ad hoc hypothesis</a> to explain away any failures in a double-blind test if he has agreed beforehand that the device is working properly. <br />
<br />
If the device could perform as claimed, the operator should have received no signals from the empty crates and a signal from each crate containing a person. In the main test of the LifeGuard—when neither the test operator nor the investigator keeping track of the operator's results knew which of five possible locations contained the target—the operator performed poorly (6 out of 25) and took about four times longer than when the operator knew the target's location. If human heartbeats cause the device to activate, one would expect a significantly better performance than 6 out of 25, which is about what would be expected by chance. <br />
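The claim that 6 out of 25 is about what chance predicts is easy to check. With five crates and one hiding place, a blind guess is right 1 time in 5, so chance predicts about 5 hits in 25 trials. This short sketch (an illustration only; Sandia's actual analysis is not described in that detail here) computes how often random guessing would do at least as well as the operator did:

```python
from math import comb

def chance_of_at_least(hits, trials, p):
    """Probability of getting `hits` or more correct in `trials`
    independent guesses, each correct with probability `p`."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Five crates, one target: a blind guess is right 1 time in 5.
print(chance_of_at_least(6, 25, 1 / 5))  # ≈ 0.38
```

A score that pure guessing would match or beat roughly 38% of the time is nowhere near statistically significant, which is exactly the conclusion the test supports.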
<br />
<div style="text-align: center;">
<b>testing dowsers </b></div>
<br />
The different performances—10 correct out of 10 tries versus 6 correct out of 25 tries—vividly illustrate the need for keeping the subject blind to the controls: blinding eliminates self-deception and <a href="http://59ways.blogspot.com/2012/05/subjective-validation.html">subjective validation</a>. The evaluator is kept blind to the controls to prevent him or her from subtly tipping off the subject, either knowingly or unknowingly. If the evaluator knew which crates were empty and which had persons, he or she might give a visual signal to the subject by looking only at the crates with persons. To eliminate the possibility of cheating or evaluator bias, the evaluator is kept in the dark regarding the controls. <br />
<br />
The lack of testing under controlled conditions explains why many psychics, graphologists, astrologers, dowsers, New Age therapists, and the like, believe in their abilities. To test a dowser it is not enough to have the dowser and his friends tell you that dowsing works by pointing out all the wells that have been dug on the dowser's advice. One should perform a randomized, double-blind test, such as the one done by Ray Hyman with an experienced dowser on the PBS program <i>Frontiers of Science</i> (Nov. 19, 1997). The dowser claimed he could find buried metal objects as well as water. He agreed to a test that involved randomly selecting numbers that corresponded to buckets placed upside down in a field. The numbers determined which buckets a metal object would be placed under. The person who placed the objects was not the one who accompanied the dowser as he tried to find them. The exact odds of finding a metal object by chance could be calculated. For example, if there are 100 buckets and 10 of them have a metal object, then getting 10% correct would be predicted by chance. That is, over a large number of attempts, getting about 10% correct would be expected of anyone, with or without a dowsing rod. On the other hand, if someone consistently got 80% or 90% correct, and we were sure he or she was not cheating, that would confirm the dowser's powers. <br />
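The 10%-by-chance figure is easy to verify by simulation. In this sketch (hypothetical code, not part of Hyman's actual protocol) a "dowser" guesses ten buckets blindly out of a hundred, ten of which hide metal, and we average the lucky hits over many runs:

```python
import random

def random_dowser_hits(n_buckets=100, n_metal=10, n_guesses=10):
    """One run: guess n_guesses buckets blindly; count lucky hits."""
    metal = set(random.sample(range(n_buckets), n_metal))
    guesses = set(random.sample(range(n_buckets), n_guesses))
    return len(metal & guesses)

random.seed(42)
runs = 20_000
avg = sum(random_dowser_hits() for _ in range(runs)) / runs
print(avg)  # about 1.0, i.e., 10% of the ten guesses
```

Anyone, rod or no rod, averages about one hit in ten guesses; that is the baseline a genuine dowser would have to beat consistently.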
<br />
The dowser walked up and down the lines of buckets with his rod but said he couldn't get any strong readings. When he selected a bucket he qualified his selection with something to the effect that he didn't think he'd be right. He was right about never being right! He didn't find a single metal object despite several attempts. His performance is typical of dowsers tested under controlled conditions. His response was also typical: he was genuinely surprised. Like most of us, the dowser is not aware of the many factors that can hinder us from doing a proper evaluation of events: self-deception, wishful thinking, suggestion, unconscious bias, selective thinking, subjective validation, communal reinforcement, and the like. <br />
<br />
<div style="text-align: center;">
<b>placebo studies </b></div>
<br />
Many control group studies use a <a href="http://59ways.blogspot.com/search/label/conditioning">placebo</a> in control groups to keep the subjects in the dark as to whether they are being given the causal agent that is being tested. For example, both the control and experimental groups will be given identical looking pills in a study testing the effectiveness of a new drug. Only one pill will contain the agent being tested; the other pill will be a placebo. In a double-blind study, the evaluator of the results would not know which subjects got the placebo until his or her evaluation of observed results was completed. This is to avoid evaluator bias from influencing observations and measurements. <br />
<br />
The first use of control groups in medicine is attributed to Dr. James Lind (1716-1794), who discovered a relationship between citrus fruit and scurvy, a disease that killed many more sailors than died of battle wounds in the 18th century. Lind compared six treatments on sailors with scurvy. Those given lemons and oranges were almost symptom free within a week. The other sailors in the study didn't fare so well, though those given cider improved slightly. For more on the history of the randomized controlled study see <a href="http://www.amazon.com/exec/obidos/ISBN=0393066614/roberttoddcarrolA/"><i>Trick or Treatment: The Undeniable Facts about Alternative Medicine</i> (2008) by Edzard Ernst and Simon Singh</a>. <br />
<br />
Of course, Lind did not know that vitamin C was the necessary nutrient in the citrus fruit that was preventing scurvy. In fact, he believed that the cause of scurvy was "incompletely digested food building up toxins within the body" (<a href="http://www.amazon.com/exec/obidos/ISBN=0767919394/roberttoddcarrolA/">Bryson 2010</a>). Lind's controlled experiment showed that there was something vital in oranges and lemons that prevented scurvy. His view of what caused scurvy indicates that he still adhered to the belief that disease is caused by internal toxins that needed to be expelled, a popular belief among medical experts from antiquity through the 19th century. Only quacks still maintain the belief that toxins in the body cause disease and the only cure is to expel them. <br />
<br />
The long road from Lind's experiment to a complete understanding of the role of ascorbic acid in nutrition involved the work of many scientists over many years. It would not have been possible to conceive that food itself contains nutrients necessary to avoid specific diseases when one believed that all disease is due to internal bad humors or toxins that need to be expelled. Had Lind lived in a later age (but maintained his belief in the internal toxin theory of disease) where it would have been possible to determine the level of toxins in scurvy victims, he might have thought his belief validated if he found toxins in scurvy victims. However, if there were such toxins, they could have been the effect of scurvy, or the effect of something altogether unrelated to the scurvy. <br />
<br />
As late as the early 20th century, the leading medical textbook of the day attributed scurvy to "insanitary surroundings, overwork, mental depression and exposure to cold and damp" (Bryson 2010). The medical textbook reflects what is called the <i>miasma theory of disease</i>, which was also very popular in the 19th century. <br />
<br />
In 1917, E. V. McCollum, who coined the terms 'vitamin A' and 'vitamin B', declared that scurvy was caused by constipation (Bryson 2010). McCollum, who was one of the leading nutritionists of his day, seems to have adhered to the toxic buildup theory, the one that led to so much death and destruction over several centuries in the form of bloodletting. Still, McCollum represents an advancement. Who wouldn't prefer a laxative to bloodletting? <br />
<br />
<div style="text-align: center;">
<b>non-control group studies </b></div>
<br />
Dr. Alan Hirsch claims to be "The World Expert In Smell & Taste." He is an M.D.—a psychiatrist, in fact—who developed some magical crystals that will "help you reduce your appetite and food cravings." You can read all about his crystals, which he calls SprinkleThin™, on his website (which has been taken down, but you can see what it looked like at <a href="http://web.archive.org/web/20080225043445/http://www.scienceofsmell.com/">http://web.archive.org/web/20080225043445/http://www.scienceofsmell.com/</a>). On July 25, 2005, I found the following testimonial on that website. <br />
<br />
<div style="text-align: center;">
<b>Dateline NBC Investigates SprinkleThin</b></div>
<br />
<blockquote class="tr_bq">
“What Dr. Hirsch discovered might surprise you. (Certain smells) seem to control appetite. Dr. Hirsch studied 2,700 people over six months, like the six people we met. They tried just about every diet imaginable. Dr. Hirsch brought along with him these special, non-caloric, scented crystals and asked the six to sprinkle it on their food. <br />
<br />
All the participants kept a video diary for <i>Dateline </i>to prove they were using the product. At the end of three months when we checked in on them, they were all losing weight.” </blockquote>
<br />
What is wrong with <i>Dateline</i>'s investigation? Among other things, <i>Dateline </i>did not have a control group. Dr. Hirsch says he has been studying eating behavior and weight loss for 25 years. He says he has done many studies, but if his studies were like <i>Dateline</i>'s study they are not of much scientific value. <br />
<br />
A well-designed study on the diet crystals would use a control group. Using a control group wouldn’t eliminate all problems with a study on weight loss, but it would reduce them. Weight loss is affected by many factors (motivation, eating behavior, amount of activity—especially exercise—overall health, metabolism, stress, and so on) and experimenters can't lock up humans in cages to make sure they do what they're supposed to do for the study. But, at the very least, a well-designed scientific study should use a control group and try to match the members of that group to those in the experimental group for factors that might have a significant effect on the outcome. For example, if you were doing a study that was testing whether prayer has an effect on the longevity of patients dying of AIDS, you should make sure that the ages of the subjects in both groups match up. It would not be a fair study to have 60-year-olds in one group and twenty-somethings in the other group. <br />
<br />
Without a control group, a scientist can't be sure that the diet crystals contributed significantly to the weight loss or, if they did, in what way. The placebo effect may be at work here: dieters may believe these crystals really affect their sense of taste and smell to such a degree that their appetites are suppressed. They may be deceiving themselves, but the crystals help them anyway. However, powdered beetle dung might have had the same effect. The diet scientist doesn't just want to help people lose weight. If a product works, she wants to know why it works. <br />
<br />
<i>Dateline </i>(and Dr. Hirsch) should not just give the crystals to dieters and observe whether they lose weight. They should have a group of similar people who want to lose weight and give them a placebo, a substance that looks like the diet crystals and is ingested in exactly the same way, but which is inert. They should agree to study the two groups for a set length of time, long enough for any diet to show results (several weeks, at least). At the end of the study they would compare the weight loss of the two groups. If the experimental group shows a significantly greater weight loss than the control group, then the scientists have good evidence that the crystals might be effective. <br />
<br />
Having a control group is necessary but it is not sufficient for having a well-designed control group study. The study must use an adequate number of participants. Six people would not be adequate for a control group study. Several hundred would be a better number. Why? With only six people, all it takes is one participant to do really well to elevate the average of the group significantly above the average of the other group. But this one person's success might be a fluke. By having a larger sample, the researcher reduces the chances that a few fluky individuals have skewed the results. <br />
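The fluke problem with six subjects can be made concrete with a simulation. Here every "dieter" is drawn from the same population (a true average loss of 2 pounds, with individual variation; the numbers are invented for illustration), yet the average of a six-person group swings far more from run to run than the average of a 300-person group:

```python
import random
import statistics

def group_mean(n, mu=2.0, sigma=5.0):
    """Mean weight loss (lbs) of n subjects drawn from one
    population: true mean 2, individual spread 5."""
    return statistics.fmean(random.gauss(mu, sigma) for _ in range(n))

random.seed(1)
small = [group_mean(6) for _ in range(1000)]    # 1000 six-person studies
large = [group_mean(300) for _ in range(1000)]  # 1000 300-person studies
print(statistics.stdev(small))  # roughly 2 lbs of run-to-run swing
print(statistics.stdev(large))  # roughly 0.3 lbs
```

A six-person group can easily average 4 or 5 pounds of "loss" by luck alone; a 300-person group almost never strays far from the true value.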
<br />
Another way to reduce the chances of fluky results is to randomly assign subjects to the control and experimental groups. Randomization is very important to reduce the chances of biasing the samples. If highly motivated folks are placed in the diet crystal group and a bunch of lazy couch potatoes are in the control group, the results of the study would be biased. It is important that a method of true randomization be used, such as a random number table. You might think that assigning all the dark-haired subjects to one group and the light-haired subjects to the other would be sufficient to avoid having biased groups, but you cannot be sure that there is not something about hair color that is related to a person's weight. It is unlikely, but a scientist should not go with hunches in matters such as randomization. <br />
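True randomization needs nothing more than a trustworthy source of random numbers; no human judgment about hair color or anything else enters the assignment. A minimal sketch (the subject labels are invented for illustration):

```python
import random

def randomize(subjects, seed=None):
    """Shuffle the subject list and split it in half:
    first half -> experimental group, second half -> control."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

subjects = [f"subject-{i:03d}" for i in range(200)]
experimental, control = randomize(subjects, seed=7)
```

Because the shuffle ignores every trait of the subjects, motivated dieters and couch potatoes alike end up spread evenly across both groups on average.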
<br />
It is also important that the subjects in this study not know whether they have been given the magic crystals or the placebo. There is much controversy regarding the ethics of deceiving subjects, but from a scientific point of view it might be better if the subjects didn't even know that the study is about weight loss. If they think, for example, that the study is testing the effectiveness of a new blood pressure medicine, you would eliminate such things as motivation to lose weight or belief that the crystals are appetite suppressants as possible causes of any weight loss achieved. However, many, if not most, scientists argue that it is unethical to deceive participants in scientific studies. The subjects in a study don't need to be told which group they are in, but they should be told that they have been randomly assigned to their group and that at the end of the study they will be told which group they were in. (In some studies, participants will know which group they're in by obvious facts, e.g., the control group folks would know they're in the control group of a study testing various methods to reduce blood pressure if they're told to do nothing special and just come in for regular blood pressure measurements.)<br />
<br />
The kind of control group study described above is known as a <i>parallel group study</i>. However, as Dr. Gerard Dallal (2000) writes: "It takes little experience with parallel group studies to recognize the potential for great gains in efficiency if each subject could receive both treatments. The comparison of treatments would no longer be contaminated by the variability between subjects since the comparison is carried out within each individual." Such studies are known as <i>crossover studies</i>. They are highly recommended. In a crossover study, at the midway point in the study, members of the control group would now be given the active item being tested (i.e., they would now become the experimental group) and the members of the experimental group would now be given the placebo.<br />
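A crossover schedule can be written down explicitly. In this simplified sketch (hypothetical; real crossover trials also randomize the order and allow a washout period between treatments), each subject receives both the treatment and the placebo, half in each order, so every comparison is made within a single person:

```python
def crossover_schedule(subjects):
    """Assign alternating AB / BA orders: each subject gets the
    treatment in one period and the placebo in the other."""
    schedule = {}
    for i, s in enumerate(subjects):
        if i % 2 == 0:
            order = ("treatment", "placebo")
        else:
            order = ("placebo", "treatment")
        schedule[s] = {"period 1": order[0], "period 2": order[1]}
    return schedule

plan = crossover_schedule(["s1", "s2", "s3", "s4"])
# Every subject serves as his or her own control:
assert all(set(p.values()) == {"treatment", "placebo"}
           for p in plan.values())
```

Since each subject is compared with himself or herself, between-subject differences in metabolism, motivation, and so on drop out of the comparison.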
<br />
Had Dr. Hirsch done a double-blind study, an assistant might have randomly assigned the subjects to their groups and kept a record of who is in which group. Dr. Hirsch or another assistant might have weighed all the subjects and kept weight records for each participant. After all the data had been collected, Dr. Hirsch would "unblind" the study and the data for the two groups would be compared. <br />
<br />
The final step in a well-designed study is the analysis of the data. You might think that the scientists should be able to look at the results and see right away whether the crystals did any good. This would only be true if, say, there were hundreds in each group and the experimental group lost 50 pounds on average, while the control group gained 2 pounds. If the study had been designed properly, such results would be extremely unlikely to be a fluke. But what if the experimental group lost 2% more weight than the control group? Would that be statistically significant? To answer that question, scientists turn to statistical formulae. By some formula, a 2% weight loss might be statistically significant. If, however, a 2% weight loss meant 4 ounces over six weeks, most of us would say that even if this is statistically significant it is not important and not worth the money or the risk to use these crystals. The crystals might have some wicked side effect that hasn't yet been discovered. <br />
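One simple way to put a number on "statistically significant" is a permutation test: if the crystals did nothing, the group labels are arbitrary, so we can reshuffle them many times and see how often chance alone produces a difference as large as the one observed. The weight-loss figures below are invented purely for illustration:

```python
import random

def permutation_p_value(experimental, control, n_perm=10_000, seed=0):
    """One-sided p-value: how often does a random relabeling of
    subjects yield a mean difference at least as large as observed?"""
    rng = random.Random(seed)
    observed = (sum(experimental) / len(experimental)
                - sum(control) / len(control))
    pooled = experimental + control
    n = len(experimental)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[:n]) / n
                - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)

# Invented weight losses (lbs) for eight subjects per group:
crystals = [6.1, 4.8, 5.5, 7.0, 4.2, 5.9, 6.4, 5.1]
placebo = [1.2, 0.8, 2.1, 1.5, 0.3, 1.9, 1.1, 0.6]
print(permutation_p_value(crystals, placebo))  # well below 0.05
```

With these made-up numbers virtually no relabeling matches the observed gap, so the difference would count as statistically significant; whether 4 extra pounds is <i>important</i> is a separate question, as the paragraph above notes.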
<br />
The moral of this story is that while testimonials of six people who use crystals and lose weight might have a powerful effect on a television audience, a critical thinker should recognize that without a well-designed control group study, such testimonials do not have much scientific value. <br />
<br />
A critical thinker also knows that information should be put in the proper context, which requires a certain amount of background knowledge. For example, you should know that many well-designed scientific studies get significant results that cannot be replicated at all or in a consistent fashion. If there is a causal relationship between diet crystals and losing weight, it should not work sporadically but consistently, unless, of course, there are so many factors that affect body weight as to make it nearly impossible to isolate the true effectiveness of a single item. In any case, a single study, no matter how well designed or how significant the results, rarely justifies drawing strong conclusions about causal relationships. <br />
<br />
Finally, as mentioned above, there might be some deleterious side effect of these crystals that has not yet been discovered. SprinkleThin™ might help you lose weight but if it kills you in the process, what have you gained? <br />
<br />
<b>Sources </b><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0767919394/roberttoddcarrolA/">Bryson, Bill. 2010. <i>At Home: A Short History of Private Life</i>. Doubleday</a>. <br />
<br />
Dallal, Gerard E. Ph.D. 2000. The Computer-Aided Analysis of Crossover Studies. <<a href="http://www.jerrydallal.com/LHSP/crossovr.htm">http://www.jerrydallal.com/LHSP/crossovr.htm</a>>, accessed 12/20/2012. <br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0393066614/roberttoddcarrolA/">Ernst, Edzard and Simon Singh. 2008. <i>Trick or Treatment: The Undeniable Facts about Alternative Medicine</i>. W. W. Norton & Company. </a><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0155016253/roberttoddcarrolA/">Giere, Ronald. 1998. <i>Understanding Scientific Reasoning</i>, 4th ed. Holt Rinehart, Winston. </a><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=053452530X/roberttoddcarrolA/">Kourany, Janet A. 1998. <i>Scientific Knowledge: Basic Issues in the Philosophy of Science</i>, 2nd ed. Wadsworth Publishing Co. </a><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0345409469/roberttoddcarrolA/">Sagan, Carl. 1995. <i>The Demon-Haunted World: Science as a Candle in the Dark</i>. Random House. </a>Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com12tag:blogger.com,1999:blog-8778506545780706793.post-31236881332491719982012-12-17T06:00:00.000-08:002012-12-17T08:10:29.112-08:00false memories<i>A false memory </i>is a memory that is a distortion of an actual experience or a <a href="http://59ways.blogspot.com/2012/09/confabulation.html">confabulation</a> of an imagined one. Many false memories involve confusing or mixing fragments of memory events, some of which may have happened at different times but which are remembered as occurring together. Many false memories involve an error in source memory. Some involve treating dreams as if they were playbacks of real experiences. Still other false memories are believed to be the result of the prodding, leading, and suggestions of therapists and counselors. Dr. Elizabeth Loftus has shown not only that it is possible to implant false memories, but that it is relatively easy to do so (Loftus 1994).<br />
<br />
<div style="text-align: center;">
<b>source memory</b></div>
<br />
A memory of your mother throwing a glass of milk on your father when in fact it was your father who threw the milk is a false memory based on an actual experience. You may remember the event vividly and be able to "see" the action clearly, but only corroboration by those present can determine whether your memory of the event is accurate. Distortions such as switching the roles of people in one's memory are quite common. Some distortions are quite dramatic, as you will see from the examples given below. <br />
<a name='more'></a><br />
<br />
Many people have vivid and substantially accurate memories of events that are erroneous in one key aspect: the source of the memory. For example:<br />
<blockquote class="tr_bq">
In the 1980 presidential campaign, Ronald Reagan repeatedly told a heartbreaking story of a World War II bomber pilot who ordered his crew to bail out after his plane had been seriously damaged by an enemy hit. His young belly gunner was wounded so seriously that he was unable to evacuate the bomber. Reagan could barely hold back his tears as he uttered the pilot's heroic response: "Never mind. We'll ride it down together." ...this story was an almost exact duplicate of a scene in the 1944 film A Wing and a Prayer. Reagan had apparently retained the facts but forgotten their source (Schacter 1996, p. 287).</blockquote>
An even more dramatic case of <i>source amnesia</i> (also called <i>memory misattribution</i>) is that of the woman who accused memory expert Dr. Donald Thomson of having raped her. Thomson was doing a live interview for a television program just before the rape occurred. The woman had seen the program and "apparently confused her memory of him from the television screen with her memory of the rapist" (Schacter 1996, p. 114). Studies by Marcia Johnson et al. (1979) have shown that the ability to distinguish memory from imagination depends on the recall of source information. <br />
<br />
Tom Kessinger, a mechanic at Elliott's Body Shop in Junction City, Kansas, gave a detailed description of two men he said had rented a Ryder truck like the one used in the 1995 Oklahoma City bombing of the Alfred P. Murrah Federal Building. One of the men looked like Timothy McVeigh, who was later executed for the murder of 168 people, including 19 children under the age of 6. The other wore a baseball cap and a T-shirt and had a tattoo above the elbow on his left arm. That was Todd Bunting, who had rented a truck the day after McVeigh. Kessinger mixed the two memories but was absolutely certain the two men came in together. <br />
<br />
Jean Piaget, the great child psychologist, claimed that his earliest memory was of nearly being kidnapped at the age of two. He remembered details such as sitting in his baby carriage, watching the nurse defend herself against the kidnapper, scratches on the nurse's face, and a police officer with a short cloak and a white baton chasing the kidnapper away. The story was reinforced by the nurse, the family, and others who had heard the story. Piaget was convinced that he remembered the event. However, it never happened. Thirteen years after the alleged kidnapping attempt, Piaget's former nurse wrote to his parents to confess that she had made up the entire story. Piaget later wrote: "I therefore must have heard, as a child, the account of this story...and projected it into the past in the form of a visual memory, which was a memory of a memory, but false" (Tavris 1993). <br />
<br />
Remembering being kidnapped when you were an infant (under the age of three) is a false memory almost by definition. The left inferior prefrontal lobe is undeveloped in infants but is required for long-term <a href="http://www.skepdic.com/memory.html">memory</a>. The elaborate encoding required for classifying and remembering such an event is very unlikely to occur in the infant's brain. <br />
<br />
The brains of infants and very young children are capable of storing <i>fragmented </i>memories, however. Fragmented memories can be disturbing in adults. Schacter notes the case of a rape victim who could not remember the rape, which took place on a brick pathway. The words brick and path kept popping into her mind, but she did not connect them to the rape. She became very upset when taken back to the scene of the rape, though she didn't remember what had happened there (Schacter 1996, p. 232). Whether a fragmented memory of infant abuse can cause significant psychological damage in the adult has not been scientifically established, though it seems to be widely believed by many psychotherapists. <br />
<br />
What is also widely believed by many psychotherapists is that many psychological disorders and problems are due to the repression of memories of childhood sexual abuse. On the other hand, many psychologists maintain that their colleagues doing repressed memory therapy (RMT) are encouraging, prodding, and suggesting false memories of abuse to their patients. Many of the recovered memories are of being sexually abused by parents, grandparents, and ministers. Many of those accused claim the memories are false and have sued therapists for their alleged role in creating false memories. <br />
<br />
It is as unlikely that all recovered memories of childhood sexual abuse are false as that they are all true. What is known about memory--especially that memories are constructions from both real and imagined experiences--should make us aware of how difficult it is to sort out true from distorted or false recollections. However, some consideration should be given to the fact that certain brain processes are necessary for any memories to occur. Thus, memories of infant abuse or of abuse that took place while one was unconscious are unlikely to be accurate. Memories that have been directed by dreams or hypnosis are notoriously unreliable. Dreams are not usually direct playbacks of experience. Furthermore, the data of dreams are generally ambiguous. Hypnosis and interrogation techniques must be used with care not to create memories by suggestion. <br />
<br />
Furthermore, memories are often mixed; some parts are accurate and some are not. Separating the two can be a chore under ordinary circumstances. A woman might have consciously repressed childhood sexual abuse by a neighbor or relative. Some experience in adulthood may serve as a retrieval cue and trigger a memory of the abuse. The memory disturbs her waking life and her dreams. She has nightmares, but now it is her father or grandfather or priest who is abusing her. She enters RMT and within a few months she recalls vividly how her father, mother, grandfather, grandmother, or priest not only sexually abused her but engaged in horrific satanic rituals involving human sacrifices and cannibalism. Where does the truth lie? The patient's memories are real and horrible, even if false. The patient's suffering is real whether the memories are true or false. Families are destroyed whether the memories are true or false. <br />
<br />
Should such memories be taken at face value and accepted as true without any attempt to prove otherwise? Obviously it would be unconscionable to ignore accusations of sexual abuse. Likewise, it is unconscionable to be willing to see lives and families destroyed without at least trying to find out if any part of the memories of sexual abuse are false. It also seems inhumane to encourage patients to recall memories of sexual abuse unless one has a very good reason for doing so. Assuming all or most emotional problems are due to repressed memories of childhood sexual abuse is not a good enough reason to risk harming a patient by encouraging delusional beliefs and damaging familial relationships. A responsible therapist has a duty to help a patient sort out delusion from reality, dreams and confabulations from truth, and real abuse from imagined abuse. If good therapy means the encouragement of delusion as standard procedure, then good therapy may not always be worth it. <br />
<br />
Those who find that it is their duty to determine whether a person has been sexually abused or whether a memory of such abuse is a false memory should be well versed in the current scientific literature regarding memory. They should know that all of us are pliable and suggestible to some degree, but that children are especially vulnerable to suggestive and leading questioning. They should also remember that children are highly imaginative and that just because a child says he or she remembers something does not mean that he or she does. And when children say they do <i>not</i> remember something, questioning them repeatedly until they do "remember" is not good interrogation; it borders on child abuse.<br />
<br />
Investigators, counselors, and therapists should also remind themselves that many charges and memories are heavily influenced by media coverage. People charged with or convicted of crimes have noticed that their chances of gaining sympathy increase if others believe they were abused as children. People with grudges have also noticed that nothing can destroy another person so quickly as being charged with sexual abuse, while at the same time providing the accuser with sympathy and comfort. Emotionally disturbed people are also influenced by what they read, see, or hear in the mass media, including stories of repressed abuse as the cause of emotional problems. An emotionally disturbed adult may accuse another adult of abusing a child, not because there is good evidence of abuse but because the disturbed person imagines or fears abuse. <br />
<br />
<b>Sources</b><br />
<br />
Carroll, Robert Todd. 2012. "<a href="http://www.skepdic.com/repress.html">Repressed Memory Therapy</a>," in <i>The Skeptic's Dictionary</i>, http://www.skepdic.com/repress.html, accessed 12/15/2012.<br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=1573920940/roberttoddcarrolA/">Baker, Robert A. 1992. <i>Hidden Memories: Voices and Visions From Within</i>. Prometheus Books.</a> <br />
<br />
Johnson, M. K., Raye, C. L., Wang, A. Y., & Taylor, T. H. 1979. "Fact and fantasy: The roles of accuracy and variability in confusing imaginations with perceptual experiences." <cite>Journal of Experimental Psychology: Human Learning and Memory</cite>, 5, 229-240.<br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0312141238/roberttoddcarrolA/">Loftus, Elizabeth. 1994. <i>The Myth of Repressed Memory</i>. St. Martin's.</a><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0465075525/roberttoddcarrolA/">Schacter, Daniel L. 1996. <i>Searching for Memory: The Brain, the Mind, and the Past</i>. Basic Books.</a><br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0618040196/roberttoddcarrolA/">Schacter, Daniel L. 2001. <i>The Seven Sins of Memory: How the Mind Forgets and Remembers</i>. Houghton Mifflin Co.</a><br />
<br />
Tavris, Carol. 1993. "Hysteria and the incest-survivor machine," <i>Sacramento Bee</i>, Forum section, January 17.<br />
<br />
Robert Todd Carroll<br />
<br />
<b>Forer effect</b><br />
<br />
The Forer effect refers to the tendency of people to rate sets of statements as highly accurate for them personally even though the statements were not made about them and could apply to many people.<br />
<br />
Psychologist Bertram R. Forer (1914-2000) found that people tend to accept vague and general personality descriptions as uniquely applicable to themselves without realizing that the same description could be applied to many people. Consider the following as if it were given to you as an evaluation of your personality.
<br />
<blockquote class="tr_bq">
You have a need for other people to like and admire you, and yet you tend to be critical of yourself. While you have some personality weaknesses you are generally able to compensate for them. You have considerable unused capacity that you have not turned to your advantage. Disciplined and self-controlled on the outside, you tend to be worrisome and insecure on the inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You also pride yourself as an independent thinker and do not accept others' statements without satisfactory proof. But you have found it unwise to be too frank in revealing yourself to others. At times you are extroverted, affable, and sociable, while at other times you are introverted, wary, and reserved. Some of your aspirations tend to be rather unrealistic. <br />
<a name='more'></a></blockquote>
Forer gave a personality test to his students, ignored their answers, and gave each student the above evaluation (taken from a newsstand astrology column). He asked them to rate the evaluation from 0 to 5, with "5" meaning the recipient felt the evaluation was an "excellent" assessment and "4" meaning the assessment was "good." The class average evaluation was 4.26. That was in 1948. The test has been repeated hundreds of times with psychology students and the average is still around 4.2 out of 5, or 84% accurate. <br />
<br />
In short, Forer convinced people he could successfully read their character. His accuracy amazed his subjects, though his personality analysis was taken from a newsstand astrology column and was presented to people without regard to their sun sign. The Forer effect seems to explain, in part at least, why so many people think that <a href="http://www.skepdic.com/pseudosc.html">pseudosciences</a> "work". <a href="http://www.skepdic.com/astrolgy.html">Astrology</a>, <a href="http://www.skepdic.com/astrotherapy.html">astrotherapy</a>, <a href="http://www.skepdic.com/biorhyth.html">biorhythms</a>, <a href="http://www.skepdic.com/cartoma.html">cartomancy</a>, <a href="http://www.skepdic.com/palmist.html">chiromancy</a>, the <a href="http://www.skepdic.com/enneagr.html">enneagram</a>, <a href="http://www.skepdic.com/divinati.html">fortune telling</a>, <a href="http://www.skepdic.com/graphol.html">graphology</a>, <a href="http://www.skepdic.com/rumpology.html">rumpology</a>, etc., seem to work because they seem to provide accurate personality analyses. Scientific studies of these pseudosciences demonstrate that they are not valid personality assessment tools, yet each has many satisfied customers who are convinced they are accurate. <br />
<br />
The most common explanations given to account for the Forer effect are in terms of hope, <a href="http://www.skepdic.com/wishfulthinking.html">wishful thinking</a>, vanity, and the tendency to try to make sense out of experience. Forer's own explanation was in terms of human gullibility. People tend to accept claims about themselves in proportion to their desire that the claims be true rather than in proportion to the empirical accuracy of the claims as measured by some non-subjective standard. We tend to accept questionable, even false statements about ourselves, if we deem them positive or flattering enough. We will often give very liberal interpretations to vague or inconsistent claims about ourselves in order to make sense out of the claims. Subjects who seek counseling from psychics, mediums, fortune tellers, mind readers, graphologists, etc., will often ignore false or questionable claims and, in many cases, by their own words or actions, will provide most of the information they erroneously attribute to a pseudoscientific counselor. Many such subjects often feel their counselors have provided them with profound and personal information. Such <a href="http://www.skepdic.com/subjectivevalidation.html">subjective validation</a>, however, is of little scientific value. <br />
<br />
Psychologist Barry Beyerstein believes that "hope and uncertainty evoke powerful psychological processes that keep all occult and pseudoscientific character readers in business." We are constantly trying "to make sense out of the barrage of disconnected information we face daily" and "we become so good at filling in to make a reasonable scenario out of disjointed input that we sometimes make sense out of nonsense." We will often fill in the blanks and provide a coherent picture of what we hear and see, even though a careful examination of the evidence would reveal that the data is vague, confusing, obscure, inconsistent or unintelligible. Psychic mediums, for example, will often ask so many disconnected and ambiguous questions in rapid succession that they give the impression of having access to personal knowledge about their subjects. In fact, the psychic need not have any insights into the subject's personal life for the subject will willingly and unknowingly provide all the associations and validations needed. Psychics are aided in this process by using <a href="http://www.skepdic.com/coldread.html">cold reading</a> techniques. <br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=1573927988/roberttoddcarrolA/">David Marks and Richard Kammann</a> argue:<br />
<blockquote class="tr_bq">
....once a belief or expectation is found, especially one that resolves uncomfortable uncertainty, it biases the observer to notice new information that confirms the belief, and to discount evidence to the contrary. This self-perpetuating mechanism consolidates the original error and builds up an overconfidence in which the arguments of opponents are seen as too fragmentary to undo the adopted belief.</blockquote>
Having a pseudoscientific counselor go over a character assessment with a client is fraught with snares that can easily lead the most well-intentioned of persons into error and delusion. <br />
<br />
<a href="http://www.quackwatch.org/01QuackeryRelatedTopics/Tests/grapho.html">Barry Beyerstein</a> suggests the following test to determine whether the apparent validity of the pseudosciences mentioned above might not be due to the Forer effect, <a href="http://www.skepdic.com/confirmbias.html">confirmation bias</a>, or other psychological factors. (Note: the proposed test also uses subjective or personal validation and is not intended to test the accuracy of any personality assessment tool, but rather is intended to counteract the tendency to <a href="http://www.skepdic.com/selfdeception.html">self-deception</a> about such matters.)<br />
<blockquote class="tr_bq">
…a proper test would first have readings done for a large number of clients and then remove the names from the profiles (coding them so they could later be matched to their rightful owners). After all clients had read all of the anonymous personality sketches, each would be asked to pick the one that described him or her best. If the reader has actually included enough uniquely pertinent material, members of the group, on average, should be able to exceed chance in choosing their own from the pile.</blockquote>
<div style="text-align: left;">
Beyerstein notes that "no occult or pseudoscientific character reading method…has successfully passed such a test." </div>
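Beyerstein's matching test has a clear chance baseline: if the readings contain nothing uniquely pertinent, each client's pick among the anonymized profiles is no better than random, so about 1/N of clients should pick their own. A minimal simulation of that null baseline (pure Python; the group size and trial count are made up for illustration):

```python
import random

random.seed(0)
# Suppose each client chooses from 20 anonymized profiles. If the
# profiles carry no uniquely pertinent information, the choice is
# effectively random, so the chance of picking one's own is 1/20 = 0.05.
N, trials = 20, 100_000          # trials = many simulated clients choosing
hits = sum(random.randrange(N) == 0 for _ in range(trials))
print(hits / trials)             # hovers around 0.05; a valid method must beat this
```

Any character-reading method worth taking seriously would have to produce a hit rate reliably above that 1/N floor.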
<br />
The Forer effect, however, only partially explains why so many people accept as accurate occult and pseudoscientific character assessment procedures. <a href="http://www.skepdic.com/coldread.html">Cold reading</a>, <a href="http://www.skepdic.com/communalreinforcement.html">communal reinforcement</a>, and <a href="http://www.skepdic.com/selectiv.html">selective thinking</a> also underlie these delusions. Also, it should be admitted that while many of the assessment claims in a pseudoscientific reading are vague and general, some are specific. Some of those that are specific actually apply to large numbers of people and some, by chance, will be accurate descriptions of a select few. A certain number of specific assessment claims should be expected by chance. <br />
<br />
There have been numerous studies of the Forer effect. Dickson and Kelly (1985) examined many of them and concluded that, overall, there is significant support for the claim that subjects perceive Forer profiles as accurate (D. H. Dickson and I. W. Kelly, "The 'Barnum Effect' in Personality Assessment: A Review of the Literature," <i>Psychological Reports</i>, 57, 367-382). Furthermore, there is an increased acceptance of the profile if it is labeled "for you." Favorable assessments are "more readily accepted as accurate descriptions of subjects' personalities than unfavorable" ones. But unfavorable claims are "more readily accepted when delivered by people with high perceived status than low perceived status." It has also been found that subjects can generally distinguish between statements that are accurate (but would be so for large numbers of people) and those that are unique (accurate for them but not applicable to most people). There is also some evidence that personality variables such as neuroticism, need for approval, and authoritarianism are positively related to belief in Forer-like profiles. Unfortunately, most Forer studies have been done only on college students.<br />
<br />
<b>optimistic bias</b><br />
<br />
The <i>optimistic bias</i> is an expression used by Daniel Kahneman to describe the idea that "most of us view the world as more benign than it really is, our own attributes as more favorable than they truly are, and the goals we adopt as more achievable than they are likely to be." Furthermore, most of us have an unrealistic view about predicting the future: we think we're much better at it than we really are. Study after study has found that <a href="http://59ways.blogspot.com/2012/08/self-deception.html">self-deception</a> is pervasive: the vast majority of people think they
are above average, less biased, more congenial, less susceptible to
improper influence, and more competent than the majority of their peers.
<br />
<br />
Kahneman thinks that many of us suffer from (or are blessed with, depending on how you see things) "optimistic overconfidence." He writes that "the optimistic bias may well be the most significant of the cognitive biases. Because optimistic bias can be both a blessing and a risk, you should be both happy and wary if you are temperamentally optimistic." (<i>Thinking, Fast and Slow</i>, p. 255).<br />
<a name='more'></a><br />
<br />
Generally speaking, the optimistic bias is a good thing. Life is much more pleasant for an optimist than for a pessimist.<br />
<blockquote class="tr_bq">
Optimists are normally cheerful and happy, and therefore popular; they are resilient in adapting to failures and hardships, their chances of clinical depression are reduced, their immune system is stronger, they take better care of their health, they feel healthier than others and are in fact likely to live longer....Optimistic individuals play a disproportionate role in shaping our lives. Their decisions make a difference; they are the inventors, the entrepreneurs, the political and military leaders....(Kahneman, <i>Thinking, Fast and Slow,</i> 255-256). </blockquote>
That may be true for mature adults, but we should not forget that many teenagers are optimists and many of them haven't <a href="http://www.npr.org/templates/story/story.php?storyId=124119468">fully developed frontal lobes, which impairs their ability to make good judgments.</a> The optimism of young people often drives them to engage in risky, dangerous behaviors, especially regarding sex, alcohol, and other drugs. <br />
<br />
Even for adults, however, there are times when too much optimism can be delusional. Optimistic bias can lead to unrealistic evaluation of prospects and overconfidence in taking risks. Kahneman notes that even though only 35% of small businesses survive for five years in the U.S., entrepreneurs rate the chances of success of any business at 60% and the chances of success for their own enterprise at 81%! One-third of entrepreneurs think their chance of failing is zero. While it is certainly important for someone in business to be confident, it is also important to be realistic.<br />
<br />
The optimistic bias commonly gives rise to <a href="http://59ways.blogspot.com/2012/08/illusion-of-skill.html">the illusion of skill</a>. On the other hand, without optimism not too many projects would get off the ground nor would many risks be taken. Still, after one has set one's goals and objectives, collected and studied carefully a set of relevant cases similar to one's own, and developed a plan of action, one should try to debias excessive optimism. How? One way is to force yourself to consider what might go wrong. At least, this was one conclusion of a study by Sara Lichtenstein, Baruch Fischhoff, and Lawrence D. Phillips ("Calibration of probabilities: The state of the art in 1980," in <a href="http://www.amazon.com/exec/obidos/ISBN=0521284147/roberttoddcarrolA/"><i>Judgment under uncertainty: Heuristics and biases</i></a>, eds. Daniel Kahneman, Paul Slovic, and Amos Tversky. Part VI of this book is devoted to the topic of overconfidence.) It may not come naturally, but considering what might cause you to fail just might save you from plowing ahead in <a href="http://en.wikipedia.org/wiki/Waist_Deep_in_the_Big_Muddy">the Big Muddy</a> as you and your acolytes sink into the quicksand.<br />
<br />
Some people make unrealistic risk assessments because they are ignorant: they lack the necessary knowledge to make a realistic judgment. Such people often engage in risky sexual behaviors and develop unhealthy habits. These folks are optimistic about their chances, say, of not getting AIDS from unprotected sex or not getting cancer from smoking. It is possible that their optimism is due to willful ignorance rather than to an unrealistically benign view of the world around them, though willful ignorance and an overly benign view of the world are not mutually exclusive.<br />
<br />
Many teenage drivers have an unrealistic view of the risks involved in drinking alcohol (or using other drugs) and driving. Many Driver Education classes have used <a href="http://www.amazon.com/Classic-Drunk-Driving-Alcohol-Films/dp/B000IT51WU">DUI Shock Films</a> to scare young drivers. These films are known for their gory realism of DUI car crashes that kill and mutilate people. Is there strong evidence that such films have a significant effect on the optimistic bias of teen drivers that leads to thousands of deaths in DUI car crashes each year? I couldn't find any, and I'm not that optimistic about the evidence being out there.<br />
<br />
<b>magical thinking</b><br />
<blockquote class="tr_bq">
<div style="text-align: center;">
<span class="quote">...<i>magical thinking is "a fundamental dimension of a child's thinking."</i></span> --<a href="http://www.amazon.com/exec/obidos/ISBN=0805805087/roberttoddcarrolA/">Zusne and Jones</a></div>
</blockquote>
<br />
Magical thinking is a belief in the interconnectedness of all things through forces and powers that transcend physical connections. Magical thinking invests special powers and forces in things and sees them as symbols on various levels. According to anthropologist Dr. Phillips Stevens Jr., "the vast majority of the world's peoples ... believe that there are real connections between the symbol and its referent, and that some real and potentially measurable power flows between them." He believes there is a neurobiological basis for this, though the specific content of any symbol is culturally determined. ("<a href="http://www.csicop.org/si/show/magical_thinking_in_complementary_and_alternative_medicine">Magical Thinking in Complementary and Alternative Medicine</a>," Skeptical Inquirer, 2001, November/December.)<br />
<a name='more'></a><br />
<br />
One of the driving principles of magical thinking is the notion that things that resemble each other are causally connected in some way that defies scientific testing (the so-called law of similarity, that like produces like, that effect resembles cause). Another driving principle of magical thinking is the belief that "things that have been either in physical contact or in spatial or temporal association with other things retain a connection after they are separated" (the so-called law of contagion) (<a href="http://www.amazon.com/exec/obidos/ISBN=0684826305/roberttoddcarrolA/">James George</a> <a href="http://www.amazon.com/exec/obidos/ISBN=0684826305/roberttoddcarrolA/">Frazer, <i>The Golden Bough: A Study in Magic and Religion</i></a>; <a href="http://www.csicop.org/si/show/magical_thinking_in_complementary_and_alternative_medicine">Stevens</a>). Think of relics of saints that are supposed to transfer spiritual energy. Think of <a href="http://www.skepdic.com/psychdet.html">psychic detectives</a> claiming that they can get information about a missing person by touching an object that belongs to the person (<a href="http://www.skepdic.com/psychomet.html">psychometry</a>). Or think of the <a href="http://www.skepdic.com/animalquackers.html">pet psychic</a> who claims she can read your dog's mind by looking at a photo of the dog. Or think of Rupert Sheldrake's <a href="http://www.skepdic.com/morphicres.html">morphic resonance</a>, the idea that there are mysterious telepathy-type interconnections between organisms and collective memories within species. (Coincidentally, Sheldrake also studies <a href="http://www.skepdic.com/esp.html">psychic dogs</a>.)<br />
<br />
According to psychologist <a href="http://www.csicop.org/si/show/belief_engine">James Alcock</a>, "'Magical thinking' is the interpreting of two closely occurring events as though one caused the other, without any concern for the causal link. For example, if you believe that crossing your fingers brought you good fortune, you have associated the act of finger-crossing with the subsequent welcome event and imputed a causal link between the two." In this sense, magical thinking is the source of many <a href="http://www.skepdic.com/superstition.html">superstitions</a>. Alcock notes that because of our neurobiological makeup we are prone to magical thinking and that therefore critical thinking is often at a disadvantage. <br />
<br />
Zusne and Jones (<a href="http://www.amazon.com/exec/obidos/ISBN=0805805087/roberttoddcarrolA/"><i>Anomalistic Psychology: A Study of Magical Thinking</i> 2nd edition,</a> 1989, p. 13) define magical thinking as the belief that <br />
<blockquote class="tr_bq">
(a) transfer of energy or information between physical systems may take place solely because of their similarity or contiguity in time and space, or (b) that one's thought, words, or actions can achieve specific physical effects in a manner not governed by the principles of ordinary transmission of energy or information.</blockquote>
Three of the more obvious examples of magical thinking are <a href="http://www.skepdic.com/astrology.html">astrology</a>, Jung's notion of <a href="http://www.skepdic.com/jung.html">synchronicity</a>, and Hahnemann's notion of <a href="http://www.skepdic.com/homeo.html">homeopathy</a>. Other examples would be <a href="http://www.skepdic.com/akinesiology.html">applied kinesiology</a>, <a href="http://www.skepdic.com/graphol.html">graphology</a> (<a href="http://www.pbs.org/safarchive/3_ask/archive/qna/3282_bbeyerstein.html">Beyerstein</a>), <a href="http://www.skepdic.com/palmist.html">palmistry</a>, and <a href="http://www.skepdic.com/kinesis.html">psychokinesis</a>. <br />
<br />
Other sciences have led us away from superstition and magical thinking; <a href="http://www.skepdic.com/parapsy.html">parapsychology</a>, on the other hand, tries to lead us into it. <a href="http://www.amazon.com/exec/obidos/ISBN=0062515020/roberttoddcarrolA/">Dean Radin</a>, a foremost apologist for parapsychology, notes that "the concept that mind is primary over matter is deeply rooted in Eastern philosophy and ancient beliefs about magic." However, instead of saying that it is now time to move forward and give up the magical thinking of childhood, he rebuffs "Western science" for rejecting such beliefs as "mere superstition."<br />
<br />
<b>causal fallacies</b><br />
<br />
A causal fallacy involves making the claim that something (call it 'x') causes something else (call it 'y') when the evidence presented is insufficient to establish either that x is a necessary condition for y or that x is a sufficient condition for y. Causal fallacies usually involve either post hoc reasoning or jumping to a conclusion based on finding a correlation between x and y. The <a href="http://59ways.blogspot.com/2012/06/post-hoc-fallacy.html">post hoc fallacy</a> (that x causes y because x came before y) is discussed elsewhere. Here I'll focus on errors due to misapplied correlation.<br />
<br />
Let's start with an example of some good causal reasoning. The claim that smoking causes lung cancer is based on data that demonstrate to a high degree of probability that had a person with lung cancer not smoked, they would not have the kind of cancer they have today. We describe the relationship of smoking to lung cancer by saying that <i>smoking is a necessary condition for lung cancer</i>. Expressing it this way can be misleading, though, so some explanation is required. Certainly, people who have never smoked can get lung cancer, so in that sense it is not necessary to smoke in order to get lung cancer. When we say that smoking is a necessary condition for lung cancer we mean that for the particular cancer a person has, smoking was necessary. In simpler terms, this means that had the person not smoked they would not have gotten the kind of lung cancer that is caused by smoking.<br />
<a name='more'></a><br />
<br />
How would one go about showing that smoking causes cancer, i.e., is a necessary condition for cancer? You begin by making predictions and testing them. For example, if smoking causes cancer then you would expect that if you randomly selected 100,000 people, divided them into smokers and non-smokers and observed them over a 20-year period, you should find a significantly greater number of lung cancer cases in the smoking group. Significance is measured by a statistical formula that basically says that it is highly unlikely that the difference between the two groups is due to chance. Many more predictions should be tested before jumping to the conclusion that smoking causes lung cancer. These predictions might take into account how long people have smoked; how many cigarettes a day they smoked; if they quit smoking, how long has it been; etc.<br />
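The comparison described above can be sketched with a two-proportion z-test, which asks how likely it is that the difference between the smoking and non-smoking groups is due to chance. This is a minimal illustration, not the method epidemiologists actually used, and the case counts below are invented for the example:

```python
import math

def two_proportion_z(cases_a, n_a, cases_b, n_b):
    """Two-proportion z-test: how surprising is the difference in rates
    between group A and group B if chance alone were at work?"""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    p_pool = (cases_a + cases_b) / (n_a + n_b)   # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical 20-year follow-up of 100,000 people: 300 lung cancer
# cases among 50,000 smokers vs. 30 among 50,000 non-smokers.
z, p = two_proportion_z(300, 50_000, 30, 50_000)
print(round(z, 1), p < 0.001)   # a very large z; chance is a wildly implausible explanation
```

A z-score this large means the gap between the groups is many standard errors wide, which is exactly what "highly unlikely to be due to chance" amounts to.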
<br />
Establishing causality requires more than just finding a statistic that is consistent with the hypothesis that x causes y, however. For example, you might find that 30 out of 30 men in a West Virginia lung cancer ward all worked in the coal mines. From that you might infer that coal dust causes lung cancer. But what if you also find out that all 30 were smokers? That makes the issue a bit more complicated. Perhaps coal dust contributes to lung cancer or perhaps smoking alone can account for these cancers. Further predictions would have to be tested to try to tease out the role, if any, of coal dust in lung cancer.<br />
<br />
A correlation between x and y must exist if x and y are causally related. That is, it must be the case that x is generally followed by y or that y is generally preceded by x, that as x increases or decreases so does y, or that as x increases, y decreases. Finding a correlation between x and y, however, does not mean they are causally related. There are very good correlations between age and shoe size, hat size, height, and weight, but age doesn't cause any of these things.<br />
<br />
One problem with claiming a causal relationship based solely on a strong correlation is that in addition to a causal connection there are at least three other plausible explanations for the correlation. One, it could be a fluke. Two, there might be a causal relationship, but the correlation can't tell you which is the cause and which the effect. For example, you might find a good correlation between the increase in sex education classes for high school students and an increase in teenage pregnancies. But you have no way of knowing just from the correlation whether this is a coincidence, whether the sex ed classes stimulated interest in sexual activity leading to increased pregnancies, or whether the increase in pregnancies prompted school officials to add more sex education classes.<br />
<br />
Three, there might be a causal relation involved but not <i>x causes y</i> or <i>y causes x</i>. Rather, it might be the case that <i>z causes both x and y,</i> or that <i>z and x cause y </i>(or <i>z and y cause x</i>). For example, there might be a good correlation between taking birth control pills and developing blood clots in women of a certain age group, but when controlled for smoking the correlation between taking birth control pills and developing blood clots might go away. In a study on this issue, it was found that smoking plus taking birth control pills increased the chances of blood clots more than just smoking did, while those who did not smoke but took the pill had no greater frequency of blood clots than women in general (Ronald Giere, <a href="http://www.amazon.com/exec/obidos/ISBN=015506326X/roberttoddcarrolA/"><i>Understanding Scientific Reasoning</i></a>, [1996] 297-303).<br />
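The pill-and-clots example can be illustrated with made-up numbers. The counts below are not from the Giere study; they are constructed so that clot risk depends only on smoking, while pill use is more common among smokers, which is enough to produce a spurious pooled correlation between the pill and clots:

```python
# Hypothetical 2x2x2 counts: strata[smoker][on_pill] = (clots, total)
strata = {
    True:  {True: (30, 1000), False: (3, 100)},     # smokers: 3% clot rate
    False: {True: (1, 1000),  False: (10, 10000)},  # non-smokers: 0.1%
}

def rate(clots, total):
    return clots / total

# Within each smoking stratum, the pill makes no difference at all...
for smoker, arms in strata.items():
    print(f"smoker={smoker}: pill {rate(*arms[True]):.3f} "
          f"vs no pill {rate(*arms[False]):.3f}")

# ...but pooled over both strata, pill users look far worse,
# simply because pill users are disproportionately smokers.
pooled = {}
for on_pill in (True, False):
    clots = sum(strata[s][on_pill][0] for s in strata)
    total = sum(strata[s][on_pill][1] for s in strata)
    pooled[on_pill] = clots / total
print(f"pooled: pill {pooled[True]:.4f} vs no pill {pooled[False]:.4f}")
```

Stratifying on the third variable (here, smoking) is exactly the "controlling" the paragraph above describes.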
<br />
On the other hand, if x and y are causally related there should be a good correlation between them, which allows us to make predictions that test the hypothesis <i>x causes y</i>. For example, if cell phone use causes brain tumors then we should find a significantly greater number of brain tumors among cell phone users compared to those who don't use cell phones. No data so far support this claim, which strongly indicates that cell phone use isn't a significant causal factor for brain tumors. However, had we found a strong correlation between cell phone use and brain tumors, that would <i>confirm </i>our hypothesis but it would not be enough evidence to establish a causal connection. Before jumping to the conclusion that cell phones cause brain tumors we would need to do two things: one, rule out other plausible causes of the correlation, and two, make more testable predictions that would ideally be more rigorous than the original test.<br />
<br />
Many people consider the <a href="http://www.skepdic.com/control.html">randomized control group study</a> to be the gold standard in science, especially in medicine. But since there are many variables that can affect the outcome of a controlled study, it is important that we not put too much faith in a single study, especially if the study is small and involves multiple outcomes. For example, the <a href="http://www.skepdic.com/sichertarg.html">Sicher-Targ distant healing study</a> involved only 40 subjects and looked at 23 possible outcomes for AIDS patients, some of whom were prayed for and some of whom weren't prayed for by a special group of praying people. Finding a significant correlation between a few of the 23 possible outcomes, even for a small experimental group of 20 subjects, is expected by the laws of chance and should not be taken to indicate any kind of causal relationship between the praying and the few outcomes that correlated significantly. <br />
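The multiple-outcomes problem is easy to quantify. Assuming, for simplicity, that the 23 outcomes are independent and each is tested at the conventional 0.05 level:

```python
# Chance of at least one "significant" result among 23 independent
# outcomes when nothing real is going on:
alpha, k = 0.05, 23
p_at_least_one = 1 - (1 - alpha) ** k
print(f"P(at least one false positive) = {p_at_least_one:.2f}")

# A common (conservative) fix, the Bonferroni correction, demands
# p < alpha/k for each individual outcome instead:
print(f"Bonferroni-corrected threshold: {alpha / k:.4f}")
```

In other words, a study like Sicher-Targ should be *expected* to turn up a few "significant" correlations by chance alone; real outcomes are rarely fully independent, but the qualitative point stands.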
<br />
Probably the most common causal error is to conclude that because test results confirm a hypothesis, the hypothesis is established to a significant degree of probability. As long as other plausible explanations can't be ruled out, finding that an experimental result is just what you predicted if your hypothesis is correct means only that your hypothesis can't be ruled out. A prime example of this kind of faulty reasoning permeates more than a century of experimentation by parapsychologists in their quest to prove ESP and psychokinesis. I have discussed this elsewhere under the heading of <i><a href="http://www.skepdic.com/psiassumption.html">the psi assumption</a>.</i><br />
<br />
Briefly, the <a href="http://www.skepdic.com/psi.html">psi</a> assumption is the assumption that any significant departure from the laws of chance in a test of <a href="http://www.skepdic.com/psychic.html">psychic</a> ability is evidence that something <a href="http://www.skepdic.com/anomaly.html">anomalous</a> or <a href="http://www.skepdic.com/paranormal.html">paranormal</a> has occurred. Departure from the laws of chance would be <i>consistent</i> with the psi hypothesis, but until all other plausible explanations have been ruled out, it is hasty to conclude that evidence for psi has been found. There are several plausible explanations for the data in psi experiments. Cheating by subjects is commonplace. Fraud by experimenters is rare, but it has happened (e.g., the <a href="http://skepdic.com/soalgoldney.html">Soal-Goldney experiment</a> [1941-1943]). Methodological errors and sloppiness have occurred in experiments that have been hailed as slam-dunk proof by parapsychologists like <a href="http://www.skepdic.com/refuge/radin1.html">Dean Radin</a>. For example, Susan Blackmore was appalled when she visited the lab of Carl Sargent, whose work played a major role in the <a href="http://www.skepdic.com/ganzfeld.html">gan</a><a href="http://www.skepdic.com/ganzfeld.html">zfeld s</a><a href="http://www.skepdic.com/ganzfeld.html">tudies of Bem and Honorton</a>.<br />
<br />
<blockquote class="tr_bq">
...I went to visit Sargent's laboratory in Cambridge where some of the best ganzfeld results were then being obtained. Note that in Honorton's database nine of the twenty-eight experiments came from Sargent's lab. What I found there had a profound effect on my confidence in the whole field and in published claims of successful experiments.</blockquote>
<blockquote class="tr_bq">
These experiments, which looked so beautifully designed in print, were in fact open to fraud or error in several ways, and indeed I detected several errors and failures to follow the protocol while I was there. I concluded that the published papers gave an unfair impression of the experiments and that the results could not be relied upon as evidence for psi. (<a href="http://www.susanblackmore.co.uk/Articles/JSPR%201987.htm">Blackmore 1987</a>)</blockquote>
<br />
Other errors, such as <a href="http://www.skepdic.com/sensoryleakage.html">sensory leakage</a> and <a href="http://www.skepdic.com/experimentereffect.html">experimenter effects</a>, questionable methodologies such as <a href="http://www.skepdic.com/displacement.html">displacement</a> and <a href="http://www.skepdic.com/psimiss.html">psi missing</a>, and misapplication of statistics must all be considered before jumping to the conclusion that a statistic that is unlikely due to chance according to some arbitrary formula is proof of anything paranormal. <br />
<br />
Another area where it is common to mistake correlation for causation is in medicine. Two examples stand out: both involve using MRIs. When the MRI became widely available in the late 1980s, doctors began using them on patients complaining of severe back pain. The MRIs showed many things, including spinal discs that appeared degenerated in people with severe back pain. Doctors concluded that the pain was being caused by the abnormalities in the discs. Prior to the use of the MRI, the most common treatment for back pain was no treatment at all. Most back pain goes away on its own. After the introduction of the MRI, various kinds of surgical procedures were done to alleviate the pain assumed to be caused by the bulging or herniated disc. Before jumping to the conclusion that the abnormal discs were causing the back pain, scientists should have done MRIs on people who don't have any back pain. This was finally done in 1994 on ninety-eight people. They went to the doctor, got an MRI, and two-thirds of them had abnormalities in their discs. Maybe the bulging discs had nothing to do with the pain (Lehrer, Jonah. <a href="http://www.amazon.com/exec/obidos/ISBN=0547247990/roberttoddcarrolA/">How We Decide</a>, pp. 160-163). MRIs, it turns out, "find abnormalities in everybody" (<a href="http://www.amazon.com/exec/obidos/ISBN=0547053649/roberttoddcarrolA/">How Doctors Think</a>, Jerome Groopman).<br />
<br />
The other example involves the overuse of MRIs for injured athletes. Baseball pitchers with injured arms, for example, are often advised to have surgery based on an MRI that finds abnormalities. <a href="http://www.nytimes.com/2011/10/29/health/mris-often-overused-often-mislead-doctors-warn.html?pagewanted=1&_r=1">Dr. James Andrews</a>, a widely known sports medicine orthopedist in Gulf Breeze, Fla., wanted to test his suspicion that MRIs might be misleading. He scanned the shoulders of 31 perfectly healthy professional baseball pitchers. The pitchers were not injured and had no pain. But the MRIs found abnormal shoulder cartilage in 90 percent of them and abnormal rotator cuff tendons in 87 percent. If such abnormalities are that common in pain-free shoulders, then an abnormality seen on an injured pitcher's MRI is not necessarily the cause of his pain.<br />
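A rough Bayesian sketch shows why such base rates matter. The 90 percent figure comes from the Andrews scans described above; the other two numbers are pure assumptions for illustration, not data from any study:

```python
# How much does an "abnormal" MRI finding really tell us?
p_abnormal_given_healthy = 0.90  # from Andrews' 31 healthy pitchers
p_abnormal_given_injured = 0.98  # assumption: nearly all injured show it
prior_injured = 0.50             # assumption: even odds before the scan

# Bayes' theorem: P(injured | abnormal)
posterior = (p_abnormal_given_injured * prior_injured) / (
    p_abnormal_given_injured * prior_injured
    + p_abnormal_given_healthy * (1 - prior_injured)
)
print(f"P(injury-related | abnormal MRI) = {posterior:.2f}")
```

Because the finding is almost as common in healthy shoulders as in injured ones, the posterior barely moves from the prior: the scan, by itself, is nearly uninformative.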
<br />
Epidemiologists are particularly prone to making hasty judgments about causal connections based on correlations. An epidemiologist describes the problem:<br />
<br />
<blockquote class="tr_bq">
...there is an ongoing national study of a chemical in plastic that is in EVERYTHING and EVERYONE. It is ubiquitous and it is a chemical, so something bad may be happening to us. So epidemiologists are asking what diseases might be associated with this chemical. And true to form, they are looking at every disease possible…exploring hundreds of diseases known to man to see if there is a statistical link to the chemical. Even before they start, we know they will find one or many links. If one disease is found that appears to be linked and the calculated statistic of p < or = to 0.05 is present, then the press release goes out that the chemical causes some horrible cancer or some toes to fall off. Everyone panics and goes to REI to buy chemical free plastic water bottles. Of course, the call will be for further studies. Those studies show the findings can’t be replicated. But those follow-up negative studies don’t get published and only make page 15 in the newspaper rather than page one. AND the damage is already done as the plastic industry tanks, the toe falling off attorneys are lining up and everyone starts limping for effect. ("<a href="http://www.askguilfordhealth.com/?p=475">Problems in Epidemiology land" by Dr. Ward Robinson</a>)</blockquote>
<br />
Finally, there are some parapsychologists who have found correlations between what they consider to be important events that catch the attention of many people (such as the death of a princess) and blips on random number generators (RNGs). These folks call their work the <a href="http://www.skepdic.com/globalconsciousness.html">global consciousness project</a> and they are led by people like Dean Radin and Roger Nelson. Radin and Nelson think that when groups of people focus their minds on the same thing, they influence “the world at large” and this influence is shown by blips on RNG machines. In addition to basing a causal connection on nothing but a correlation, these fellows are guilty of <a href="http://59ways.blogspot.com/2012/06/selection-bias.html">selection bias</a>, a problem that plagues epidemiologists as well.<br />
negativity bias<blockquote class="tr_bq">
"<i><span style="color: #0b5394;">The evil that men do lives after them; The good is oft interred with their bones</span>.</i>" --Marc Antony, <i>Julius Caesar</i> by Shakespeare (act 3, scene ii)</blockquote>
<blockquote class="tr_bq">
<i>"<span style="color: #0b5394;">I hate losing more than I love winning</span></i>." --Billy Beane</blockquote>
<div style="text-align: center;">
__________</div>
<div style="text-align: center;">
</div>
<blockquote class="tr_bq">
"Brief contact with a cockroach will usually render a delicious meal inedible. The inverse phenomenon—rendering a pile of cockroaches on a platter edible by contact with one’s favorite food—is unheard of." So begins a classic paper by Paul Rozin and Edward B. Royzman: "<a href="http://dionysus.psych.wisc.edu/lit/articles/RozinP2001a.pdf">Negativity Bias, Negativity Dominance, and Contagion</a>" (2001).</blockquote>
<br />
You might think you're weird when a thousand good things happen but you focus on the one bad thing. You're not. That's the way our brains are hardwired. We're designed by nature to pay more attention and react more quickly and more strongly to negative than to positive news. One salient misdeed by a person will often outweigh years of good works. Years of building up a positive image can be destroyed in an instant by a single misstep. This tendency to give more weight to the negative is called <i>negativity bias </i>and is defined as <a href="http://www.bsos.umd.edu/psyc/woodward/lab/AW_Pubs/VaishGrossmanWoodward08.pdf">"the propensity to attend to, learn from, and use negative information far more than positive information."</a> Our brain evolved to react more quickly to fear than to hope, to respond to a threat more quickly and more intensely than to an opportunity for pleasure. And this trait has carried over into modern times in ways that are not always beneficial.<br />
<a name='more'></a><br />
A friend of mine is a headhunter for executives in the field of education. His latest job ended with the Board of a college split between two candidates for the position of president. Their solution? Each board member would call someone who works with one of the candidates and ask that person questions about the candidate. The Board also plans to ask the former president of the college for his opinion on the finalists. It is very likely that one negative comment about either candidate will outweigh several positive ones, and that a single negative comment, even if it is just a personal opinion or untrue (it is unlikely any investigation will be made to verify what Board members are told over the telephone), will doom one candidate or the other.<br />
<br />
Daniel Kahneman writes:<br />
<blockquote class="tr_bq">
The brains of humans and other animals contain a mechanism that is designed to give priority to bad news. By shaving a few hundredths of a second from the time needed to detect a predator, this circuit improves the animal’s odds of living long enough to reproduce. (<i><a href="http://www.amazon.com/exec/obidos/ISBN=0374275637/roberttoddcarrolA/">Thinking, Fast and Slow</a></i>, p. 301.)</blockquote>
However, negativity bias makes us vulnerable to manipulation by those who would play on our fears. For example, National Security Advisor Condoleezza Rice may not have had any evidence that Saddam Hussein had weapons of mass destruction, but she could put the fear of god into many people just by warning us that the <a href="http://www.huffingtonpost.com/rick-hanson-phd/be-mindful-not-intimidate_b_753646.html">"smoking gun of evidence for WMDs in Iraq could come in the form of a mushroom cloud." </a><br />
<br />
<i>Loss aversion </i>is another way that negativity bias manifests itself. Roy Baumeister and his colleagues write in "Bad Is Stronger Than Good":<br />
<blockquote class="tr_bq">
Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good. The self is more motivated to avoid bad self-definitions than to pursue good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones. (Quoted in Daniel Kahneman's <i>Thinking, Fast and Slow, </i>p. 302.)</blockquote>
Potential losses affect us more deeply than potential gains. This can lead to irrational behavior, as when we pass up an opportunity to benefit either financially or psychologically because we are afraid to take the risk of a loss. Long-term investors who put large chunks of their portfolio in bonds are a prime example. Over the past 80 years, stocks have provided a 6.5% return (adjusted for inflation), while bonds have returned 0.5% (Lehrer, Jonah. <i><a href="http://www.amazon.com/exec/obidos/ISBN=0547247990/roberttoddcarrolA/">How We Decide</a></i>, p. 77). Bonds are considered a safer investment because there is a greater chance of losing money in stocks. For many people, the chance of losing money has a larger effect on their decision making than the chance of gaining money by investing in what may be riskier in the short run but more profitable in the long run. Most people, if offered a chance to win $150 or lose $100 on a coin toss, won't take the deal. The potential loss, though substantially less than the potential gain, isn't worth the risk.<br />
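The coin-toss offer can be worked out in a few lines. The loss-aversion coefficient of 2 is a rough figure in the spirit of Kahneman's estimates, used here only for illustration:

```python
# Expected value of the coin-toss offer: win $150 or lose $100.
ev = 0.5 * 150 + 0.5 * (-100)
print(f"Expected value: ${ev:.2f}")  # +$25.00: a good bet on paper

# With loss aversion, losses loom larger. Kahneman estimates a
# typical loss-aversion coefficient of roughly 2.
loss_aversion = 2.0
felt_value = 0.5 * 150 + 0.5 * (-100 * loss_aversion)
print(f"Felt value: ${felt_value:.2f}")  # -$25.00: feels like a bad bet
```

The bet is objectively favorable, yet once losses are weighted about twice as heavily as gains its subjective value turns negative, which matches what most people actually choose.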
<br />
Loss aversion may explain why <a href="http://plato.stanford.edu/entries/pascal-wager/">Pascal's wager</a> seems reasonable to many people. People don't want to take the risk of losing eternal life by not believing in the god of Abraham. The safe bet is to believe. The 17th century mathematician argued that it would be wise to believe in the god of Abraham because you risk eternal life by not believing and if this god doesn't exist, you lose nothing in comparison to eternal life. If eternal life with this god is not attractive to you, then the potential loss by not believing isn't likely to affect you much. Also, if you have no fear of hell (i.e., eternal suffering of some sort) because you consider its actual existence to be near zero in probability, then it is unlikely that loss aversion will drive you to believe in this god, even though your wager is just your life, i.e., you must act as if this god exists. On the other hand, the general principle behind the wager seems sensible: only a fool wouldn't wager next to nothing when the prize, if you win, is of infinite value. You might not bet $100 for a chance to win $150 on a coin toss, but you would be a fool not to bet $1 on a chance to win, say, $1,000,000 on a coin toss.<br />
<br />
One effect of negativity bias is that we are likely to give more credence and more weight to negative claims about positions or candidates that we oppose than we are to positive claims about them. We are likely to not be very critical in our examination of such negative claims, certainly not as critical as when negative claims are made against views we cherish.<br />
<br />
Another effect of negativity bias is that we are likely to be afraid of things disproportionately to the evidence, e.g., most people who are afraid of flying in airplanes have little fear of driving in an automobile even though their chance of being killed in an automobile crash is much higher than their chance of being killed in an airplane crash.<br />
<br />
Negativity bias manifests itself in various ways involving contagion. For example, a person of the Brahmin (priestly) caste can be sullied by contact with a member of the Shudra (servant) class, but Shudras are not purified or elevated in status by contact with the Brahmins. <a href="http://dionysus.psych.wisc.edu/lit/articles/RozinP2001a.pdf">Rozin and Royzman</a> write:<br />
<blockquote class="tr_bq">
The contamination often occurs by eating food prepared by a lower caste. On the other hand, when people of lower castes consume foods prepared by higher castes, there is no corresponding elevation in their status. Stevenson summarized this feature of the caste system with the phrase “pollution always overcomes purity” ("Status evaluation in the Hindu caste system." <i>Journal of the Royal Anthropological Institute of Great Britain and Ireland</i>, 84, p. 50).</blockquote>
On the other hand, an argument has been made for a <i>positivity bias</i> in <i><a href="http://www.amazon.com/exec/obidos/ISBN=0870738178/roberttoddcarrolA/">The Pollyanna Principle: Selectivity in Language, Memory, and Thought</a> </i>by Margaret Matlin and David Stang (1978). Matlin and Stang claimed that their research showed that people are more likely to expose themselves to positive stimuli than they are to avoid
negative stimuli and that they encounter more positive stimuli than negative stimuli,<a href="http://en.wikipedia.org/wiki/Pollyanna_principle">*</a> which seems intuitively what you'd expect from us pleasure-seeking animals.<br />
<br />
If you are familiar with the <a href="http://www.skepdic.com/forer.html">Forer effect</a>, you know that people tend to agree with positive statements made about themselves (whether they're true or not) and these kinds of statements are more likely to be accepted than negative statements about themselves. Studies on <a href="http://59ways.blogspot.com/2012/08/self-deception.html">self-deception</a> consistently find most people overestimate their possession of positive traits. So, when it comes to evaluating oneself, the negativity bias seems to be overpowered by the positivity bias.<br />
<br />
When it comes to politics, however, negativity bias reigns supreme. The overwhelming appeal in ads for candidates or ballot propositions is to arouse negative feelings about an opponent or a position on an issue. After the last national election, there were several letters to the editor of my local fishwrap decrying the negativity of Republicans, who did much more poorly than they had expected in the election that saw Barack Obama re-elected. (Some cynics might say that Obama beat Romney because Romney aroused slightly more fear in a certain segment of the electorate than Obama did.) One wrote that negativity "dominates the Republican attitude of the 21st century. Everything is bad, everything government touches is poison--there's no positive policy regarding the economy, women's rights, minority rights, environmental issues, etc." On the same day in an op ed, Paul Krugman bemoaned the Republican method of using threats as their main negotiating tool regarding the economy: "They're threatening to block any deal on anything unless they get their way." Some even call the Republican Party the "party of 'No'." Whether these criticisms are true or not, negativity bias is, and probably always will be, the lifeblood of politics.<br />
<br />
Finally, I recall my local fishwrap responding to a reader request that it print more good news by agreeing to set aside a segment of the paper for the "Good News" once a week. The only catch was that the readers were given the responsibility of finding the good news and reporting it to the <i>Sacramento Bee</i> editors. The column never got off the ground.<br />
<br />
<b>informal fallacies of reasoning</b><br />
<br />
Logical fallacies are errors that occur in arguments. In logic, an argument is the giving of reasons (called premises) to support some claim (called the conclusion). Arguments may be classified as <i>deductive </i>or <i>inductive</i>. Deductive arguments assert or imply that the conclusion follows <i>necessarily </i>from the premises. Inductive arguments assert or imply that the conclusion follows with <i>some degree of probability</i>, not necessity. Deductive arguments are evaluated for validity. If the conclusion of a deductive argument follows necessarily from the premises, the argument is said to be <i>valid</i>. If the conclusion of a deductive argument does <i>not </i>follow with necessity from the premises, the argument is said to be <i>invalid.</i> Validity is determined by the <i>form </i>of the argument, not the truth or falsity of the premises or the conclusion. An argument with the form <i>If p then q; p; so, q </i>is a valid argument, no matter what statements are represented by <i>p</i> and <i>q</i>. If <i>p</i> and <i>if p then q</i> are both true, then <i>q </i>must be true. An argument with the form <i>If p then q; q; so p</i> is invalid no matter what statements are represented by <i>p</i> and <i>q</i>. Even if <i>q</i> and <i>if p then q</i> are true, <i>p</i> is not necessarily true. 
(Note: to say a statement is <i>not necessarily true</i> is not the same as saying that it is false.) The invalid argument form just presented is called <a href="http://www.skepdic.com/affirmingtheconsequent.html"><i>affirming the consequent</i></a> and is known as a <i>formal fallacy</i>. Inductive arguments may be evaluated by their form, but usually they are evaluated by other criteria. The fallacies of induction are called <i>informal</i> fallacies because detecting them requires examining an argument's content, not just its form. I'll go over the criteria for a cogent inductive argument as I discuss the informal fallacies below.<br />
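The validity-by-form idea can be checked mechanically with a truth table. A minimal sketch in Python: an argument form is valid just in case no assignment of truth values makes all the premises true and the conclusion false.

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p and not q."""
    return (not p) or q

def is_valid(premises, conclusion):
    """Valid iff no row of the truth table has all premises true
    and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample row
    return True

# Modus ponens: If p then q; p; therefore q -- valid
mp = is_valid([lambda p, q: implies(p, q), lambda p, q: p],
              lambda p, q: q)
print(mp)   # True

# Affirming the consequent: If p then q; q; therefore p -- invalid
ac = is_valid([lambda p, q: implies(p, q), lambda p, q: q],
              lambda p, q: p)
print(ac)   # False (counterexample: p false, q true)
```

The counterexample row for affirming the consequent (p false, q true) is exactly the situation the text describes: both premises true, conclusion false.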
<a name='more'></a><br />
<br />
There are many ways to classify informal logical fallacies. I prefer listing the conditions for a good or cogent argument and then classifying logical fallacies according to the failure to meet these conditions. <br />
<br />
<div style="text-align: center;">
<b>fallacies of assumption</b></div>
<br />
Every argument makes some assumptions. A cogent argument makes only <i>warranted</i> assumptions, i.e., its assumptions are not questionable or false. <i>Fallacies of assumption</i> make up one type of logical fallacy. One of the most common fallacies of assumption is called <a href="http://www.skepdic.com/begging.html">begging the question</a>. Here the arguer assumes what he claims to be proving. Most arguments for <a href="http://www.skepdic.com/psi.html">psi</a> commit this fallacy. For example, many believers in psi point to the <a href="http://www.skepdic.com/ganzfeld.html">ganzfeld experiments</a> as proof of paranormal activity. They note that a .25 success rate is predicted by chance but Honorton had some success rates of .34. One defender of psi claims that the odds of getting 34% correct in these experiments were a million billion to one. That may be true, but one is begging the question to ascribe the amazing statistic to paranormal powers. It could be evidence of psychic activity but there might be some other explanation as well. The amazing statistic doesn't prove what caused it. The fact that the experimenter is trying to find proof of psi isn't relevant. If someone else did the same experiment but claimed to be trying to find proof that angels, dark matter, or aliens were communicating directly to some minds, that would not be relevant to what was actually the cause of the amazing statistic. The experimenters are simply assuming that any amazing stat they get is due to something paranormal.<br />
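To see how such astronomical odds arise, and why they prove so little, here is a sketch of the exact binomial calculation. The trial count is an assumption chosen for illustration, not Honorton's actual data:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustration only: assume 1,000 trials with a 25% chance hit rate,
# and ask how likely 340 or more hits (34%) would be by chance.
n, k = 1000, 340
p_value = binom_tail(n, k, 0.25)
print(f"P(>= 34% hits by chance) = {p_value:.2e}")
```

However tiny this number, it measures only how unlikely the result is under pure chance; it says nothing about which non-chance explanation (psi, sensory leakage, recording errors, fraud) produced it. That is the question-begging step.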
<br />
Not all fallacies of assumption are fatal. Some cogent arguments might make one or two questionable or false assumptions, but still have enough good evidence to support their conclusions. Some, like <a href="http://www.skepdic.com/gamblers.html">the gambler's fallacy</a>, are fatal, however. The gambler's fallacy is the false assumption that the odds for something with a fixed probability increase or decrease depending on recent occurrences. If black has come up four times in a row, some people will bet on red because they think it more likely that red will come up than black on the next spin. It isn't. The odds never change for red or black. (They're always a little under 50/50 because of the two green numbers, 0 and 00.)
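A quick simulation makes the point: the frequency of red immediately after a run of four blacks is the same as the frequency of red overall. American roulette odds are assumed (18 red, 18 black, 2 green slots).

```python
import random

random.seed(42)
RED, TOTAL = 18, 38  # 18 red, 18 black, 2 green slots

spins = [random.randrange(TOTAL) for _ in range(1_000_000)]
is_red = [s < RED for s in spins]
is_black = [RED <= s < 2 * RED for s in spins]

# Frequency of red overall vs. red right after four blacks in a row
overall = sum(is_red) / len(is_red)
after_streak = [is_red[i] for i in range(4, len(spins))
                if all(is_black[i - 4:i])]
streak_rate = sum(after_streak) / len(after_streak)

print(f"P(red) overall:          {overall:.3f}")
print(f"P(red | 4 blacks prior): {streak_rate:.3f}")
```

Both frequencies come out near 18/38 (about 0.474); the wheel has no memory, so conditioning on the previous four spins changes nothing.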
<br />
<div style="text-align: center;">
<b>fallacies of relevance</b></div>
<br />
Another quality of a cogent argument is that the premises are relevant to supporting their conclusions. Providing irrelevant reasons for your conclusion need not be fatal, however, provided you have sufficient relevant evidence to support your conclusion. However, if all the reasons you give to support your conclusion are irrelevant then your reasoning is said to be a <a href="http://www.skepdic.com/nonsequitur.html">non sequitur</a>. For example, <i>poor women can't afford abortions, so the government should pay for them</i> is a non sequitur. It is true that a poor woman can't afford an abortion but it doesn't follow from that fact that the government or anyone else should pay for it. Poor men can't afford a new car, but it doesn't follow from that that the government ought to provide them with a new car. The poverty of women or men would become relevant if first it were established that everybody has a right to an abortion or a new car and that the government must make sure they get anything they have a right to, if they request it.<br />
<br />
The <i><a href="http://www.skepdic.com/dvinefal.html">divine fallacy</a></i> or the <i>argument from incredulity </i>is a type of <i><a href="http://www.skepdic.com/ignorance.html">argument to ignorance</a></i>. If others can't disprove a claim, that is irrelevant to its truth. Arguments from ignorance treat the irrelevant fact that something can't be done, proved, or disproved as evidence that some other claim must be true. The truth of any claim depends on the evidence in support of it; the fact that the evidence fails to establish one claim is irrelevant to whether some other claim is true. Here are a few examples of the divine fallacy. <i>I can't figure this out, so a god must have done it</i>. Or, <i>This is amazing; therefore, a god did it</i>. Or, <i>I can't think of any other explanation; therefore, a god did it</i>. Or, <i>This is just too weird; so, a god is behind it</i>.<br />
<br />
One of the more common fallacies of relevance is the <a href="http://59ways.blogspot.com/2012/01/ad-hominem.html">ad hominem</a>, an attack on the one making the argument rather than an attack on the argument. One of the most frequent types of ad hominem attack is to attack the person's motives rather than his evidence or his reasoning. For example, when an opponent refuses to agree with some point that is essential to your argument, you call him an "antitheist" or "obtuse." Personal characteristics or motives are irrelevant to whether a person's premises adequately support her conclusions. Good people sometimes make bad arguments and bad people sometimes make good arguments. People with good motives can make bad arguments and people with evil motives can make good arguments.<br />
<br />
Other examples of irrelevant reasoning are the <a href="http://www.skepdic.com/adpopulum.html"> ad populum fallacy</a>, the <a href="http://www.skepdic.com/tradition.html">irrelevant appeal to tradition</a>, and the <a href="http://www.skepdic.com/authorty.html">irrelevant appeal to authority</a>. The popularity or longevity of a belief is irrelevant to its truth. The integrity and expertise or authority of those holding a belief are irrelevant to its truth.<br />
<br />
<div style="text-align: center;">
<b>fallacies of omission</b></div>
<br />
A third quality of a cogent argument is sometimes called the completeness requirement: A cogent argument should include <i>all </i>the relevant evidence. In real life, it is often impossible to know all the relevant evidence, so we should strive not to omit any relevant evidence that we are aware of and we should try to discover as much relevant evidence as the argument deserves. We need to be much more diligent about satisfying the completeness requirement when dealing with, say, a criminal trial than when dealing with a decision as to what car to buy or what color pencil should be used for a school project. There is a natural tendency, however, to be selective in our search for evidence. The <a href="http://59ways.blogspot.com/2012/08/confirmation-bias.html">confirmation bias</a> may drive us to seek only evidence that supports what we already believe or want to believe.<br />
<br />
<a href="http://www.skepdic.com/selectiv.html">Selective thinking</a> is the basis for most beliefs in the <a href="http://www.skepdic.com/psychic.html">psychic</a> powers of so-called <a href="http://www.skepdic.com/mentalist.html">mind readers</a> and <a href="http://www.skepdic.com/medium.html">mediums</a>. It is also the basis for many, if not most, <a href="http://www.skepdic.com/occult.html">occult</a> and <a href="http://www.skepdic.com/pseudosc.html">pseudoscientific</a> beliefs. Selective thinking is essential to the arguments of defenders of untested and unproven remedies. Suppressing or omitting relevant evidence is obviously not fatal to the <i>persuasiveness </i>of an argument, but it is fatal to its <i>cogency</i>. The <a href="http://59ways.blogspot.com/2012/08/regressive-fallacy.html">regressive fallacy</a> is an example of a fallacy of omission. The regressive fallacy is the failure to
take into account natural and inevitable fluctuations when ascribing
causes to events. The <a href="http://www.skepdic.com/falsedilemma.html">false dilemma</a> (or <a href="http://www.skepdic.com/refuge/ctlessons/lesson11.html">false dichotomy</a>), whereby one restricts consideration of reasonable alternatives, is also a fallacy of omission. Sometimes this fallacy is called <i>the black or white fallacy </i>or<i> the either-or fallacy</i>: one poses what looks like a true
dilemma--I must pick one or the other--when, in fact, there are other
viable alternatives. <br />
<br />
While at TAM5, the James Randi Educational Foundation's annual reasonfest, I was approached by <a href="http://www.andrekole.org/">André Kole</a>, who introduced himself as a magician and longtime friend of Randi's. I was there as part of a workshop on critical thinking. Kole asked me if I would read a short pamphlet he'd written and give him my opinion of his arguments. I looked at the title of his pamphlet and told him I'd read it, but that I could already see that the main problem was a false dichotomy. The title of his tract is <i>Jesus: Magician or God?</i> (Kole is a "Christian magician" who does "faith-based illusions."<a href="http://www.andrekoleministry.com/bio.htm">*</a>) I told him, without reading his tract, that there were other possibilities, such as madman, fraud, and myth. My own view is that the character described in the four gospels accepted as "authentic" by most Christians is a mythical character. A man named Jesus existed, but the stories about his miracles are either exaggerations or distortions of actual, non-miraculous events or confabulations that incorporated myths from other traditions (like the Mithraic tradition). He may have been a <a href="http://skepdic.com/faithhealing.html">faith healer</a> like Benny Hinn or Peter Popoff.<br />
<br />
I read the tract and told Kole that he did a good job in arguing that Jesus was not a magician, but that it didn't follow that just because he wasn't a magician he was therefore a god. I really had no interest in arguing with Kole about the Bible or the alleged miracles, but he asked me what I thought of his argument and I told him. If he wants to prove Jesus was a god, he has to do more than prove that he wasn't a magician.<br />
<br />
Kole was not satisfied with my appraisal and asked me to explain how the Bible could be so accurate about some range of prophecies he rattled off that I'd never heard of. His view is that the Bible makes many claims that he doesn't believe can be explained except by accepting that they came from a god. This is begging the question and a variant of <i>the divine fallacy</i>: since I can't see any other way of explaining this, a god must have done it. <br />
<br />
<div style="text-align: center;">
<b>fallacies of unfairness or distortion</b> </div>
<br />
A fourth quality of a cogent argument is fairness. A cogent argument doesn't distort evidence nor does it exaggerate or undervalue the strength of specific data. The <a href="http://www.skepdic.com/refuge/ctlessons/lesson9.html">straw man fallacy</a> violates the principle of fairness. In a straw man argument, one attacks a distorted version of another person's argument. Anyone using a straw man argument is refuting a position of his own creation, not the position of someone else. The refutation, however, may appear to be a good one to someone unfamiliar with the original argument. <br />
<br />
One of the most frequent errors in reasoning is giving improper weight to evidence. While all evidence relevant to a conclusion should be considered, not every piece of evidence is equal in significance. One should not elevate a relatively minor piece of evidence to the status of linchpin in an argument, nor should one treat strong evidence contrary to one's view as if it were minor. Each piece of evidence has to be properly weighted and then the overall weight of the evidence must be evaluated.<br />
<br />
<div style="text-align: center;">
<b>fallacies of ambiguity</b> </div>
<br />
A fifth quality of cogent reasoning is clarity. Some fallacies are due to ambiguity, such as the fallacy of <i>equivocation</i>: shifting the meaning of a key expression in an argument. For example, the following argument uses 'accident' first in the sense of 'not created' and then in the sense of 'chance event.'<br />
<br />
<blockquote class="tr_bq">
Since you don't believe you were created by a god then you must believe you are just an accident. Therefore, all your thoughts and actions are accidents, including your disbelief in any god. </blockquote>
<div style="text-align: center;">
<b>fallacies of insufficient evidence</b> </div>
<br />
Finally, a cogent argument provides a sufficient amount of evidence to support its conclusion to whatever degree of probability is asserted. Failure to provide sufficient evidence is to commit the fallacy of <i>hasty conclusion</i>. One type of hasty conclusion that occurs quite frequently in the production of superstitious beliefs and beliefs in the paranormal is the <a href="http://www.skepdic.com/posthoc.html">post hoc fallacy</a>, the notion that because one thing happened after another, the first must have caused the second. You need more evidence to prove precognition than that your Aunt Sady died the night after you had a dream about her dying. Another causal fallacy is to argue that because two variables are correlated, they must be causally related. A strong correlation alone is not adequate evidence for causality. <i>Correlation doesn't prove causality</i>, as the saying goes.<br />
<br />
A clear sign that you are arguing for the wrong position is when you provide nothing but truthful, relevant evidence, omit nothing relevant from consideration, and still don't have enough evidence to prove your point. <br />
<br />
<div style="text-align: center;">
<b>classifications are not mutually exclusive</b></div>
<br />
Some fallacies may be classified in more than one way, e.g., <a href="http://www.skepdic.com/pragmatic.html">the pragmatic fallacy</a>, which at times seems to be due to vagueness and at times due to insufficient evidence. The pragmatic fallacy is committed when one argues that something is true because it works, where 'works' means something like "I'm satisfied with it," "I feel better," "I find it beneficial, meaningful, or significant," or "It explains things for me." For example, many people claim that <a href="http://www.skepdic.com/astrolgy.html">astrology</a> works, <a href="http://www.skepdic.com/acupuncture.html">acupuncture</a> works, <a href="http://www.skepdic.com/chiro.html">chiropractic</a> works, <a href="http://www.skepdic.com/homeo.html">homeopathy</a> works, <a href="http://www.skepdic.com/numology.html">numerology</a> works, <a href="http://www.skepdic.com/palmist.html">palmistry</a> works, <a href="http://www.skepdic.com/tt.html">therapeutic touch</a> works. What 'works' means here is vague and ambiguous. You could also criticize such arguments for not supplying enough evidence that they work in the sense of being accurately predictive or medically efficacious. You could also criticize such arguments for posing a false dichotomy by ignoring other plausible explanations for the effects observed. 
Finally, you could also criticize such arguments for their selective use of evidence: they ignore all the anecdotes where the given therapy did not work in any sense of the word or where the prediction could have been satisfied by a gazillion scenarios.<br />
<br />
<div style="text-align: center;">
<b>ideomotor effect</b></div>
<br />
The ideomotor effect refers to the influence of suggestion or expectation on involuntary and unconscious motor behavior. The movement of pointers on <a href="http://www.skepdic.com/ouija.html">Ouija boards</a>, of a facilitator's hands in <a href="http://www.skepdic.com/facilcom.html">facilitated communication</a>, of hands and arms in <a href="http://www.skepdic.com/akinesiology.html">applied kinesiology</a>, and of some behaviors attributed to <a href="http://www.skepdic.com/hypnosis.html">hypnotic suggestion</a> are due to ideomotor action. <br />
<br />
Ray Hyman (<a href="http://www.quackwatch.org/01QuackeryRelatedTopics/ideomotor.html">"How People Are Fooled by Ideomotor Action"</a>) has demonstrated the seductive influence of ideomotor action on medical quackery, where it has produced such appliances as the "<a href="http://www.quackwatch.org/01QuackeryRelatedTopics/Tests/toftness.html">Toftness Radiation Detector</a>" (used by <a href="http://www.skepdic.com/chiro.html">chiropractors</a>) and the "black boxes" used in medical <a href="http://www.homeoint.org/morrell/british/radionic.htm">radiesthesia</a> and <a href="http://www.skepdic.com/radionics.html">radionics</a> (popular with <a href="http://www.skepdic.com/natpathy.html">naturopaths</a>, who use them to harness "<a href="http://www.skepdic.com/energy.html">energy</a>" for diagnosis and healing). Hyman also argues that such things as <a href="http://www.skepdic.com/chikung.html">Qi Gong</a> and "pulse diagnosis," popular in both Traditional Chinese Medicine and <a href="http://www.skepdic.com/ayurvedic.html">Ayurvedic medicine</a> as allegedly practiced by <a href="http://www.skepdic.com/chopra.html">Deepak Chopra</a>, are best explained in terms of ideomotor action and require no supposition of mysterious energies such as <a href="http://www.skepdic.com/chi.html">chi</a>. <br />
<a name='more'></a><br />
<br />
The term "ideomotor action" was coined by William B. Carpenter in 1852 in his explanation for the movements of rods and pendulums by <a href="http://www.skepdic.com/dowsing.html">dowsers</a> and some table turning or lifting by <a href="http://www.skepdic.com/medium.html">spirit mediums</a> (the ones that weren't accomplished by cheating). Carpenter argued that muscular movement can be initiated by the mind independently of volition or emotions. We may not be aware of it, but suggestions can be made to the mind by others or by observations. Those suggestions can influence the mind and affect motor behavior. <br />
<br />
Scientific tests by American psychologist <a href="http://www.skepdic.com/refuge/blum.html">William James</a>, French chemist <a href="http://en.wikipedia.org/wiki/Michel_Eugene_Chevreul">Michel Chevreul</a>, English scientist <a href="http://en.wikipedia.org/wiki/Michael_Faraday">Michael Faraday</a> (<a href="http://www.amazon.com/exec/obidos/ISBN=0805805087/roberttoddcarrolA/">Leonard Zusne and Warren H. Jones, <i>Anomalistic Psychology: A Study of Magical Thinking</i></a>, p. 111), and American psychologist <a href="http://www.quackwatch.org/01QuackeryRelatedTopics/ideomotor.html">Ray Hyman</a> have demonstrated that many phenomena attributed to spiritual or paranormal forces, or to mysterious "<a href="http://www.skepdic.com/energy.html">energies</a>," are actually due to ideomotor action. Furthermore, these tests demonstrate that "honest, intelligent people can unconsciously engage in muscular activity that is consistent with their expectations" (<a href="http://www.quackwatch.org/01QuackeryRelatedTopics/ideomotor.html">Hyman</a>). The tests also show that suggestions that guide behavior can be given by subtle cues (Ray Hyman, "<a href="http://www.skepdic.com/Hyman_cold_reading.htm">Cold reading: how to convince strangers that you know all about them</a>," <i>Zetetic</i> 1977; 1(2): 18-37).<br />
<br />
<div style="text-align: center;">
<b>cognitive dissonance</b></div>
<br />
<div align="left">
Cognitive dissonance is a theory of human motivation that
asserts that it is psychologically uncomfortable to hold contradictory
cognitions. The theory is that dissonance, being unpleasant, motivates a
person to change his cognition, attitude, or behavior. This theory was first
explored in detail by social psychologist Leon Festinger, who described it
this way:</div>
<blockquote>
<div align="left">
Dissonance and consonance are relations among cognitions--that
is, among opinions, beliefs, knowledge of the environment, and
knowledge of one's own actions and feelings. Two opinions, or beliefs, or
items of knowledge are dissonant with each other if they do not fit
together; that is, if they are inconsistent, or if, considering only the
particular two items, one does not follow from the other (Festinger 1956:
25).</div>
</blockquote>
<div align="left">
He argued that there are three ways to deal with cognitive
dissonance. He did not consider these mutually exclusive.</div>
<blockquote>
<ol>
<li>One may try to change one or more of the beliefs,
opinions, or behaviors involved in the dissonance;</li>
<li>One may try to acquire new information or beliefs that
will increase the existing consonance and thus cause the total dissonance to
be reduced; or,</li>
<li>One may try to forget or reduce the importance of those
cognitions that are in a dissonant relationship (Festinger 1956: 25-26).<a name='more'></a></li>
</ol>
</blockquote>
<div align="left">
For example, people who smoke know smoking is a bad habit.
Some rationalize their behavior by looking on the bright side:
They tell themselves that smoking helps keep the weight down and that there is a greater threat to health
from being overweight than from smoking. Others quit smoking.
Most of us are clever enough to come up with
<a href="http://www.skepdic.com/adhoc.html">ad hoc hypotheses</a> or
rationalizations to save cherished notions. Why we can't apply this cleverness
more competently is not explained by noting that we are led to rationalize
because we are trying to reduce or eliminate cognitive dissonance. Different
people deal with psychological discomfort in different ways. Some ways are
clearly more reasonable than others. So, why do some people react to
dissonance with cognitive competence, while others respond with cognitive
incompetence? </div>
<div align="left">
<br /></div>
<div align="left">
Cognitive dissonance has been called "the mind controller's
best friend" (Levine 2003: 202). Yet, a cursory examination of cognitive
dissonance reveals that it is not the dissonance, but how people deal with
it, that would be of interest to someone trying to control others when the
evidence seems against them. </div>
<div align="left">
<br /></div>
<div align="left">
For example, Marian Keech (real name: Dorothy Martin) was the leader of a UFO cult in
the 1950s. She claimed to get messages from <a href="http://www.skepdic.com/ufos_ets.html">
extraterrestrials</a>, known as The Guardians, through
<a href="http://www.skepdic.com/autowrite.html">automatic writing</a>. Like the
<a href="http://www.skepdic.com/refuge/bunk3.html">Heaven's Gate</a> folks
forty years later,
Keech and her followers, known as The Seekers or The Brotherhood of the
Seven Rays, were waiting to be picked up by <a href="http://www.skepdic.com/saucers.html">flying saucers</a>. In Keech's
prophecy, her group of eleven was to be saved just before the earth was to
be destroyed by a massive flood on December 21, 1954. When it became evident
that there would be no flood and the Guardians weren't stopping by to pick
them up, Keech</div>
<blockquote>
became elated. She said she'd just received a <a href="http://www.skepdic.com/telepath.html">
telepathic</a> message from the Guardians saying that her group of
believers had spread so much light with their unflagging faith that God
had spared the world from the cataclysm (Levine 2003: 206). </blockquote>
<div style="text-align: left;">
More important, the Seekers didn't abandon her. Most became <i>more</i>
devoted after the failed prophecy. (Only two left the cult when the world
didn't end.) "Most disciples not only stayed but, having made that decision,
were now even more convinced than before that Keech had been right all
along....Being wrong turned them into <a href="http://www.skepdic.com/truebeliever.html">true
believers</a> (ibid.)." Some people will go to bizarre lengths to avoid inconsistency
between their cherished beliefs and the facts. But why do people interpret the
same evidence in contrary ways? </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
The Seekers would not have waited for the flying saucer if they thought it
might not come. So, when it didn't come, one would think that a competent thinker would have seen
this as <i>falsifying</i> Keech's claim that it would come. However, the incompetent thinkers
were rendered incompetent by their devotion to Keech. Their belief that a
flying saucer would pick them up was based on <i><a href="http://www.skepdic.com/faith.html">faith</a></i>,
not evidence. Likewise, their belief that the failure of the prophecy
shouldn't count against their belief was another act of faith. With this kind
of irrational thinking, it may seem pointless to produce evidence to try to
persuade people of the error of their ways. Their belief is not based on
evidence, but on devotion to a person. That devotion can be so great that even
the most despicable behavior by one's prophet can be rationalized. There are
many examples of people so devoted to another that they will rationalize or
ignore extreme mental and physical abuse by their <a href="http://www.skepdic.com/cults.html">cult</a>
leader (or spouse or boyfriend). If the basis for a person's belief is irrational faith grounded in
devotion to a powerful personality, then the only
option that person has when confronted with evidence that should undermine her faith
would seem to be to continue to be irrational, unless her faith was not that
strong to begin with. The interesting question, then, is not about
cognitive dissonance but about faith. What was it about Keech that led some
people to have faith in her and what was it about those people that made them
vulnerable to Keech? And what was different about the two who left the cult?</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
"Research shows that three characteristics are related to persuasiveness:
perceived authority, honesty, and likeability" (ibid. 31). Furthermore, if a
person is physically attractive, we tend to like that person and the more we
like a person the more we tend to trust him or her (ibid. 57). Research also
shows that "people are perceived as more credible when they make eye contact
and speak with confidence, no matter what they have to say" (ibid. 33).</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
According to Robert Levine, "studies have uncovered surprisingly little
commonality in the type of personality that joins cults: there's no single
cult-prone personality type" (ibid. 144). This fact surprised Levine. When he
began his investigation of cults he "shared the common stereotype that most
joiners were psychological misfits or religious fanatics" (ibid. 81). What he
found instead was that many cult members are attracted to what appears to be a
loving community. "One of the ironies of cults is that the craziest groups are
often composed of the most caring people (ibid. 83)." Levine says of cult leader Jim
Jones that he was "a supersalesman who exerted most every rule of persuasion"
(ibid. 213). He had authority, perceived honesty, and likeability. It is
likely the same could be said of Marian Keech. It also seems likely that many
cult followers have found a surrogate family and a surrogate mother or father
or both in the cult leader.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
It should also be remembered that in most cases people have not arrived at
their irrational beliefs overnight. They have come to them over a period of
time with gradually escalated commitments (ibid. chapter 7). Nobody would join
a cult if the pitch were: "Follow me. Drink this poisoned-but-flavored
water and
commit suicide." Yet, not everybody in the cult drank the poison and two
of Keech's followers quit the cult when the prophecy failed. How were they
different from the others? The explanation seems simple: their faith in their
leader was weak. According to Festinger, the two who left Keech--Kurt Freund
and Arthur Bergen--were lightly committed to begin with (Festinger 1956: 208).</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Even people who erroneously think their beliefs are scientific may come
by their notions gradually and their commitment may escalate to the point of
irrationality. Psychologist Ray Hyman provides a very interesting example of
cognitive dissonance and how one <a href="http://www.skepdic.com/chiro.html">chiropractor</a> dealt
with it.</div>
<blockquote>
Some years ago I participated in a test of
<a href="http://www.skepdic.com/akinesiology.html">applied kinesiology</a> at Dr. Wallace Sampson's medical office in Mountain
View, California. A team of chiropractors came to demonstrate the
procedure. Several physician observers and the chiropractors had agreed
that chiropractors would first be free to illustrate applied kinesiology
in whatever manner they chose. Afterward, we would try some
<a href="http://www.skepdic.com/control.html">double-blind tests</a> of their claims. </blockquote>
<blockquote>
The chiropractors presented as their major
example a demonstration they believed showed that the human body could
respond to the difference between glucose (a "bad" sugar) and fructose (a
"good" sugar). The differential sensitivity was a truism among
"alternative healers," though there was no scientific warrant for it. The
chiropractors had volunteers lie on their backs and raise one arm
vertically. They then would put a drop of glucose (in a solution of water)
on the volunteer's tongue. The chiropractor then tried to push the
volunteer's upraised arm down to a horizontal position while the volunteer
tried to resist. In almost every case, the volunteer could not resist. The
chiropractors stated the volunteer's body recognized glucose as a "bad"
sugar. After the volunteer's mouth was rinsed out and a drop of fructose
was placed on the tongue, the volunteer, in just about every test,
resisted movement to the horizontal position. The body had recognized
fructose as a "good" sugar. </blockquote>
<blockquote>
After lunch a nurse brought us a large number
of test tubes, each one coded with a secret number so that we could not
tell from the tubes which contained fructose and which contained glucose.
The nurse then left the room so that no one in the room during the
subsequent testing would consciously know which tubes contained glucose
and which fructose. The arm tests were repeated, but this time they were
double-blind -- neither the volunteer, the chiropractors, nor the
onlookers was aware of whether the solution being applied to the
volunteer's tongue was glucose or fructose. As in the morning session,
sometimes the volunteers were able to resist and other times they were
not. We recorded the code number of the solution on each trial. Then the
nurse returned with the key to the code. When we determined which trials
involved glucose and which involved fructose, there was no connection
between ability to resist and whether the volunteer was given the "good"
or the "bad" sugar. </blockquote>
<blockquote>
When these results were announced, the head
chiropractor turned to me and said, "You see, that is why we never do
double-blind testing anymore. It never works!" At first I thought he was
joking. It turned out he was quite serious. Since he "knew" that
applied kinesiology works, and the best scientific method shows that it
does not work, then -- in his mind -- there must be something wrong with
the scientific method. (<a href="http://www.quackwatch.org/01QuackeryRelatedTopics/ideomotor.html">Hyman 1999</a>) </blockquote>
<div style="text-align: left;">
What distinguishes the chiropractor's rationalization from the cult
member's is that the latter is based on pure faith and devotion to a guru or
prophet, whereas the former is based on evidence from experience. Neither
belief can be falsified because the believers won't let them be falsified:
Nothing can count against them. Those who base their beliefs on experience
and what they take to be empirical or scientific evidence (e.g., <a href="http://www.skepdic.com/astrolgy.html">astrologers</a>, <a href="http://www.skepdic.com/palmist.html">palm
readers</a>, <a href="http://www.skepdic.com/medium.html">mediums</a>, <a href="http://www.skepdic.com/psychic.html">
psychics</a>, the <a href="http://www.skepdic.com/intelligentdesign.html">intelligent design</a>
folks, and the chiropractor) make a pretense of being willing to test
their beliefs. They only bother to submit to a test of their ideas to get
proof for others. That is why we refer to their beliefs as
<a href="http://www.skepdic.com/pseudosc.html">pseudosciences</a>. We do not refer to the beliefs
of cult members as pseudoscientific, but as faith-based irrationality.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
There is scant evidence that the
chiropractors Wally Sampson and Ray Hyman tested take the
stand they do in order to relieve cognitive dissonance. They didn't
just
reject the results of a single test, they rejected
scientific testing altogether in favor of what they think they
know from personal experience. Why? Because they consider personal
experience
superior to double-blind controlled experiments. Why? To avoid
having to deal with cognitive dissonance? What evidence is there that
these chiropractors were made the least bit uneasy by holding a
belief that
conflicts with the rest of the scientific community? If a person is
made
psychologically uncomfortable by contradictory cognitions, shouldn't
there
be some way to measure this discomfort, such as a rise in the level
of
cortisol or other stress hormones? Has anyone defending cognitive
dissonance
ever measured stress hormones being aroused by dissonant beliefs or
relieved
by rationalization? The chiropractors' misguided belief
is probably not due to worrying about their self-image or removing
discomfort. It is more likely due to their being arrogant and
incompetent
thinkers, convinced by their experience that they "know" what's
going on,
and probably assisted by <a href="http://www.skepdic.com/communalreinforcement.html">
communal reinforcement</a> from the like-minded arrogant and incompetent
thinkers they work with and are trained by. They've seen how AK works with
their own eyes. They've demonstrated it many times. If anything makes them
uncomfortable it might be that they can't understand how the world can be so
full of idiots who can't see with their own eyes what they see!</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
To return to Festinger's own
example, what is gained by saying that the two who left the cult had a <i>
light</i> commitment to begin with? How is commitment measured? Do those who
see the light and change their mind when the evidence contradicts their belief have a
<i>light</i> belief? If we apply <a href="http://www.skepdic.com/occam.html">Occam's razor</a> to the theory of
cognitive dissonance, is there anything left after we explain how anyone
deals with beliefs that conflict with the evidence by the more familiar
concepts of <i>changing one's mind in light of new evidence, rationalization, <a href="http://www.skepdic.com/selfdeception.html">self-deception</a>, irrational faith,
<a href="http://www.skepdic.com/confirmbias.html">confirmation bias</a>, overestimation of one's
intelligence and abilities</i>, and the like?
I don't think so. We shouldn't forget that some people, when
confronted with strong evidence against cherished beliefs, give up
their cherished beliefs, e.g., the "distinguished stratigraphy
professor" at Columbia University, praised by Stephen Jay Gould, who had
initially ridiculed the theory of drifting continents but “spent his
last years joyously redoing his life’s work” (<i>Ever Since Darwin</i>, W.W. Norton & Company, 1979: 160).</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Can we really
explain why Sylvia Browne or the members of the military junta in Myanmar
can sleep at night (assuming they do!) by appealing to the "theory of cognitive dissonance"?
There are people who know what they are doing is wrong and don't care. Even
a simple case that is often brought up by the defenders of the theory of
cognitive dissonance — the case of the smoker who continues his habit of
smoking even though he knows smoking is unhealthy — doesn't measure up. What
is so cognitively uncomfortable about knowing that smoking is unhealthy and
doing it anyway? </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
There are people who
know what they are doing is wrong, but they have such contempt for the rest
of us that it doesn't make them the slightest bit uncomfortable conning us.
What evidence is there that people who do bad things or believe what they
should know is false are concerned about their self-image? Do mafia hit men
have to deal with cognitive dissonance so they can sleep at night? I'd like
to see the empirical study on that one. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
If cognitive dissonance were a problem, it would show up at the level
of methods used to evaluate beliefs. Yet, many people seem to have no
discomfort using science, logic, and reason to establish one set of beliefs,
while using desire, feelings, faith, emotional attachment to a charismatic
leader, and the like to establish another set of beliefs.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
On the other hand, who am I to disagree
with more than a half-century of scholarship in the social sciences that
has firmly established the concept of cognitive dissonance? As the
authors of the <a href="http://en.wikipedia.org/wiki/Cognitive_dissonance">Wikipedia article on the topic</a>
write: "It is one of the most influential and extensively studied
theories in social psychology." I don't deny that the concept has been
influential. Nor do I deny that it has been extensively studied. What I
see, however, when I look at the kinds of studies used to support the
validity of the concept is a lot of <a href="http://www.skepdic.com/confirmbias.html">confirmation bias</a> and something akin to the<a href="http://www.skepdic.com/psiassumption.html"> psi assumption</a>
in parapsychology. The general form of the studies in support of
cognitive dissonance goes like this: we predict that x will happen if we
do y; if x happens when we do y it is because of cognitive dissonance; x
happened when we did y, so cognitive dissonance is confirmed. What
I don't see is any attempt to formulate a test of the hypothesis that
could falsify the claim that cognitive dissonance causes anything.
<a href="http://web.archive.org/web/20110401015036/http://vincentvanveen.net/Documents/van_Veen_NatureNeuro_2009.pdf">Researchers even go so far as to claim evidence for cognitive dissonance by finding activity (using an fMRI) in the dorsal anterior cingulate cortex and anterior insula during a test that postulated that cognitive dissonance was occurring when those parts of the brain showed activity.</a>
This reasoning seems circular at best. It begs the question. Of the
innumerable possible explanations for seeing what was seen in the fMRIs,
why should we assume they indicated cognitive dissonance?<br />
<br />
Festinger and Carlsmith claimed to have found evidence for cognitive dissonance in their 1959 study <a href="http://psychclassics.yorku.ca/Festinger/">Cognitive Consequences of Forced Compliance</a>.
Their database consisted of data collected on 71 male students in the
introductory psychology course at Stanford University who were "required
to spend a certain number of hours as subjects (Ss) in experiments."
(The data for 60 of the students was used in the final calculations, 20
subjects in each of three groups. In other words, this was a very small
study from which no grand conclusions should have been drawn.) They
spent an hour doing some boring, tedious task like turning pegs a
quarter turn repeatedly. It was assumed that doing something pointless
for an hour would generate a strong negative attitude regarding the
task. Unless you have special neural wiring, it seems reasonable to assume that you
would be bored by the task, but whether you would develop a strong
negative attitude toward it seems questionable. After all, you are in a
psych class, you're trying to learn something, and participation in an
experiment is a course requirement. Anyway, after completing the boring
task for an hour some of the subjects were asked to talk to someone
introduced as another subject in the experiment but actually an actor,
and try to persuade him that the task was interesting and engaging. Some
subjects were paid $20; some were paid $1. (Today, you might get 4
pints of beer for $20; in 1959 you could probably get 100 pints of beer
for $20. In other words, to most college students in 1959, $20 would
have represented a small windfall. Consider, however, that these are
Stanford students in 1959, many of whom may not have found much
difference between $1 and $20.) One group of subjects was used as a
control; these subjects weren't asked to talk to anybody about the task.<br />
<br />
At the end of the study, the subjects were asked to rate "how
enjoyable" the boring tasks were on a scale of -5 to +5. The average
rating for the 20 students in the control group was -.45; the average
for those paid $20 was -.05; and the average for those paid $1 was
+1.35.<br />
<blockquote>
This was explained by Festinger and Carlsmith as evidence for
cognitive dissonance. The researchers theorized that people experienced
dissonance between the conflicting cognitions, "I told someone that the
task was interesting" but "I actually found it boring." When paid only
$1, students were forced to internalize the attitude they were induced
to express, because they had no other justification. Those in the $20
condition, however, had an obvious external justification for their
behavior, and thus experienced less dissonance.<a href="http://en.wikipedia.org/wiki/Cognitive_dissonance">*</a></blockquote>
The difference in results might also have been a fluke. The
eleven students whose data was not included were rejected for a variety
of reasons, but none of them was rejected because he was an outlier.
With a small group of only 20 students being averaged, a couple of
outliers would skew the average. I'm not saying that is what happened in
the $1 group, but just a couple of high ratings could account for its
average being higher than those of the other two groups. On the other hand, the
difference in ratings might be due to something besides cognitive
dissonance. Maybe it was due to psychic influence from a paranormal lab
across the country. Unlikely, sure, but the authors are just assuming
the different ratings can be explained by what they were trying to
establish. I don't know why the $1 group rated the boring task as
significantly more enjoyable than the other two groups, but I'm not
convinced it had anything to do with cognitive dissonance.<br />
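The arithmetic here is worth making concrete. A toy calculation (with made-up ratings, not Festinger and Carlsmith's raw data) shows how little it takes for a couple of outliers to move the mean of a 20-person group:

```python
# Toy illustration with hypothetical ratings (NOT Festinger and
# Carlsmith's raw data): with only 20 subjects per group, two
# enthusiastic outliers are enough to shift a group mean noticeably.

def mean(xs):
    return sum(xs) / len(xs)

# 20 hypothetical ratings on the -5..+5 "how enjoyable" scale,
# hovering around indifference (mean of 0.0)
baseline = [0, -1, 0, 1, -1, 0, 0, 1, -1, 0,
            0, -1, 1, 0, 0, -1, 1, 0, 0, 1]

# The same group, except two subjects turn in maximal +5 ratings
with_outliers = baseline[:18] + [5, 5]

print(mean(baseline))       # 0.0
print(mean(with_outliers))  # 0.45
```

Two maximal ratings raise the mean from 0.0 to 0.45, a shift on the same order as the reported gap between the control group (-.45) and the $20 group (-.05), which is why a difference between such small samples is, by itself, weak evidence for any particular mechanism.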
<br />
Consider also that when the subjects were asked how much they
learned on a scale of 0-10, the groups rated themselves roughly equally, at
about 3. If the $1 group had rated their learning at 5, would that have
been taken as evidence of cognitive dissonance? The stat I find the most
interesting, however, is the one regarding whether the subjects would
participate in a similar experiment in the future. None of the groups
was very enthusiastic about doing so, but the $1 group was significantly
more willing to do so than the other two groups. On a scale of -5 to
+5, the $1 group averaged +1.2, while the control and $20 groups
averaged -0.62 and -0.25, respectively. Again, an outlier or two in the
$1 group might be the main reason for the difference in averages. Or
there might be some other reason. With such a small sample, it would
seem reasonable to suspect that there might be some other difference
between the $1 group and the others that has nothing to do with
cognitive dissonance. In any case, even if this study were redone with
the same results using 600 subjects, I would still question whether the
differences should be explained by cognitive dissonance. Paying people a
little bit of money to do a trivial task and then lie about it to
someone else might not require any justification in the context of a
psychology experiment at Stanford University. After all, it's just an
experiment. Paying people a lot of money may have created less incentive
by making the task less enjoyable. A token payment may have created the
illusion that the subjects were making an important contribution to
science.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Also, as we learn more about the
fundamental tendency of human beings to behave irrationally much of the
time, is there really a need for a theory like cognitive dissonance to
explain why human beings are influenced to do or believe the things they
do? I assume most Christians believe that 1 + 1 + 1 = 3, yet many of
them believe that Abraham's god is one being but three persons. They
also believe that the divine nature transcends anything in the natural
world and is incompatible with human nature, yet many believe that Jesus
was both a god and a man. Finally, Catholics know that if something has all
the properties of bread or wine, it would be absurd to say either is a
duck or a train; yet, they believe that some bread and some wine look
like bread and wine but are actually the body, blood, soul, and divinity
of Jesus. None of these folks seem the least bit bothered
psychologically by these contradictory beliefs. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Finally, there must be many survivors of
the 9.0 earthquake and consequent tsunami that devastated Japan on
March 11, 2011, who believed in the basic goodness of a god or of nature
before that date. What predictions about the beliefs of these people
does cognitive dissonance make? And how would a social scientist tease
apart the discomfort they must feel that is due to what happened to them,
their loved ones, and their neighbors from the discomfort that is due to
cognitive dissonance? Would an fMRI help separate various forms of
psychological discomfort? Am I criticizing hundreds of social scientists
because I am made psychologically uncomfortable by their theory since
it conflicts with what I believe to be true? Am I relieving my cognitive
dissonance by rejecting the concept of cognitive dissonance? And was I
kind to my father not because I loved him but because of the cognitive
dissonance I felt due to an Oedipus complex? Would an fMRI settle the
question?</div>
Robert Todd Carroll<br />
<br />
<b>begging the question</b> (posted 2012-10-15)<br />
<br />
Begging the question is a fallacy in reasoning whereby one assumes what one claims to be proving.<br />
<br />
An argument is a form of reasoning in which one gives a reason or reasons in support of some claim. The reasons are called premises and the claim one tries to support with them is called the conclusion.<br />
<br />
If one's premises assume the very conclusion they are supposed to support, one is said to beg the question.<br />
<br />
The following argument begs the question.
<blockquote class="tr_bq">
We know a god exists because we can see the perfect order of creation, an order which demonstrates supernatural intelligence in its design.</blockquote>
The conclusion of this argument is that a god exists. The premise assumes a creator and designer of the universe exists, i.e., that a god exists. In this argument, the arguer should not be granted the assumption that the universe exhibits intelligent design, but should be made to provide support for that claim.<br />
<a name='more'></a><br />
<br />
The following argument also begs the question.
<blockquote class="tr_bq">
Abortion is the unjustified killing of a human being and as such is murder. Murder is illegal. So abortion should be illegal.</blockquote>
The conclusion of the argument is entailed in its premises. If one assumes that abortion is murder then it follows that abortion should be illegal because murder is illegal. Thus, the arguer is assuming abortion should be illegal (the conclusion) by assuming that it is murder. In this argument, the arguer should not be granted the assumption that abortion is murder, but should be made to provide support for this claim. (Since murder is the unjustified killing of a human being, the arguer must prove that every abortion is an unjustified killing. Even if one grants that every abortion is the killing of a human being--which many would not grant, of course--it does not follow that every abortion is an <i>unjustified</i> killing.)<br />
<br />
The following is another example of begging the question.
<blockquote class="tr_bq">
Paranormal phenomena exist because I have had experiences that can only be described as paranormal.</blockquote>
The conclusion of this argument is that paranormal phenomena exist. The premise assumes that the arguer has had paranormal experiences, and therefore assumes that paranormal experiences exist. The arguer should not be granted the assumption that his experiences were paranormal, but should be made to provide support for this claim.<br />
<br />
Here is another example of begging the question.
<blockquote class="tr_bq">
Past-life memories of children prove that past lives exist because the children could have no other source for their memories besides having lived in the past. </blockquote>
The conclusion of this argument is that past lives exist. The premise assumes that children have had past lives. The arguer should not be granted the assumption that children have had past lives but should be made to support the claim. (Saying the memories could have no other source than a past life is to assume that past lives exist. This should not be granted, but argued for.)<br />
<br />
Another example of begging the question is provided by <a href="http://www.cosmicfingerprints.com/ifyoucanreadthis.htm">Perry Marshall</a>:
<blockquote class="tr_bq">
1) DNA is not merely a molecule with a pattern; it is a code ... and an information storage mechanism. <br />
<br />
2) All codes are created by a conscious mind; there is no natural process known to science that creates coded information. <br />
<br />
3) Therefore DNA was designed by a mind.</blockquote>
Marshall assumes what he should be proving, namely, that all codes are created by a conscious mind.<br />
<br />
(Note: for some unknown reason, some people use the expression "begs the question" to mean something like "raises the question." This usage has nothing to do with the logical fallacy of begging the question.)<br />
<br />
Robert Todd Carroll<br />
<br />
<b>irrelevant appeal to tradition</b> (posted 2012-10-08)<br />
<br />
The irrelevant appeal to tradition is a fallacy in reasoning in which one argues that a practice or a belief is justifiable simply because it has a long and established history. An example of this fallacy can be found in an article by Valerie Reiss on how to choose a psychic: <br />
<br />
<blockquote class="tr_bq">
Christianity sees <a href="http://www.skepdic.com/divinati.html">divination</a> as going against the Bible's mandate not to seek "soothsayers," because that would be expressing a lack of faith in God as omnipotent and all-knowing. Yet many ... of the world's religions and cultures have woven it into their fiber--Hinduism uses Vedic astrology to match marriage partners; in Chinese culture, an expert is consulted on the most mundane to crucial life matters--from when to get married to where to live. Wanting to know what will happen is not just a result of our modern brains grasping for control and answers; it's been the human condition for millennia, people have been seeking prophecies since Greeks took often long journeys to consult the <a href="http://www.scientificamerican.com/article.cfm?id=gaseous-emissions-at-orac">Oracle at Delphi</a>. (<a href="http://blog.beliefnet.com/freshliving/2009/06/10-things-to-know-before-going-to-a-psychic.html">"5 Things to Know Before Going to a 'Psychic'</a>") <br />
<a name='more'></a></blockquote>
<br />
Reiss argues that since divination has been practiced for millennia in various cultures, it must be good despite what some Christians might say is forbidden by the Bible. The fact that some cultures have been engaging in <a href="http://www.skepdic.com/magicalthinking.html">magical</a> and <a href="http://www.skepdic.com/superstition.html"> superstitious</a> thinking for thousands of years does not justify the practice, any more than thousands of years of slavery or abuse of women would justify those practices. Humans have been beating each other to death in boxing matches for millennia, but that hardly justifies the practice. <br />
<br />
The fact that Vedic astrology is still practiced in Hinduism isn't a good reason for thinking that this is a good thing. In fact, <a href="http://www.indianrationalists.blogspot.com/">it's a bad thing</a>. There is no compelling evidence that any kind of <a href="http://www.skepdic.com/astrolgy.html">astrology</a> is useful for divining the future, and the belief in this superstition is an open door to fraud and corruption in India (see <a href="http://www.eagletv.co.uk/home/guru.htm">Guru Busters</a> for an example of one of the corrupt godmen astrologers who asks his followers on national television to kill those who exposed his scam). Ms. Reiss might consider how she would feel if her marriage was arranged by an astrologer. There might be a better way. <br />
<br />
Reiss doesn't mention what experts are consulted in Chinese culture, but it is apparent that she is referring to various kinds of soothsayers. These "experts" bank on the ignorance and superstition of their clients. Perhaps people don't need any kind of expert to advise them on when to get married or where to live. <br />
<br />
Surely Ms. Reiss is not advising 21st century people to return to <a href="http://en.wikipedia.org/wiki/Pederasty_in_ancient_Greece"> the ways of the ancient Greeks</a>. I doubt if too many modern Greeks consult temple <a href="http://www.skepdic.com/oracles.html">oracles</a> for advice on anything, but if they did they might consider that there are much better ways of getting information about the future. We've come a long way since the days of <a href="http://en.wikipedia.org/wiki/Cassandra">Cassandra</a>. We have a bit more knowledge than the ancient Greeks did about how things happen and why. Using that knowledge to reason inductively about the future, guided by techniques that have been refined over many centuries, has proven to be vastly superior to any form of divination provided by <a href="http://www.skepdic.com/psychic.html">psychics</a>, <a href="http://www.skepdic.com/intuitive.html">intuitives</a>, or other soothsayers. <br />
<br />
The number of years that something has been practiced, in itself, does not justify that practice. The fact that magical thinking persists in many areas of modern life does not mean that magical thinking is superior to other methods. Rather than be guided by the inferior methods of our ancestors, we would be better off if we tried to understand why these primordial ways of evaluating experience persist and what we might do to overcome the tendency to think like our ignorant predecessors. Rather than rejoicing in ancient errors, we might do better to train ourselves in ways of overcoming our tendencies to fallacious thinking. <br />
<br />
Finally, one wonders why Ms. Reiss doesn't see that even though the Christians base their aversion to soothsaying on an <a href="http://www.skepdic.com/authorty.html">appeal to authority</a>, their counter-tradition nullifies her appeal to tradition. Or is Ms. Reiss arguing that three traditions trump one tradition? If she is, she's also committing the <a href="http://www.skepdic.com/adpopulum.html">ad populum fallacy</a>.<br />
<br />
Robert Todd Carroll<br />
<br />
<b>wishful thinking</b> (posted 2012-10-01)<br />
<br />
Wishful thinking is interpreting facts, reports, events, perceptions, etc., according
to what one would <i>like</i> to be the case rather than according to the actual evidence. Wishful thinking is often coupled with <a href="http://59ways.blogspot.com/2012/08/self-deception.html">self-deception</a>. A person who is afraid of surgery and believes that chemotherapy is a hoax perpetrated by Big Pharma and the AMA may <i>want </i>to believe that <a href="http://www.skepdic.com/alkalinediet.html">the alkaline diet</a> or <a href="http://www.skepdic.com/gersontherapy.html">Gerson therapy</a> is her best chance at cancer survival, despite the lack of scientific evidence for either of those so-called alternative treatments. Her desire to believe in alternatives to surgery and chemotherapy may lead her to ignore the evidence in favor of the treatment recommended by science-based medical doctors. She may be taken in by glorious stories of people who were diagnosed with this or that kind of cancer which went away <i>after</i> doing the alternative treatment. The stories may be true, but the causal link between the alternative treatment and the remission of cancer is made in the mind of the believer. Rather than admit that just because one thing happened after another it isn't necessarily the case that the first thing caused the second, she believes that anyone who doesn't agree with her about the causal connection must be a shill for Big Pharma and the AMA.<br />
<br />
Wishful thinking should not be confused with <a href="http://www.skepdic.com/newthought.html"><i>positive thinking</i></a>, which, in its most absurd form is a kind of <a href="http://www.skepdic.com/magicalthinking.html">magical thinking</a> that involves trying to make things happen by <i>willing </i>them to happen. In its best form, positive thinking is hopeful and optimistic, but realistic.<br />
<a name='more'></a><br />
<br />
Wishful thinking sometimes evolves into <a href="http://59ways.blogspot.com/2012/05/motivated-reasoning.html">motivated reasoning</a>, which not only interprets data according to preferences but actually takes disconfirming data and turns it into confirming data.<br />
<blockquote class="tr_bq">
Motivated reasoning is a major obstacle for rational argument. If
someone wants to believe that asylum seekers are breaking the law, or if
someone wants to believe that virtually all of the world’s climate
scientists have conspired to make up a huge global “climate change
hoax”, then it is very difficult to change their minds even when the
actual evidence is very, very clear.<a href="http://theconversation.edu.au/where-does-misinformation-come-from-and-what-does-it-do-9885">*</a></blockquote>
When confronted with someone whose belief system seems to be built mainly on wishful thinking, perhaps the best one can do is provide alternative interpretations of the data without insisting that the believed interpretation is wrong. Direct challenges to such a belief system may <a href="http://59ways.blogspot.com/2012/02/backfire-effect.html">backfire</a>. Actually, even the mere <i>suggestion</i> that <a href="http://skepdic.com/essays/evaluatingexperience.html">valuing personal experience over scientific facts and probabilities</a> might be harmful to your health is often met with self-serving dismissal.<br />
<br />
Robert Todd Carroll<br />
<br />
<b>hindsight bias</b> (posted 2012-09-24)<br />
<br />
<blockquote class="tr_bq">
<i>"The mind that makes up narratives about the past is a sense-making organ. When an unpredicted event occurs, we immediately adjust our view of the world to accommodate the surprise."</i>--Daniel Kahneman</blockquote>
Hindsight bias is the tendency to construct one's memory after the fact (or interpret the meaning of something said in the past) according to currently known facts and one's current beliefs. In this way, one appears to make the past consistent with the present and more predictive or predictable than it actually was. When a surprise event occurs and you say "I knew it all along," you probably didn't. Hindsight bias may be kicking in.<br />
<br />
Hindsight bias accounts for the tendency of believers in prophecies and psychic predictions to retrofit events to past oracular claims, however vague or obscure (<a href="http://www.skepdic.com/retroactiveclairvoyance.html">retroactive clairvoyance</a>). For example, after the Challenger space shuttle disaster that killed seven U.S. astronauts on January 28, 1986, hindsight bias was used by followers of Nostradamus to claim that he had predicted it in the following verse: <br />
<a name='more'></a><br />
<br />
D'humain troupeau neuf seront mis à part,<br />
De jugement & conseil separés:<br />
Leur sort sera divisé en départ,<br />
Kappa, Thita, Lambda mors bannis égarés.<br />
<br />
From the human flock nine will be sent away,<br />
Separated from judgment and counsel:<br />
Their fate will be sealed on departure<br />
Kappa, Thita, Lambda the banished dead err (I.81). <br />
<br />
Of course, to make the obscene retrodiction complete, Nostradamus's minions would have to speculate that teacher-astronaut Christa McAuliffe was pregnant with twins to make nine the total in the "flock." <a href="http://www.psychologicalscience.org/index.php/news/releases/i-knew-it-all-along-didnt-i-understanding-hindsight-bias.html">The belief that one can predict the future is often due to little more than the power of hindsight bias.</a><br />
<br />
Hindsight bias also seems to account for the tendency of many people to think they can explain events that weren't predicted after the events have happened. It is unacceptable to many people to think that major events like a respected Wall Street investment manager running a <a href="http://www.skepdic.com/ponzi.html">Ponzi scheme</a> that cost people perhaps as much as $50 billion wasn't predictable. If only somebody had paid attention to this and that detail, <a href="http://en.wikipedia.org/wiki/Bernard_Madoff">Bernard Madoff</a> could never have pulled it off. What is true is that a major impact event like this can be easily explained <i>after</i> the fact. The explanations may satisfy people and lead them to believe that they now understand how such an event happened, but there is no way to know whether collecting many facts and using them to explain what occurred will help prevent a similar event from happening in the future.<br />
<br />
Why do we engage in hindsight bias? There are several reasons. The way memory works explains why we sometimes misremember predicting things we didn't predict. Once we know something to be true, the mind can easily reconstruct the past so that our memories jibe with what actually happened. Also, we have a natural desire to see events as orderly and predictable, rather than as random and unpredictable. Daniel Kahneman (<a href="http://www.amazon.com/exec/obidos/ISBN=0374275637/roberttoddcarrolA/"><i>Thinking, Fast and Slow</i></a>) explains:<br />
<br />
<blockquote class="tr_bq">
Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events. Baruch Fischhoff first demonstrated this “I-knew-it-all-along” effect, or hindsight bias, when he was a student in Jerusalem. Together with Ruth Beyth ... Fischhoff conducted a survey before President Richard Nixon visited China and Russia in 1972. The respondents assigned probabilities to fifteen possible outcomes of Nixon’s diplomatic initiatives. Would Mao Zedong agree to meet with Nixon? Might the United States grant diplomatic recognition to China? After decades of enmity, could the United States and the Soviet Union agree on anything significant? After Nixon’s return from his travels, Fischhoff and Beyth asked the same people to recall the probability that they had originally assigned to each of the fifteen possible outcomes. The results were clear. If an event had actually occurred, people exaggerated the probability that they had assigned to it earlier. If the possible event had not come to pass, the participants erroneously recalled that they had always considered it unlikely. Further experiments showed that people were driven to overstate the accuracy not only of their original predictions but also of those made by others. Similar results have been found for other events that gripped public attention, such as the O. J. Simpson murder trial and the impeachment of President Bill Clinton. The tendency to revise the history of one’s beliefs in light of what actually happened produces a robust cognitive illusion.</blockquote>
<br />
One danger of hindsight bias is that it might make a person overconfident in his ability to predict the future. <a href="http://www.kellogg.northwestern.edu/News_Articles/2012/hindsight-is-20-20.aspx">Neal Roese of the Kellogg School of Management at Northwestern University</a> and Kathleen Vohs of the Carlson School of Management at the
University of Minnesota found in their review of research on hindsight bias that it can make us overconfident in how certain we are
about our own judgments. Overconfident entrepreneurs are more likely to take on unjustifiably risky ventures. But you already knew that. As Kahneman notes: "Hindsight bias has pernicious effects on the evaluations of decision makers. It leads observers to assess the quality of a decision not by whether the process was sound but by whether its outcome was good or bad."<br />
<br />Robert Todd Carroll<br />
<br />
<b>recency bias</b> (posted 2012-09-17)<br />
<br />
Recency bias is the tendency to think that trends and patterns we observe in the recent past will continue in the future. Predicting the future <i>in the short term</i>, even for highly changeable events like the weather or the stock market, according to events in the recent past, works fine much of the time. Predicting the future in the long term according to what has recently occurred has been shown to be no more accurate than flipping a coin in many fields, including meteorology, <a href="http://www.skepdic.com/economicforecasting.html">economics, investments,</a> technology assessment, demography, futurology, and organizational planning (<a href="http://www.amazon.com/exec/obidos/ISBN=0471358444/roberttoddcarrolA/">Sherden, <i>The Fortune Sellers</i></a>).<br />
<br />
Doesn't it strike you as odd that, with all the intelligence work supposedly going on, such things as the breakup of the Soviet Union, the crumbling of the Berlin Wall, the former head of Sinn Fein meeting with the Queen of England, the worldwide economic collapse of recent years, the so-called Arab Spring, the recent <a href="http://www.washingtonpost.com/world/middle_east/muslim-rage-over-film-echoes-back-to-islams-internal-struggles/2012/09/16/774c8b44-0038-11e2-bbf0-e33b4ee2f0e8_story.html">attacks on U.S. embassies in several Muslim countries</a>, and a host of other significant historical events were not predicted by the experts? Wait, you say. So-and-so predicted this or that. Was it a lucky guess or was the prediction based on knowledge and skill? If the latter, we'd expect not just one correct prediction out of thousands, but a better track record than, say, flipping a coin. Find one expert who's consistently right about anything and we still have a problem. How can we be sure that this sharpshooter isn't just lucky? If thousands of people are making predictions, chance alone tells us that a few will make a right call now and then. The odds in favor of prediction success diminish the more events we bring in, but even someone who seems to defy the odds might be the one in a million who gets lucky with a string of guesses. Flip a coin enough times and once in a while you will get seven heads in a row. It's not expected, but it is predicted by the laws of chance. Likewise with predicting how many hurricanes we'll have next year or what stocks to buy or sell this year.<br />
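The "laws of chance" point is easy to make concrete. The following sketch (hypothetical forecasters, purely illustrative) computes the probability of a seven-heads streak and simulates how many of a thousand random guessers would appear to have a perfect record:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Chance of a fair coin coming up heads 7 times in a row
p_streak = 0.5 ** 7
print(p_streak)  # 0.0078125, i.e. about 1 in 128

# 1,000 hypothetical "forecasters" each make 7 yes/no calls at random;
# count how many go 7 for 7 by luck alone (expectation: 1000/128, about 8)
perfect = sum(
    1
    for _ in range(1000)
    if all(random.random() < 0.5 for _ in range(7))
)
print(perfect)
```

With enough guessers, a handful of perfect records is not evidence of skill; it is exactly what chance predicts.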
<a name='more'></a><br />
<br />
Recent events and trends are easier to remember and discern than either events in the distant past or unknown events that will occur in the future. So rather than do the hard work of studying the past, or accept the fact that many areas of human interest (the weather, technological advances, population trends, and more) are beyond our ability to predict at much better than chance levels or naive estimates, we extrapolate from whatever happened most recently. (A naive estimate of weather prediction uses either today's weather to predict tomorrow's weather or a seasonal average to predict this season's weather.)<br />
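Those naive estimates are worth making concrete. The following sketch uses synthetic temperature data, not real observations, to compare the two baselines just mentioned: persistence (predict today's value for tomorrow) and climatology (predict the seasonal average). A forecaster who can't beat these cheap baselines adds no predictive value:

```python
import math
import random

rng = random.Random(0)

# Synthetic daily temperatures: a seasonal cycle plus persistent
# (autocorrelated) weather noise. Values are illustrative, not real data.
days = 3650
phi, sigma = 0.8, 3.0          # day-to-day persistence of the noise
noise, temps = 0.0, []
for d in range(days):
    seasonal = 15 + 10 * math.sin(2 * math.pi * d / 365.25)
    noise = phi * noise + rng.gauss(0, sigma)
    temps.append(seasonal + noise)

# Two naive baselines for "tomorrow's" temperature:
#   persistence -> predict today's observed value
#   climatology -> predict the seasonal average for that date
persist_err = sum(abs(temps[d + 1] - temps[d]) for d in range(days - 1)) / (days - 1)
climo_err = sum(
    abs(temps[d + 1] - (15 + 10 * math.sin(2 * math.pi * (d + 1) / 365.25)))
    for d in range(days - 1)
) / (days - 1)
print(f"persistence MAE: {persist_err:.2f}  climatology MAE: {climo_err:.2f}")
```

On data with strong day-to-day persistence, the persistence baseline beats climatology for next-day forecasts; the point is that a prediction method should be judged against these naive baselines, not against zero knowledge.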
<br />
Some of the most amazing technological advances that have occurred
during my lifetime weren't predicted by any of the experts: the
Internet, the personal computer, the smart phone, digital music, to name
just a few. And I'm still waiting, along with millions of others, for
my jet pack.Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com2tag:blogger.com,1999:blog-8778506545780706793.post-37604025574318485532012-09-10T08:50:00.003-07:002012-09-12T18:17:21.147-07:00shoehorningShoehorning is the process of force-fitting some current affair into one's personal, political, or religious agenda. So-called <a href="http://www.skepdic.com/psychic.html">psychics</a> frequently shoehorn events to fit vague statements they made in the past. This is an extremely safe procedure, since they can't be proven wrong and many people aren't aware of how easy it is to make something look like confirmation of a claim after the fact, especially if you give them wide latitude in making the shoe fit. It is common, for example, for the defenders of such things as the <a href="http://www.skepdic.com/bibcode.html">Bible Code</a> or the "prophecies" of <a href="http://www.skepdic.com/nostrada.html">Nostradamus</a> to shoehorn events to the texts, thereby giving the illusion that the texts were accurate predictions.<br />
<br />
A classic example of psychic shoehorning is the case of Jeane Dixon. In 1956 she told <i>Parade</i> magazine: "As for the 1960 election Mrs. Dixon thinks it will be dominated by labor and won by a Democrat. But he will be assassinated or die in office though not necessarily in his first term." John F. Kennedy was elected and was assassinated in his first term. This fact was shoehorned to fit her broad prediction, and her reputation was made as the psychic who predicted JFK's violent death. In 1960 she apparently forgot her earlier prediction, because she then predicted that <a href="http://www.skepdic.com/dixon.html">JFK would fail to win the presidency</a>. Many psychic detectives likewise shoehorn their vague and ambiguous predictions to events in an effort to seem more insightful than they really are.<br />
<a name='more'></a><br />
<br />
Court TV exploited the interest in so-called psychic detectives with a series of programs, one featuring Greta Alexander. She said that a body had been dumped where there was a dog barking. The letter "s" would play an important role and there was hair separated from the body. She felt certain the body was in a specific area, although searchers found only a dead animal. She asked to see a palm print of the suspect—her specialty—and the detective brought one. She said that a man with a bad hand would find the body. Then searchers found a headless corpse, with the head and a wig nearby. The man who found it had a deformed left hand.<a href="http://www.crimelibrary.com/criminal_mind/forensics/psychics/7.html?sect=21">*</a> The letter 's' can be retrofitted to zillions of things. Many scenarios could be shoehorned to fit "hair separated from the body" and "bad hand." (Fans of psychics will overlook the fact that Alexander's reference to the bad hand was supposedly made after looking at the palm print of the suspect.)<br />
<br />
After the terrorist attacks on the World Trade Center and the Pentagon on September 11, 2001, fundamentalist Christian evangelists Jerry Falwell and <a href="http://www.positiveatheism.org/hist/quotes/revpat.htm">Pat Robertson</a> shoehorned the events to their agenda. They claimed that "<a href="http://web.archive.org/web/20110129225306/http://www.salon.com/news/1998/03/cov_11news.html">liberal</a> civil liberties groups, feminists, homosexuals and abortion rights supporters bear partial responsibility...because their actions have turned God's [sic] anger against America."<a href="http://www.actupny.org/YELL/falwell.html">*</a> According to Falwell, his god allowed "the enemies of America...to give us probably what we deserve." Robertson agreed. The American Civil Liberties Union has "got to take a lot of blame for this," said Falwell and Robertson agreed. Federal courts bear part of the blame, too, said Falwell, because they've been "throwing God [sic] out of the public square." Also "abortionists have got to bear some burden for this because God [sic] will not be mocked," said Falwell and Robertson agreed. [Hear these men talk it out in <a href="http://www.skepdic.com/sounds/falwell.mp3">mp3</a>.] <br />
<br />
Neither Falwell nor Robertson has any way of proving any of their claims. But such claims can't be disproved, either. Their purpose is simply to call attention to their agenda and to get free publicity in the news media. It is a way to take advantage of the fear and anger of people without fear of being proved a liar. It is a hit-and-hide tactic: no rebuttal is possible. One might respond, though, by saying that if there is an omniscient, all-powerful being who governs the universe, the notion that such a being would be allied with people like Falwell, Robertson, or suicide killers is absurd on its face and unworthy of serious discussion. <br />
<br />
After one has been roundly criticized by nearly everyone on the planet for egregious shoehorning of the Falwell/Robertson type, it is typical of the hypocrites to issue denials and claim their statements were taken out of context. Falwell issued the following statement: "I sincerely regret that comments I made during a long theological discussion on a Christian television program yesterday were taken out of their context and reported, and that my thoughts--reduced to sound bites--have detracted from the spirit of this day of mourning." Robertson, however, is unrepentant, and has added Internet pornography to his list of things that have so angered his god that He had to murder thousands of innocent people to express His almighty displeasure. If we don't change our ways, he says, his god is going to kill a lot more of us. Thus, when the carnage mounts in the years ahead as the U.S. and its allies try to eliminate terrorism and the terrorists continue murdering the innocent, we can look back at the dead and say that Pat Robertson predicted it.<br />
<br />
Finally, astrology is probably the most widely practiced <a href="http://skepdic.com/superstition.html"> superstition</a> and most popular <a href="http://skepdic.com/toothfairyscience.html">Tooth Fairy science</a> in the world today. Nevertheless, there are many who defend astrology by pointing out how accurate professional horoscopes are. Astrology “works,” it is said, but what does that mean? Basically, to say astrology works means that there are a lot of satisfied customers. There are a lot of satisfied customers because, thanks to <a href="http://59ways.blogspot.com/2012/05/subjective-validation.html">subjective validation</a>, it is easy to shoehorn any event to fit a chart. To say astrology "works" does not mean that astrology is accurate in predicting human behavior or events to a degree significantly greater than mere chance. There are many satisfied customers who believe that their horoscope accurately describes them and that their astrologer has given them good advice. Such evidence does not prove astrology so much as it demonstrates the <a href="http://skepdic.com/forer.html">Forer effect</a> and <a href="http://59ways.blogspot.com/2012/08/confirmation-bias.html">confirmation bias</a>. Good astrologers give good advice, but that does not validate astrology. (They also make ambiguous claims like the oracle of Delphi who told Croesus before he attacked Persia: “If you cross the river, a great empire will be destroyed.” So armed, Croesus attacked, resulting in the destruction of his own empire.) There have been several studies that have shown that people will use <a href="http://skepdic.com/selectiv.html">selective thinking</a> to make any chart they are given fit their preconceived notions about themselves and their charts. Many of the claims made about signs and personalities are vague and would fit many people under many different signs. 
Even professional astrologers, most of whom have nothing but disdain for sun sign astrology, can’t pick out a correct horoscope reading at better than a chance rate. Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com10tag:blogger.com,1999:blog-8778506545780706793.post-12663055608938616752012-09-03T07:16:00.000-07:002012-09-03T15:29:47.423-07:00confabulationHave you ever told a story that you embellished by putting yourself at the center when you knew that you weren’t even there? Or have you ever been absolutely sure you remembered something correctly, only to be shown incontrovertible evidence that your memory was wrong? No, of course not. But you probably know or have heard of somebody else who juiced up a story with made-up details or whose confidence in his memory was shown to be undeserved by evidence that his memory was false. <br />
<br />
Confabulation is an unconscious process of creating a narrative that is believed to be true by the narrator but is demonstrably false. The term is popular in psychiatric circles to describe narratives of patients with brain damage or a psychiatric disorder who make statements about what they perceive or remember. The narratives are known to be either completely fictional or in great part fantasy, but they are believed to be true by the patients.<br />
<a name='more'></a><br />
<br />
Neurologist <a href="http://www.amazon.com/exec/obidos/ISBN=0684853949/roberttoddcarrolA/">Oliver Sacks writes of a patient with a brain disorder</a> that prevented him from forming new memories. Even though “Mr. Thompson” could not remember who Sacks was, each time Sacks visited him he created a fictional narrative about their previous encounters. Sometimes Sacks was a butcher Thompson knew when he worked as a grocer. A few minutes later, he’d recognize Sacks as a customer and create a new fictional narrative. Sacks described Thompson’s confabulations as an attempt to make meaning out of perceptions that he could only relate to events in long-term memory. <br />
<br />
You might think: poor fellow; he has to construct his memories and fill in the blank parts with stuff he makes up. Yes, he does. But so do you, and so do I. There is an <a href="http://www.amazon.com/exec/obidos/ISBN=0674566769/roberttoddcarrolA/">overwhelming amount of scientific evidence on memory that shows memories are constructed by all of us</a> and that the construction is a mixture of fact and fiction. Something similar is true for perception. Our perceptions are constructions that are a mixture of sense data processed by the brain and other data that the brain supplies to fill in the blanks. <br />
<br />
Now there is <a href="http://www.amazon.com/exec/obidos/ISBN=0262582716/roberttoddcarrolA/">a growing body of scientific research showing that confabulation is not restricted to psychiatric patients or gifted fantasizers</a> who believe they were abducted by aliens for reproductive surgery. The evidence shows that many of the narratives each of us produces daily to explain how we feel, why we did something, or why we made a judgment are confabulations: mixtures of fact and fiction that we believe to be completely true. <br />
<br />
This research should give us pause. Many of us accuse others of making stuff up when they present arguments that are demonstrably full of false or questionable claims, but it’s possible that people who make stuff up aren’t even aware of it. They might really believe the falsehoods they utter. <br />
<br />
For example, Paul Ryan was accused of <a href="http://www.cbsnews.com/8301-3460_162-57504979/democratic-aide-gop-thinks-lying-is-a-virtue/">lying</a> and <a href="http://www.examiner.com/article/ryan-s-marathon-deception-raises-questions-of-character-and-credibility">deception</a> in a speech he gave at the Republican National Convention. Here is what Ryan actually said:<br />
<br />
<div style="margin-left: 40px;">
My home state [Wisconsin] voted for President Obama. When he talked about change,
many people liked the sound of it, especially in Janesville, where we
were about to lose a major factory. </div>
<blockquote class="tr_bq">
A lot of guys I went to high school with worked at that GM plant. Right
there at that plant, candidate Obama said: “I believe that if our
government is there to support you … this plant will be here for another
hundred years.” That’s what he said in 2008. </blockquote>
<blockquote class="tr_bq">
Well, as it turned out, that plant didn’t last another year. It is
locked up and empty to this day. And that’s how it is in so many towns
today, where the recovery that was promised is nowhere in sight. </blockquote>
What was Ryan's point? What are the facts? And was he lying or trying to be deceptive? Was his point that the economic recovery under Obama isn't working and the closure of the Janesville plant is just one example of what's happening in many towns where the recovery isn't happening? Was he implying that he and Romney support the idea of the government bailing out plants so they don't have to close, while Obama <i>says </i>he supports the idea but doesn't actually do it? Those who called Ryan a liar or deceptive thought his point was that Obama is a hypocrite and a failure because he implied he (i.e., the government) would support the workers but in fact Obama closed the plant down. (Ten months after Obama's Janesville speech, and shortly before Obama took office, the factory ceased manufacturing SUVs and nearly all the plant's employees had been laid off; 50 stayed in the massive plant until late April, a few months after Obama became president. The plant is now considered "idle.") As far as I can tell, everything Ryan said is true. Was what he said deceptive? <a href="http://www.chicagomag.com/Chicago-Magazine/The-312/August-2012/Paul-Ryans-Deception-on-the-Janesville-GM-Plant-Actually-Its-Complicated/">Ryan took Obama's words out of context.</a> Candidate Obama went on in his speech at the Janesville plant to praise the development of hybrids and energy-efficient vehicles. His speech was focused on retooling plants that close to support creating millions of jobs around clean, renewable energy. Obama gave an example from a nearby town where workers in a manufacturing plant that had closed and moved to Mexico were retrained to produce wind turbines. Obama promised economic recovery. So did George W. Bush. Both emphasized that nobody can predict with accuracy how long the recovery will take. 
Since the Janesville plant is "idle" rather than completely closed down, there is still a chance that someday it will re-open, perhaps with a different product along the lines that Obama supports.<br />
<br />
I think what Ryan said is deceptive, but not for the reasons given by journalists like <a href="http://www.truthdig.com/report/item/ryans_diet_of_whoppers_20120831/">Eugene Robinson</a> who attacked his speech. I think it is deceptive because it is selective and ignores examples where recovery is in sight. At the same convention where Ryan made his comments, New Jersey governor Chris Christie called California governor Jerry Brown an "old retread," but California gained 365,000 jobs in the 12 months ending in July 2012, growing jobs twice as fast as the nation: 2.6 percent vs. 1.3 percent. <a href="http://tinyurl.com/98kc2mg">"Professional, scientific, technical and information services added 60,000 jobs....In the second quarter of 2012, more venture capital was invested in California-based companies than in the other 49 states combined."</a> California and the nation are a long way from full recovery, but to infer that because the recovery is "nowhere in sight" in some places there is no recovery under Obama is a mistake. On the other hand, I don't think Ryan made stuff up so much as he left stuff out. I don't think he confabulated so much as engaged in <a href="http://59ways.blogspot.com/2012/03/false-implication.html">false implication</a>.<br />
<br />
<br />
<a href="http://www.lucs.lu.se/choice-blindness-group/">Studies on what is now called “choice blindness”</a> demonstrate that people who make stuff up often aren’t aware of it and really believe the falsehoods they utter. Researchers showed male subjects two pictures of female faces, asked which one they found more attractive, and then asked why they chose the one they did. The photos were then turned face down and a trick was played: when one of the photos was turned back over, sometimes it was not the one the subject had selected. Yet in a majority of the trials the subject didn’t notice the switch and proceeded to provide details as to why he selected the face he hadn’t actually selected. The majority of subjects are known to have confabulated. But it is possible that they all did. <br />
<br />
<a href="http://www.amazon.com/exec/obidos/ISBN=0521284147/skepticality-20/">Daniel Kahneman and Amos Tversky</a> are famous for having discovered that many of us answer an easier question than the one that is posed. These subjects were asked which female face they found more attractive. As far as I know, there was no attempt on the part of the researchers to discover what criteria the subjects would use to determine how they measure attractiveness in females. It is likely that most of us have no problem deciding whether we find a person attractive, but how many of us have ever reflected on the criteria we use in making that decision? If there is just one photo to look at, most of us would instantly decide whether the face is attractive. But would we know why we feel the way we do? Our brain must have gone through some sort of decision-making process in an instant. What data our brain was using to arouse our feelings is unknown to us at the moment we decide that the face is or isn’t attractive. The same would be true for making a comparison between two faces. We might do it instantly and there is no way we could be conscious of the criteria our brain is using to drive our feelings. So, when asked why we find face A more attractive than face B, we make stuff up. <br />
<br />
For all we know, when asked which face is more attractive, we answer not that question but another one such as “which girl would I want to kiss” or “which girl looks friendlier” or “which girl would be more likely to find me attractive.” Yet, when we give our reasons for our choice to the experimenter, we may say things like “She has a lovely smile. Her hairdo is very nice. She looks like she’d be fun to party with. She reminds me of some actress I like.” The actual reasons for our choice may or may not coincide with what we say, and we usually have no way of knowing whether we’re telling the truth even though we believe we are. We might state what we think a man should say when describing a woman as attractive rather than state or even know why we really find one face more attractive than another. <br />
<br />
The researchers who did the study on face choices also did a study called “Magic at the Marketplace: Choice Blindness for the Taste of Jam and the Smell of Tea.” Many people had no problem explaining why they favored a jam even though the second sample they were given, presented as the one they had selected, was actually the other jam. Several other studies have found that confabulation is rather common among us ordinary folk who have not yet been diagnosed with a brain disorder. <br />
<br />
Perhaps we make up stories that seem plausible to us, even though we don’t really have a clue as to their accuracy, for the same reason that Mr. Thompson did. We confabulate to make sense out of our experience, our feelings, our perceptions, and our memories. Unlike Mr. Thompson, though, most of us have brains that can access vast quantities of data in an instant, but <a href="http://www.amazon.com/exec/obidos/ISBN=0307378217/roberttoddcarrolA/">these brain processes are taking place below the level of consciousness</a>. We’re often not really aware of why we’re constructing the stories we do. <br />
<br />
It may be hard to believe, but the evidence is overwhelming that we don’t know ourselves as well as we think we do.Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com6tag:blogger.com,1999:blog-8778506545780706793.post-25817182793160262232012-08-27T06:42:00.003-07:002012-08-28T07:14:51.678-07:00confirmation biasConfirmation bias refers to a type of <a href="http://www.skepdic.com/selectiv.html">selective thinking</a> whereby one tends to notice and look for what confirms one's beliefs, and to ignore, not look for, or undervalue the relevance of what contradicts one's beliefs. For example, if you believe that during a full moon there is an increase in admissions to the emergency room where you work, you will take notice of admissions during a full moon but be inattentive to the moon when admissions occur during other nights of the month. A tendency to do this over time unjustifiably strengthens your belief in a relationship between the full moon and emergency room admissions, as well as other <a href="http://www.skepdic.com/fullmoon.html">lunar effects</a>. <br />
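The full-moon example can be made concrete with a small contingency-table calculation (the counts below are invented for illustration). Noticing only the busy full-moon nights is the bias; testing the belief requires all four cells, including the nights we forget:

```python
# A minimal sketch (invented numbers) of why all four cells of the
# contingency table matter. Remembering only "full moon AND busy"
# nights is confirmation bias; the comparison needs the base rates too.
nights = {
    ("full moon", "busy"): 9,        # the nights we remember
    ("full moon", "quiet"): 21,
    ("other", "busy"): 90,           # the nights we ignore
    ("other", "quiet"): 245,
}

def busy_rate(moon):
    """Fraction of nights of the given type that were busy."""
    busy = nights[(moon, "busy")]
    quiet = nights[(moon, "quiet")]
    return busy / (busy + quiet)

full_rate = busy_rate("full moon")   # 9/30  = 0.30
other_rate = busy_rate("other")      # 90/335 ≈ 0.27
print(f"busy-night rate: full moon {full_rate:.2f}, other nights {other_rate:.2f}")
```

With these made-up counts the busy-night rate is nearly the same with or without a full moon, even though the nine remembered full-moon nights felt like strong confirmation.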
<br />
This tendency to give more attention and weight to data that support our beliefs than we do to contrary data is especially pernicious when our beliefs are little more than prejudices. If our beliefs are firmly established on solid evidence and valid confirmatory experiments, the tendency to give more attention and weight to data that fit with our beliefs should not lead us astray as a rule. Of course, if we become blinded to evidence truly refuting a favored hypothesis, we have crossed the line from reasonableness to closed-mindedness. <br />
<a name='more'></a><br />
<br />
Numerous studies have demonstrated that people generally give an excessive amount of value to confirmatory information, that is, to positive or supportive data. The "most likely reason for the excessive influence of confirmatory information is that it is easier to deal with cognitively" (<a href="http://www.amazon.com/exec/obidos/ISBN=0029117062/roberttoddcarrolA/">Thomas Gilovich, <i>How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life</i></a>). It is much easier to see how data support a position than it is to see how they might count against the position. Consider a typical <a href="http://www.skepdic.com/esp.html">ESP</a> experiment or a seemingly <a href="http://www.skepdic.com/dreams.html">clairvoyant dream</a>: Successes are often unambiguous or data are easily massaged to count as successes, while negative instances require intellectual effort to even see them as negative or to consider them as significant. The tendency to give more attention and weight to the positive and the confirmatory has been shown to influence <a href="http://www.skepdic.com/memory.html">memory</a>. When digging into our memories for data relevant to a position, we are more likely to recall data that confirms the position. <br />
<br />
Researchers are sometimes guilty of confirmation bias by setting up experiments or framing their data in ways that will tend to confirm their hypotheses. They compound the problem by proceeding in ways that avoid dealing with data that would contradict their hypotheses. For example, some <a href="http://www.skepdic.com/parapsy.html">parapsychologists</a> used to engage in <a href="http://www.skepdic.com/opstart.html">optional starting and stopping</a> in their <a href="http://www.skepdic.com/esp.html">ESP</a> research. Experimenters might avoid or reduce confirmation bias by collaborating in experimental design with colleagues who hold contrary hypotheses, as <a href="http://www.richardwiseman.com/resources/twominds.pdf">Richard Wiseman (skeptic) and Marilyn Schlitz (proponent)</a> have done. We have to continually remind ourselves of this tendency and actively seek out data contrary to our beliefs. Since this is unnatural, most of us are doomed to die with our biases on.<br />
<br />
To counteract the natural tendency to try to confirm our beliefs, science has developed methods of testing claims that involve trying to <i>falsify </i>them, rather than trying to confirm them. <a href="http://www.skepdic.com/paranormalinvestigator.html">Paranormal investigators</a> who set out to prove ghosts haunt some hotel or ancient castle aren't being very scientific, no matter how many electronic gizmos they carry in their toolkit. The scientific paranormal investigator approaches an investigation with an open mind, collects and examines as much relevant evidence as is reasonable for the claim being investigated, develops hypotheses (alternative explanations), and tries to falsify them. Yes, a scientist tries to falsify, not verify, his hypothesis. If you set out to verify your hypothesis, you are very likely to be misdirected by confirmation bias. You will look only for those things that confirm what you believe and you will systematically ignore those things that might disconfirm your belief. To keep an open mind, the scientist, like a good detective, must not form hypotheses too early in the investigation, as the tendency of all of us is to confirm, not disconfirm, our hypotheses. Unless you are lucky and your first guess happens to be the right one, you run the risk of building up a convincing case for a false claim. The study of <a href="http://www.skepdic.com/refuge/funk58.html">criminal profilers,</a> <a href="http://www.skepdic.com/medium.html">psychics</a>, and <a href="http://www.skepdic.com/psychdet.html">psychic detectives</a> offers examples of how confirmation bias works: colleagues, the media, and gullible law enforcement officers focus on anything that seems to confirm the profile the investigator is working with or the prediction of the psychic, while ignoring all the claims that were irrelevant to that profile or made no sense in light of the prediction. It cannot be overemphasized: relevant data must be collected in ways that don't let one's biases close off important avenues of investigation.<br />
<br />
Those who favor their interpretations of <a href="http://skepdic.com/essays/evaluatingexperience.html">personal experience</a> over the results of double-blind, randomized, control group studies are doomed to die with their biases on. One way to counteract confirmation bias is to consciously seek out literature that opposes your beliefs and hang around with people who don't share your cherished opinions. To do so, however, is so unnatural that very few people will do it. I can attest that one of the most tedious tasks I ever set for myself was to read the works of <a href="http://www.skepdic.com/refuge/radin1.html">Dean Radin</a>, <a href="http://www.skepdic.com/refuge/afterlife.html">Gary Schwartz</a>, <a href="http://www.skepdic.com/refuge/tart.html">Charles Tart</a>, <a href="http://www.skepdic.com/refuge/hubbard.html">L. Ron Hubbard</a>, and the like. On the other hand, my graduate training in philosophy required me to read the likes of St. Thomas Aquinas, St. Augustine, Bishop George Berkeley, and many others whose ideas I could never agree with. Had I been allowed to read only Hume or other philosophers I agreed with, my education would have been an impoverished one. Robert Todd Carrollhttp://www.blogger.com/profile/02865938081392957563noreply@blogger.com6