Autism's False Prophets: Bad Science, Risky Medicine, and the Search for a Cure

CHAPTER 10. Science and Society

There is nothing to fear except the persistent refusal to find out the truth.

—DOROTHY THOMPSON

Science is influenced by society. In the fourth century B.C., two Greek philosophers, Plato and Aristotle, believed the earth was the center of the universe (geocentrism). By the Middle Ages, everyone believed it. Further support for the theory of geocentrism came from Christian biblical references, such as “The world is firmly established; it cannot be moved” (Psalm 93:1, Psalm 96:10, and 1 Chronicles 16:30) and “[The Lord] set the earth on its foundations” (Psalm 104:5). As a consequence, geocentrism assumed the power of religious dogma.

In 1543, however, Copernicus, a Polish astronomer and mathematician, challenged the notion of geocentrism by claiming the earth revolved around the sun, not the other way around. Few believed him. But in December 1610, shortly after the invention of the telescope, the Italian astronomer Galileo proved Copernicus was right. Galileo showed that Venus exhibited a full set of phases similar to the moon’s, a phenomenon that could be possible only if Venus, like the earth, revolved around the sun. Galileo published his findings in his Dialogue Concerning the Two Chief World Systems. But Galileo’s work didn’t sit well with the Roman Catholic Church, and in 1633, the papacy accused him of heresy. The trial didn’t last very long. Church officials ruled, “The proposition that the sun is in the center of the world and immovable from its place is absurd, philosophically false, and formally heretical, because it is expressly contrary to Holy Scriptures.” The Church banned Galileo’s offending book, forbade publication of his future works, and ordered him imprisoned for the rest of his life. But Galileo knew he was right; as he was led away from his Roman inquisitors, he is said to have muttered, referring to the earth: “Eppur si muove” (And yet it moves).

Galileo’s science didn’t fit the culture of his time, so he was denounced for it. Today is no different. On October 5, 1999, Dan Burton appeared on the CNN program Talk Back Live with Bobbie Battista. Burton believed the MMR vaccine had caused his grandson’s autism, and he planned to use his position as the chairman of the Committee on Government Reform to prove it. During the broadcast, a doctor on the program challenged Burton by describing a recent study by British epidemiologist Brent Taylor that contradicted Burton’s theory. Burton was incensed. “At the New Jersey Conference, Stop Autism Now, there were twelve hundred parents,” he said. “And they were asked the question ‘Do you believe that the autism of your child was caused by vaccines?’ Seven hundred and fifty raised their hands!”

Burton considered common belief to be common wisdom. If most people believed vaccines caused autism, then vaccines caused autism. Later, when Brent Taylor presented his data in front of Burton’s congressional committee, Burton denounced him in much the same way the Church had denounced Galileo. But instead of using Holy Scriptures to make his case, Burton used the weapon of his time: conflict of interest. He accused Taylor and others of being unduly influenced by the federal government and pharmaceutical companies. “We have been checking into all the financial records,” said Burton, “and we are finding some possible financial conflicts.” Burton, appealing to the prevalent notion that everyone is in someone’s pocket, implied that because of these unseen influences, Taylor’s study should be discarded. “We are slipping into a new form of darkness,” wrote Steven Milloy, “one where it’s popular, profitable and politically expedient to suppress science.”


IN A CULTURE DOMINATED BY CYNICISM AND HUNGRY FOR SCANDAL, many people believe that doctors, scientists, and public health officials cater to a pharmaceutical industry willing to do anything—including promote dangerous vaccines—for profit. So it’s not hard to appeal to the notion that pharmaceutical companies are evil. During the breast implant controversy, one patient advocate said, “First, let’s get over the myth that just because Harvard or the Mayo Clinic or Yale says that something is correct, that it is correct. We know where their bread is buttered. We know who gives the funding. Manufacturers fund; scientists do the studies.” Comments made during the vaccine-autism controversy were no different. “I’m a patriot,” said Boyd Haley. “But the thing I find very discouraging about our government is that we’re more interested in protecting the income of professionals and the pharmaceutical industry than in protecting the American people.”

Current movies also reflect this sentiment. In The Constant Gardener, released in 2005, a pharmaceutical company makes an antibiotic that is highly effective against multidrug-resistant tuberculosis. When the drug is found to have a fatal side effect, the company buries its victims in a mass grave outside of town and kills others who know about the problem, including the sympathetic wife of a government official. In The Fugitive, released in 1993, a pharmaceutical company hires a one-armed man to kill a doctor (Richard Kimble) when he finds that one of the company’s drugs, nearing FDA approval, causes fatal liver damage. Neither the screenwriters nor the public considered these two scenarios implausible. Viewers were perfectly willing to believe that pharmaceutical companies hire hit men to kill people.

To some extent, pharmaceutical companies have brought this upon themselves. Twenty years ago, direct-to-consumer advertising of prescription medicines was uncommon. Now television viewers encounter a barrage of advertisements from pharmaceutical companies showing that medicines can be miraculous: people with allergies run comfortably through pollen-filled fields, and women skate effortlessly despite joint pain. Also, the types of drugs being made have started to change: more research dollars are being spent to develop lifestyle products, like those to combat impotency or hair loss. It’s hard to argue for an industry’s special place in society when it’s hawking yet another potency product. Companies are starting to look like snake-oil salesmen.

And it’s not just the unseemliness of promoting lifestyle products that hurts pharmaceutical companies; some marketing practices have clearly evolved from aggressive to unethical. As a consequence, we don’t trust pharmaceutical companies. Nor do we trust the doctors or scientists who work with them. Kenneth Rothman, an epidemiologist from Boston University, calls this “the new McCarthyism.” Most people assume that investigators who have received research support from pharmaceutical companies cannot have an unbiased view. But where is the evidence for this in the vaccine-autism story? There exists not one example of a scientist or doctor serving on a vaccine advisory committee who, acting in his or her own financial interest, knowingly gave bad advice. In fact, after recommending vaccines for the nation’s children, policymakers at the CDC—some of whom have performed studies funded by vaccine makers and are, therefore, closest to the data—invariably give these vaccines to their own children and grandchildren.

Although those who claim that vaccines cause autism have been quick to point out conflicts of interest among the scientists and doctors who disagree with them, few of the parent advocates, politicians, or scientists who speak against vaccines are without conflicts. Lyn Redwood, cofounder of Safe Minds, sued the federal government for compensation. So did Representative Dan Burton’s daughter, Danielle Burton-Sarkine. Both Redwood and Burton stood to financially benefit—either directly or indirectly—from the public’s perception that vaccines cause autism. Robert F. Kennedy Jr. has a direct relationship with one of the largest product-liability law firms in the United States. Vijendra Singh, who testified at a Burton hearing that the MMR vaccine caused autoimmunity, received support from the Vaccine Autoimmunity Project. Richard Deth and Mady Hornig, both of whom claimed they had found, in their laboratories, how thimerosal caused autism, received funding from Safe Minds. And Andrew Wakefield received more than $800,000 from a personal-injury lawyer representing parents who were suing pharmaceutical companies.

So, if everyone appears to be in someone’s pocket, who or what can be trusted? How can people best determine if the results of a scientific study are accurate? The answer is threefold: transparency of the funding source, internal consistency of the data, and reproducibility of the findings.

People have the right to know the funding source for scientific papers. For example, when Andrew Wakefield published his study of autistic children in the Lancet, he should have acknowledged that he had previously received money from Richard Barr and that Barr represented some of these children in a lawsuit against pharmaceutical companies. The irony in Andrew Wakefield’s case was that not only did he fail to inform the Lancet’s readership of his funding source, but he failed to inform his coinvestigators, most of whom later withdrew their names from his paper. Although funding sources should be reported in every scientific paper, they’re probably the least important factor in judging a study’s worth or reliability.

More important are the strength and internal consistency of the data. When Richard Horton found that Andrew Wakefield had received funds from a personal-injury lawyer, he was outraged. But Horton’s anger should have been aimed at the obvious weaknesses in Wakefield’s paper, not at his perceived motives. Andrew Wakefield had proposed that measles vaccine damaged children’s intestines, allowing entrance of harmful toxins that caused autism. It was a hypothesis for which Wakefield offered not one shred of scientific evidence. Wakefield’s paper shouldn’t have been published not because he had received funds from a personal-injury lawyer but because his assertions were based on flimsy, poorly conceived science.

Probably the most important aspect of determining whether a scientific assertion is correct is the reproducibility of its findings. Superb, reproducible studies have been funded by pharmaceutical companies and poor, irreproducible studies have been funded independently, and vice versa. In the end, it doesn’t matter who funds a scientific study. It could be funded by pharmaceutical companies, the federal government, personal-injury lawyers, parent advocacy groups, or religious organizations. Good science will be reproduced by other investigators; bad science won’t.

Although the story of Andrew Wakefield and the MMR vaccine is an excellent example of the importance of reproducibility in assuring the validity of a scientific study, no story is more dramatic or more instructive than one that began at 1:00 p.m. on March 23, 1989. That’s when Stanley Pons and Martin Fleischmann, electrochemists working at the University of Utah, announced they had caused nuclear fusion in a test tube. Pons and Fleischmann claimed they had taken a palladium electrode, inserted it into heavy water (deuterium oxide), and observed a fusion event (when two lighter nuclei fuse to form a larger nucleus, releasing energy). This was big news. Pons and Fleischmann had, it seemed, found a way to provide safe, inexpensive, limitless energy. The media ate it up. Jerry Bishop, a superb science reporter, wrote about it in a front-page story in the Wall Street Journal. Utah legislators were so proud the breakthrough had occurred at the University of Utah that they allocated more than $4 million to establish the National Cold Fusion Institute on the university campus. Most scientists, however, were immediately skeptical, and for good reason: the Pons-Fleischmann experiment violated the first law of thermodynamics, which states that one can’t get more energy out of something than is put into it. Later, when seventy different groups of physicists failed to find what Pons and Fleischmann had found, the promise of cold fusion disappeared. The building that housed the National Cold Fusion Institute now stands as a literal monument to irreproducible science.


OTHER ASPECTS OF OUR CULTURE ALSO DETERMINE HOW PEOPLE process scientific information. During the past few decades, doctors have started to treat patients differently. No longer do they always take a paternalistic, I-know-what’s-best-for-you-so-don’t-worry approach. Doctors are more apt to encourage patients to actively participate in their own medical care. And nothing has empowered people more than the Internet. Now patients have ready access to a wealth of information about health, medicine, and science. During a recent segment on the Oprah Winfrey Show, a celebrity mother was asked where she had gotten her medical information. “I attended the University of Google,” she replied. J. A. Muir Gray, a British researcher and author of The Resourceful Patient, celebrates the culture of shared expertise. “In the modern world,” he said, “medicine was based on knowledge from sources from which the public was excluded—scientific journals, books, journal clubs, conferences, and libraries. Clinicians had more knowledge than patients mainly because patients were denied access to knowledge. The World Wide Web, the dominant medium of the post-modern world, has blown away the doors and walls of the locked library.” When Lyn Redwood and Sallie Bernard searched the medical literature for clues to the causes of autism, they were doing only what many doctors encourage parents to do: participate in the care of their children.

But empowering parents to make medical decisions comes with a price. Information on the Internet is typically unfiltered—anyone can say anything, and health advice can be terribly misleading. The vaccine-autism controversy is a good example. Doctors now constantly encounter parents who don’t want to give their children MMR or thimerosal-containing influenza vaccines, fearing they might cause autism. “I’ve done my research,” parents will say, “and I don’t want my child to have that shot.” By “research,” the parents usually mean that they have perused a variety of Web sites on the Internet. But that’s not research. If parents want to do genuine research on the subject of vaccines, they should read the original studies of measles, mumps, and rubella vaccines; compare them with studies of the combined MMR vaccine; and analyze the ten epidemiological studies that examined whether MMR caused autism. If they want to research thimerosal, they should read the hundred or so studies on mercury toxicity, as well as the eight epidemiological studies that examined whether thimerosal caused harm. This would take a lot of time. And few parents have the background in statistics, virology, toxicology, immunology, pathogenesis, molecular biology, and epidemiology required to understand these studies. Instead, they read other people’s opinions about them on the Internet. Parents can’t be blamed for not reading the original studies; doctors don’t read most of them either. And frankly, few doctors have the expertise necessary to fully understand them, so they rely on experts who collectively have that expertise.

The experts who are responsible for making vaccine recommendations in the United States, and for determining whether vaccines are safe, serve the CDC, the AAP, the American Academy of Family Physicians, and the National Vaccine Program Office. And they do a pretty good job. During the past century, vaccines have helped to increase the life span of Americans by thirty years, and they have a remarkable record of safety. But if you’re looking for a quote guaranteed to anger the American public, you need look no further than one delivered by Congressman Henry Waxman during Dan Burton’s hearings. “Let us let the scientists explore where the real truth may be,” said Waxman. In other words, let the experts figure it out.

Waxman’s plea doesn’t have much traction in today’s society. Because of the Internet, everyone is an expert (or no one is). As a consequence, for some, there are no truths, only different experiences and different ways of looking at things. “This is the way that the world is going,” laments Richard Smith, editor of the prestigious British Medical Journal, in an article titled “The Discomfort of Patient Power.” “It’s called post-modernism. There is no ‘truth’ defined by experts. Rather, there are many opinions based on very different views and theories of the world. Doctors, governments, and even the British Medical Journal might hanker after a world where their view is dominant. But that world is disappearing fast.”

If doctors are going to encourage patients to make their own choices, they have to be willing to stand back and watch them make bad ones. They can’t have it both ways. “Patients will often choose to ignore their doctors’ advice and do something that their doctors regard as odd, even crazy,” writes Richard Smith. Michael Fitzpatrick, the author of MMR and Autism, also sees danger in a culture in which experts cede their expertise. “We need to establish the foundations of an informal contract between parents and professionals that respects both our different spheres of expertise and—most importantly—the distinctions between them. Doing the best for our children means concentrating on being parents and leaving science to the scientists, medicine to the doctors, and education to the teachers.” Fitzpatrick realizes that his request flies in the face of modern parenting. “So influential has the rhetoric of anti-paternalism become,” says Fitzpatrick, “that this now appears a hopelessly old-fashioned proposal. But it is both principled and pragmatic. If I am having trouble with my car, I do not take to the Internet to study motor engineering; I take it to the garage and ask a mechanic to repair it. Even though I do not understand his explanation of the problem, I trust him. In a similar way, we put our trust in numerous people we encounter in our everyday lives. If we did not, society would simply collapse. The peculiarity of our current predicament is the selective withdrawal of trust from scientific and medical professionals, which is both unjustified and mutually damaging.”

For many parents, the advice given by health care professionals about vaccines is just one more opinion in a sea of opinions offered on the Internet.


WHEN MARK AND DAVID GEIER AND DEFEAT AUTISM NOW PROPOSED to treat autistic children with mercury chelation, Lupron, restricted diets, and antibiotics—therapies not supported by any rigorous scientific studies—they were appealing to a long-standing, prevalent aspect of our culture: the lure of alternative medicine. As science reveals more and more about the workings of nature, this attraction hasn’t weakened. If anything, it’s gotten stronger. More than 60 million Americans use supplements, megavitamins, herbs, and other alternative therapies in what has become a $40-billion-a-year industry.

America’s commitment to alternative medicines is so strong that it led to the creation of a branch within NIH. In 1991, Congress passed a bill to create the Office of Alternative Medicine. Seven years later, this office became the National Center for Complementary and Alternative Medicine (NCCAM). Tom Harkin, a popular senator from Iowa, promoted the legislation. Harkin had been influenced by fellow Iowan Berkeley Bedell, who was convinced that his Lyme disease had been cured by eating special whey from Lyme-infected cows. Bedell’s wasn’t the only anecdote that convinced Harkin to carve out a special place on the NIH agenda. He, too, had gone against the advice of his doctors with amazing results: his allergies had been virtually eliminated by eating bee pollen. The center that now spends millions of dollars to study alternative medicines was launched by these two experiences.

Although medicines that are alternative in today’s culture may not be embraced by traditional, Western-trained doctors, that doesn’t mean they don’t work; it might mean only that they haven’t been tested yet. But what worried many scientists and physicians about NCCAM was that alternative medicines would be exempt from the scientific method. Fortunately, that hasn’t happened.


ALTHOUGH IT’S BEEN AROUND FOR ABOUT 500 YEARS, THE SCIENTIFIC method is foreign to many. That’s because most people don’t understand what science is and what it isn’t. People think of science as a body of knowledge or scientific societies or scientists. But it’s really just a way of thinking about a problem. Indeed, most of us use the scientific method during routine activities. For example, if a radio doesn’t work, we formulate a series of hypotheses: it isn’t plugged in; the battery is dead; it isn’t tuned to a local station. Then we go about testing each of these hypotheses separately until we have an answer. This is the scientific method—isolating one variable at a time and testing it.


USING THE SCIENTIFIC METHOD, RESEARCHERS FUNDED BY NCCAM have now tested several alternative medicines. They have found that glucosamine and chondroitin sulfate don’t treat arthritis; saw palmetto doesn’t treat enlarged prostates; St. John’s wort doesn’t treat depression; shark cartilage doesn’t treat cancer (based on the false belief that sharks don’t get cancer); and Laetrile doesn’t treat leukemia. Worse still, these studies have revealed a frightening aspect of alternative medicines: they can be quite dangerous. One natural alternative called compound Q, derived from the Chinese cucumber, was used to treat AIDS patients desperate for a cure. After it was found to cause severe toxic reactions and coma, compound Q was abandoned.

The lure of alternative medicines is understandable. When doctors fail to offer a cause or a cure for a particular disease, purveyors of alternative medicines often step into the void. Jerome Groopman, a professor of medicine at Harvard Medical School and the author of How Doctors Think, suffered severe, unrelenting pain in his wrist. “When I was a patient with a serious problem of uncertain outcome, I felt the powerful temptation to seek a magical solution,” said Groopman. “Most doctors are sympathetic to this sensibility. But a good doctor distinguishes magic from medicine.” For some, however, science is only an intrusion into beliefs that are as strong as religious convictions. Even when a particular notion is consistently refuted by scientific studies, they refuse to abandon it. During one of Dan Burton’s hearings, a clinician named Kathy Pratt, who took care of autistic patients, said she was convinced that vaccines were the culprit “regardless of what the research tells us.” Because science is the only discipline that enables one to distinguish myth from fact, Pratt’s statement was particularly unsettling. “Uncovering [the laws of nature] should be the highest goal of a civilized society,” says physicist Robert Park. “Not because scientists have a greater claim to a greater intellect or virtue, but because the scientific method transcends the flaws of individual scientists. Science is the only way we have of separating truth from ideology or fraud or mere foolishness.” And science is enormously open-minded. If people believe they have a treatment for a particular disease or that one thing causes another, the scientific method can determine whether they are right. Suspected causes will be found to be true or not, and therapies will be found to work or not. “Things that are wrong are ultimately set aside and things that are right gain traction,” said Stephen Straus, former director of NCCAM. Straus had a framed quotation on the wall of his office: “The plural of anecdotes is not evidence.”

Although science is open-minded, the scientific method isn’t terribly politically correct. To determine whether a medicine works, scientists establish a hypothesis, formulate burdens of proof, and subject those burdens to statistical analysis. Over time, a truth emerges. Something is either true or it isn’t. And although our instinct is to be open to a wide range of attitudes and beliefs, there comes a time when it becomes clear that certain beliefs just don’t hold up. MMR and thimerosal don’t cause autism, and secretin, chelation therapy, and Lupron don’t cure it.


ALTHOUGH THE SCIENTIFIC METHOD HAS ALMOST SINGLE-HANDEDLY brought us out of the Dark Ages and into the Age of Enlightenment, it can be difficult to explain how it works. Here’s the problem. In determining whether, for example, MMR causes autism, investigators form a hypothesis. The hypothesis is always formed in the negative, known as the null hypothesis. In the MMR-causes-autism case, the hypothesis would be, “MMR does not cause autism.” Epidemiological studies have two possible outcomes: (1) Investigators might generate data that reject the null hypothesis. Rejection would mean that the risk of autism was found to be significantly greater in children who received MMR than in those who didn’t. (2) Investigators might generate data that do not reject the null hypothesis. In this case, the risk of autism would have been found to be statistically indistinguishable in children who did or didn’t receive MMR. But there is one thing those who use the scientific method cannot do; they cannot accept the null hypothesis. In other words, scientists can never say never. This means that scientists can’t prove MMR doesn’t cause autism in absolute terms because the scientific method allows them to say it only at a certain level of statistical confidence.
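The logic of rejecting, or failing to reject, a null hypothesis can be sketched numerically. What follows is an illustrative Python sketch only, with invented counts, not data from any of the studies this chapter describes: it runs a standard two-proportion z-test (normal approximation) comparing autism diagnoses in hypothetical vaccinated and unvaccinated cohorts, and shows why a large p-value means "fail to reject the null hypothesis," never "accept" it.

```python
import math

def two_proportion_pvalue(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)            # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF, via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Invented, illustrative counts: 50 diagnoses among 100,000 vaccinated
# children versus 48 among 100,000 unvaccinated children.
p = two_proportion_pvalue(50, 100_000, 48, 100_000)
print(f"p = {p:.3f}")

# A p-value well above 0.05 means the data do NOT reject the null
# hypothesis ("MMR does not cause autism") -- but the method never
# lets us accept the null outright; it only fails to reject it.
```

The asymmetry the text describes lives in that last comment: no matter how large the study, the output is "no detectable difference at this level of confidence," not "proof of no difference."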

An example of the problem with not being able to accept the null hypothesis can be found in an experiment some children might have tried after watching the television show Superman. Suppose a little boy believed that if he stood in his backyard and held his arms in front of him (using Superman’s interlocking thumb grip), he could fly. He could try once or twice or a thousand times. But at no time would he ever be able to prove with absolute certainty that he couldn’t fly. The more times he tried and failed, the more unlikely it would be that he would ever fly. But even if he tried to fly a billion times, he wouldn’t have disproved his contention; he would only have made it all the more unlikely. When scientists try to explain to the public the results of their studies, they always have this limitation in the back of their minds. They know the scientific method does not allow them to say, “MMR doesn’t cause autism.” So they say something like, “All of the evidence to date doesn’t support the hypothesis that MMR causes autism.” But to parents who are more concerned about autism (which they see and read about) than measles (which occurs uncommonly in the United States), this equivocation is hardly reassuring.

Another example of how scientists, respectful of the limits of the scientific method, fail to reassure the public can be found in a 2001 report from the Institute of Medicine (IOM) on the MMR vaccine and autism. This report, written after several excellent studies showed no relationship between the vaccine and the disorder, stated, “The committee notes that its conclusion does not exclude the possibility that MMR vaccine could contribute to autistic spectrum disorder in a small number of children.” Those who wrote this report failed to point out that no study could ever prove MMR didn’t cause autism in a small number of children because the scientific method would never allow it. But parents saw a door left open, and it scared them. Dan Burton picked up on this statement in one of his tirades against the IOM: “You put out a report to the people of this country saying that [the MMR vaccine] doesn’t cause autism and then you’ve got an out in the back of the thing,” screamed Burton. “You can’t tell me under oath that there is no causal link, because you just don’t know, do you?”


ANOTHER CHALLENGE FOR THOSE COMMUNICATING SCIENCE TO the public is explaining the difference between coincidence and causality. Because we’re always looking for reasons for why things happen, this isn’t easy.

When Andrew Wakefield reported the stories of eight children with autism whose parents first noticed problems within one month of their children’s receiving MMR, he was observing something that statistically had to happen. At the time, 90 percent of children in the United Kingdom were getting the vaccine, and one of every 2,000 was diagnosed with autism. Because MMR is given soon after a child’s first birthday, when children first acquire language and communication skills, it was a statistical certainty that some children who got MMR would soon be diagnosed with autism. In fact, it would have been remarkable if that hadn’t happened. But parents of autistic children perceived a simple sequence: their children were fine, got the MMR vaccine, and weren’t fine anymore. (Although most children with autism show problems very early in life, about 20 percent will develop normally and then regress. It was this regression during the second year of life that caused some parents to blame MMR.) “Humans evolved the ability to seek and find connections between things and events in the environment,” says Michael Shermer, author of Why People Believe Weird Things. “Those who made the best connections left behind the most offspring. We are their descendants. The problem is that causal thinking is not infallible. We make connections whether they are there or not.” Physicist Robert Park agrees. “In humans, the ability to discern patterns is astonishingly general,” he said. “Indeed, we are driven to seek patterns in everything our senses respond to. So far, we are better at it than the most powerful computer, and we derive enormous pleasure from it. So intent are we on finding patterns, however, that we often insist on seeing them even when they aren’t there, like constructing shapes from Rorschach blots. The same brain that recognizes that tides are linked to phases of the moon may associate positions of the stars with impending famine or victory in battle.”
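The arithmetic behind that "statistical certainty" can be made concrete. In the sketch below, the 90 percent coverage and the 1-in-2,000 autism rate come from the text; the annual cohort size and the assumption that first parental concern is spread evenly over the year are illustrative guesses of mine, not figures from the book.

```python
# Back-of-the-envelope sketch of why some "autism soon after MMR" cases
# had to occur by chance alone. Cohort size and timing are assumptions.
birth_cohort = 700_000        # rough annual UK birth cohort (assumption)
mmr_coverage = 0.90           # 90 percent vaccinated (from the text)
autism_rate = 1 / 2_000       # 1 in 2,000 diagnosed with autism (from the text)

vaccinated_autistic = birth_cohort * mmr_coverage * autism_rate
print(f"Vaccinated children later diagnosed with autism: {vaccinated_autistic:.0f}")

# If first parental concern were spread uniformly over the year,
# about one-twelfth of those children would show problems within
# a month of the shot, with or without any causal link:
within_a_month = vaccinated_autistic / 12
print(f"Diagnoses within a month of MMR, by coincidence alone: {within_a_month:.0f}")
```

Even under these deliberately rough assumptions, dozens of families each year would see autism emerge within a month of MMR purely by coincidence, which is the point the paragraph above is making.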

For many parents, the association in time between their children’s receipt of vaccines and the appearance of autism is far more convincing than epidemiological studies. That’s because anecdotal experiences can be enormously powerful. Here’s another example. A pediatrician in suburban Philadelphia was preparing a vaccine for a four-month-old girl. While she was drawing the vaccine into the syringe, the child had a seizure lasting several minutes. But imagine what the mother would have thought if the pediatrician had given the vaccine five minutes earlier. No amount of statistical data showing that the risk of seizures was the same in vaccinated or unvaccinated children would have ever convinced her that the vaccine hadn’t caused the seizure. People are far more likely to be swayed by a personal, emotional experience than by the results of large epidemiological studies. “Popular induction depends upon the emotional interest of the instances,” said philosopher Bertrand Russell, “not upon their number.”

Several years ago, a stand-up comedian, imitating a television commercial advertising a book about the occult, showed how hard it can be to distinguish cause from coincidence. Deepening his voice, he said, “A woman in California burns her hand on a stove. Her mother, three thousand miles away, feels pain in the same hand at the same time. Coincidence?” Here he paused for several seconds. “Yes!” he yelled, exasperated. “That’s what coincidence is!”


ANOTHER ASPECT OF THE CURRENT CULTURE THAT MAKES IT DIFFICULT to communicate science is the astonishing prevalence of beliefs rooted in medieval times. “Two hundred years ago educated people imagined that the greatest contribution of science would be to free the world from superstition and humbug,” wrote Robert Park. “It has not happened. Ancient beliefs in demons and magic still sweep across the modern landscape.” According to a Gallup poll conducted in 1991, the statistics are grim. About 50 percent of Americans believe in astrology, 46 percent in extrasensory perception, 19 percent in witches, 22 percent in aliens who have already landed on earth, 33 percent in the lost continent of Atlantis, 41 percent in the notion that dinosaurs and humans lived on earth at the same time (movies haven’t helped with this one), 42 percent in communication with the dead, and 35 percent in ghosts. Thousands of people still flock to Delphi, Greece, every year to gain energy from a place they consider to be the center of the earth (based on the ancient belief that the earth was flat and could therefore have a center on its surface).

This is the lay of the land for scientists trying to explain cause and effect to the public.


YET ANOTHER, MORE SUBTLE, ASPECT OF OUR CULTURE APPEARS throughout the vaccine-autism controversy. Two years after he published his paper in the Lancet claiming that MMR caused autism, Andrew Wakefield published “Measles, Mumps, Rubella Vaccine: Through a Glass Darkly.” The phrase “through a glass darkly” is taken from Saint Paul’s letter to the Corinthians (1 Corinthians 13:12) and refers to man’s imperfect perception of reality. Wakefield’s implication was that science—in this case the science that had claimed MMR was safe before licensure—couldn’t be relied upon to get it right. And Wakefield believed that the scientific studies that continued to absolve MMR couldn’t be trusted either. Wakefield’s capacity to set aside the studies that disproved his theory was based on a belief as powerful as a religious conviction. “He’s very much like my father,” Wakefield’s mother, Bridget, told Brian Deer of the Sunday Times of London. “If he believed in something, he would have gone to the ends of the earth to go on believing.” When Andrew Wakefield first left England, he landed in Melbourne, Florida, with the Good News Doctor Foundation, whose logo features a stethoscope sitting on top of a Bible. The foundation describes itself as “a Christian ministry that provides hope and information on how to eat better and feel better, and minister more effectively as a result of a biblically based, healthy lifestyle.” For Andrew Wakefield, the question of whether MMR caused autism had moved into the realm of faith.

While Andrew Wakefield continues to make religious references as he exhorts listeners to believe his theories, the most prominent religious figure in the vaccine-autism controversy is Lisa Sykes, an associate pastor at the Welborne United Methodist Church in Richmond, Virginia. Sykes, who believes her son’s autism was caused by thimerosal in vaccines, often delivers fiery speeches denouncing scientists at the CDC, FDA, and IOM, calling them “modern day deceivers.” In April 2006, during an anti-vaccine demonstration in Washington, D.C., Sykes led those gathered in prayer “for the greedy and those who love power so much that they would seek profit over safety, and sacrifice children instead of wealth. We pray for those who have surrendered the truth, and government officials who have failed to seek it. These, too, like so many of our injured children, cannot see, they cannot hear, and they remain silent.” In February 2007, the United Methodist Virginia Conference published its Lenten Devotional, in which Sykes interpreted scripture and issued a call to action: “My son is disabled,” she said, “unnecessarily injured by mercury he received in vaccines. Like Abram, we are cast down. The era of administered mercury is the darkest part of the night.”


Protesters hold a sign in support of Andrew Wakefield during a hearing before the General Medical Council on charges of misconduct. “Suffer little children unto us” paraphrases Matthew 19:14, in which Jesus rebukes his disciples for turning away a group of children. GlaxoSmithKline, Merck, and Aventis Pasteur are the pharmaceutical companies that manufacture MMR vaccine for children in the United Kingdom (courtesy of Getty Images).

“I think about symbols,” said Kathleen Seidel, in reference to cleansing the autistic child’s body of mercury. “And there are a lot of powerful symbols that are part of this whole hysteria, the whole concern over vaccines: symbols of purity and defilement and of sin and redemption.” One of the Rescue Angels of Generation Rescue (the organization dedicated to mercury chelation) proclaimed that with chelation, “We’re helping [a child’s] body do what God intended it to do.” Where science and medicine have failed to find a cause or cure for autism, some have put their trust in the certainty, absolutism, and occasional zealotry of Andrew Wakefield, Lisa Sykes, and Mark Geier, people who ask their followers to have unquestioning faith in theories contradicted by scientific evidence.


ANOTHER ASPECT OF OUR CULTURE—AND ONE REASON THE MMR and thimerosal controversies gained immediate attention—is that it’s easy to scare people. For example, in the 1960s and 1970s, rumors that people had put razor blades into apples or poisoned Halloween candy swept across the nation. Everyone believed them. As a consequence, parents insisted that their children eat only prepackaged candy, schools opened their doors so that trick-or-treaters could have a safe environment, and hospitals offered to X-ray candy bags. In their book Made to Stick: Why Some Ideas Survive and Others Die, Chip and Dan Heath examined the widespread belief that trick-or-treaters were at risk. They found that since 1958, no one had ever been harmed by a stranger’s Halloween candy. The urban myth had been spawned by two events. First, a five-year-old boy had overdosed on his uncle’s heroin; to cover his tracks, the uncle put heroin on the child’s candy. Second, a father, in a twisted attempt to collect insurance money, killed his son by sprinkling cyanide on his candy. “In other words,” wrote the Heaths, “the best social science evidence reveals that taking candy from a stranger is perfectly okay. It’s your family you should worry about.”

Although the fear of tainted Halloween candy isn’t based on a single real incident, it hasn’t died, and it probably never will. Both California and New Jersey have passed laws specifically designed to punish candy tamperers. Similarly, laws banning thimerosal-containing vaccines have passed in several states despite clear evidence that these vaccines aren’t harmful. It’s much easier to scare people than to unscare them.


A FINAL CULTURAL ASPECT—AND YET ANOTHER REASON THAT the mercury-in-vaccines controversy stuck—is that it’s easy to appeal to the notion that we live in a sea of poisonous metals, toxic chemicals, and environmental pollutants. To be sure, some toxins in the environment can be quite dangerous. In the United States, high levels of lead in paint caused severe neurological problems in many children. And in Japan, the Minamata Bay disaster showed just how devastating large quantities of mercury can be. But these aren’t typical stories.

For example, the media declared that dioxin, the chemical buried under the Love Canal in upstate New York, caused birth defects and miscarriages; that hexavalent chromium, the chemical used by Pacific Gas and Electric to coat its pipes (and the subject of the movie Erin Brockovich), caused a variety of illnesses from nosebleeds to cancer; that trichloroethylene, the chemical dumped by the W. R. Grace tannery into the local water supply (and the subject of the book and movie A Civil Action), caused a cluster of cancer cases in Woburn, Massachusetts; and that Alar, a pesticide featured on a 60 Minutes program titled “A is for Apple,” caused cancer. None of these stories was supported by subsequent scientific studies. But studies showing that certain chemicals in the environment aren’t harmful are far less compelling than personal testaments, riveting television shows, and blockbuster movies claiming that they are.

Steven Milloy, a graduate of the Johns Hopkins School of Hygiene and Public Health, the author of Junk Science Judo, and the creator of the popular Web site JunkScience.com, laments how the media are attracted to stories that scare people but not to those that reassure them. When Milloy approached Dateline NBC with a story about how fears of small quantities of dioxin were unfounded, he was rebuffed.
“I was interviewed about our [dioxin] study by seemingly interested staff of the television news magazine Dateline NBC,” recalled Milloy. “After about twenty minutes of questions, it finally dawned on the staff person. ‘So, this isn’t a scare story?’ she said. ‘Then my producer won’t be interested.’”

Recently, the comedy team of Penn and Teller filmed a three-minute video for YouTube that showed just how easy it is to appeal to the public’s concern about chemicals in the environment. They sent a friend to a state fair to collect signatures on a petition to ban dihydrogen monoxide. Dihydrogen (two hydrogen atoms) monoxide (one oxygen atom) is H2O—water. The petitioner never lied. She said that dihydrogen monoxide was in our lakes and streams, and now it was in our sweat and urine and tears. We have to put a stop to this, she urged. Enough is enough. By using its chemical name, she was able to collect hundreds of signatures to ban water from the face of the earth.

The media bias toward stories that scare rather than reassure has left the public with a poor understanding of risk. “Hundreds of thousands of deaths a year from smoking is old hat,” writes Michael Fumento in Science Under Siege, “but possible death by toxic waste, now that’s exciting. The problem is [that] such presentations distort the ability of viewers to engage in accurate risk assessment. The average viewer who watches story after story on the latest alleged environmental terror can hardly be blamed for coming to the conclusion that cigarettes are a small problem compared with the hazards of parts per quadrillion of dioxin in the air, or for concluding that the drinking of alcohol, a known cause of low birth weight and cancer, is a small problem compared with the possibility of eating quantities of Alar almost too small to measure. This in turn results in pressure on the bureaucrats and politicians to wage war against tiny non-existent threats. The ‘war’ gets more coverage as these politicians and bureaucrats thunder that the planet could not possibly survive without their intervention, and the vicious cycle goes on.”

As a consequence, people are more frightened by things that are less likely to hurt them. They are scared of pandemic flu but not epidemic flu (which kills more than 30,000 people a year in the United States); of botulism, tsunamis, and plagues but not strokes and heart attacks; of radon and dioxin but not French fries; of flying but not driving; of sharks but not deer; of MMR but not measles; and of thimerosal-containing influenza vaccine but not influenza. During the Alar scare, one mother sent state troopers after a school bus to confiscate an apple she had put in her child’s lunch bag; another called the International Apple Institute to ask if it was safe to pour apple juice down her kitchen drain or if she should take it to a toxic dump site.


THE VACCINE-AUTISM CONTROVERSY HAS SHOWN JUST HOW DIFFICULT it can be to communicate science to the public. Fortunately, during the past few years, many studies have investigated the true causes of autism; ironically, the media’s constant focus on vaccines has made it difficult for the public to hear about them.


