‘According to Science’
Type “according to science” into a search engine, and it spits out more than 42 million results. Here’s a selection of the amazing things “science” has figured out:
- “The Exact Time to Dunk an Oreo, According to Science”
- “Here’s When Puppies are Most Adorable, According to Science”
- “Bald Men are More Attractive, According to Science”
- “Why Cats Look Evil at Night, According to Science”
- “According to Science, Goats Like You More When You Smile”
The words “according to science” embody everything wrong with the way journalists write about research. They give the impression of unanimity when even ostensibly “settled” questions are rarely without controversy. They create the illusion that scientific knowledge is static when it is always in flux. They impart a uniform stamp of authority on work that might be bad science—or not science at all.
A staple of clickbait headlines, the phrase isn’t just found in tabloids and content mills. NPR has gotten in on the action with a story that promises to teach parents “How To Raise Brilliant Children, According to Science.” Even Scientific American can’t resist the urge to use the problematic attribution.
But treating science as a monolith can lead to a lot of mixed messages. One day “science” says coffee is the key to longevity. The next “science” is telling you it will give you cancer.
These contradictory stories are usually reporting on individual papers that yield conflicting results. “According to science” is often used as a more attention-grabbing way for headline writers to say “study finds.”
Single studies are rarely significant in and of themselves. They’re pieces of a larger puzzle. Only after they are put together in the form of a literature review or a meta-analysis is it possible to say what they mean—and even then it’s not always clear.
Some of the more interesting studies that make their way into the news come from the social sciences, which are in the middle of what has been termed a “replication crisis.” A replication project published in August 2018 attempted to reproduce the results of 21 different studies published in two top scientific journals and managed to do so for only 13 (62 percent). Even when a study did replicate, the effect was on average about 50 percent smaller than originally reported.
The situation is even more dire in the field of psychology. A 2015 attempt to mass-replicate 100 psychological experiments was successful only about 40 percent of the time.
Furthermore, much of the fodder for “according to science” articles doesn’t even come from prestigious journals like Nature or Science. A lot of the studies originate from the margins of science in young disciplines that lack rigor and a strong empirical basis.
For example, evolutionary psychology deals in sexy or controversial topics that make for good reading, but the methodology tends to be suspect. In 2007, the New York Times ran a piece titled “Lap-Dance Science” about a study that supposedly found evidence showing strippers earn more money while menstruating.
The study’s co-author, University of New Mexico professor Geoffrey Miller, is loved by the pick-up artist community for his pop psychology books Mate: Become the Man Women Want and The Mating Mind. The Times goes to great lengths to make the study sound rigorous, emphasizing the apparently large amounts of data collected: “[T]hey gathered data via a Web site, where strippers logged in anonymously to provide information about their earnings, productivity and menstrual cycles during 296 work shifts (about 5,300 lap dances).”
But what the article fails to mention is the extremely small sample size. Only 19 strippers participated, which makes the study what statisticians call “low power.” The lower a study’s power, the more likely its positive results are a statistical fluke rather than evidence of a real effect. In the replication project mentioned earlier, increasing the statistical power roughly five-fold drastically reduced or eliminated many of the reported effects.
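To see what “power” means in practice, here is a minimal Monte Carlo sketch. The effect size (Cohen’s d = 0.3), the one-sample design, and the significance threshold are all assumptions for illustration, not details taken from the lap-dance study itself; the sketch simply estimates how often a sample of 19 detects a modest true effect:

```python
import numpy as np
from scipy import stats

# Illustrative only: effect size, design, and alpha are assumptions,
# not taken from the study being discussed.
rng = np.random.default_rng(0)
n, effect, alpha, trials = 19, 0.3, 0.05, 10_000

hits = 0
for _ in range(trials):
    # Simulate n measurements whose true mean is `effect` (in SD units).
    sample = rng.normal(loc=effect, scale=1.0, size=n)
    # Test whether the sample mean differs from zero.
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:
        hits += 1

print(f"Estimated power: {hits / trials:.0%}")
```

Under these assumptions the test detects the effect only about 20 percent of the time; the conventional benchmark for an adequately powered study is 80 percent.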
While most of the examples used thus far were at the very least published in peer-reviewed journals, other “according to science” articles are based on the work of amateurs or even just raw data.
Whenever Ashley Madison, a site where married people go to seek affairs, releases a survey, it inevitably becomes the subject of fluff articles about the “science of cheating.” Some take the survey’s job data and announce which professions are “more likely” to cheat. IT workers always land near the top of the list.
But anyone with a basic level of training in the social sciences wouldn’t interpret this as evidence that tech nerds are necessarily more prone to cheat than anyone else. It’s just as likely that they are overrepresented in the survey because tech-minded cheaters might be more apt to use a website for their affairs.
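A toy simulation makes the selection problem visible. Every number below, including the site-adoption rates and the cheating rate, is invented for illustration; the point is only that identical behavior can look wildly unequal in a self-selected sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented figures: every profession cheats at the same true rate,
# but tech workers are assumed to be likelier to use a website for it.
site_adoption = {"IT": 0.60, "Finance": 0.30, "Teaching": 0.20}
true_cheating_rate = 0.10
population = 100_000  # people per profession

for job, adoption in site_adoption.items():
    cheaters = rng.binomial(population, true_cheating_rate)
    in_survey = rng.binomial(cheaters, adoption)
    print(f"{job:8s} true cheaters: {cheaters:6d}   visible in survey: {in_survey:5d}")

# IT dominates the survey even though the underlying rate is identical,
# which is the overrepresentation the fluff articles mistake for evidence.
```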
Another article, titled “Ultimate Brewery Tour of the U.S., According to Science,” is based not on research of any kind but on a blogger’s algorithm for finding the optimum route to visit the country’s 70 highest-rated breweries.
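Route planning like this is a version of the classic traveling salesman problem. The article doesn’t say what method the blogger used, but a minimal nearest-neighbor sketch, with made-up coordinates, captures the general idea:

```python
import math

# Made-up coordinates; the blogger's actual data and method are unknown.
breweries = {
    "A": (40.7, -74.0),
    "B": (41.9, -87.6),
    "C": (34.1, -118.2),
    "D": (47.6, -122.3),
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_route(stops, start):
    """Greedy heuristic: always travel to the closest unvisited stop."""
    route, remaining = [start], set(stops) - {start}
    while remaining:
        nearest = min(remaining, key=lambda s: dist(stops[route[-1]], stops[s]))
        route.append(nearest)
        remaining.remove(nearest)
    return route

print(nearest_neighbor_route(breweries, "A"))  # ['A', 'B', 'C', 'D']
```

Heuristics like this produce good routes, not guaranteed-optimal ones.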
Given that most of these topics are trivial and harmless, the reader may ask: So what? A few sensationalist stories might not cause much damage, but there’s a latent danger in debasing the word “science.” In epistemology, the philosophy of knowledge, science and journalism are called “ways of knowing.” Managing the relationship between the two is vital to how we understand the world around us.
Science and journalism both aspire to describe natural and social phenomena as they are, but they diverge in important ways with regard to how they achieve this. Under greater political and commercial pressures, journalism is the more likely of the two to stray from the ideal of objectivity.
The motivation to sell more newspapers or generate more clicks can change the relative weight given to the different elements of newsworthiness, such as public interest, entertainment value and impact. The last one is crucial because media professionals and researchers define impact in different ways.
Reporters are driven to make their stories as impactful as possible, and when writing about science, this often means exaggerating findings well beyond their actual value.
And so the meaning of science is flattened in the press. A paper cited once is no different than one cited 2,000 times. The opinions of the 3 percent of scientists who think climate change isn’t manmade are just as valid as those of the 97 percent who think it is. The research of a Nobel Prize-winner is the same as the back-of-envelope calculations of some anonymous blogger. It’s all “science.”
If all research appears equally valid to the public, then it’s only a matter of picking and choosing what one wants to believe—and often that’s whatever flatters one’s existing worldview. When people can’t come to a consensus about the basic nature of reality, the result is political and social polarization, with each side thinking their own view is the truth, “according to science.”