Chit Chat Across the Pond Lite logo

CCATP #601 – Dr. Maryanne Garry on Why Science is Broken

Dr. Garry, self-proclaimed Crusher of Dreams, joins us again on Chit Chat Across the Pond, this time to talk about how she thinks science is broken and why. Dr. Garry is a professor of Psychology at the University of Waikato and also a professor at the NZ Institute of Security & Crime Science. In this episode, she takes on how science journals choose which papers to publish, and the flaws she sees in that process that cause extraordinary claims to be published rather than work with less flashy premises. She backs up her thoughts with specific research that has reviewed how well these claims of amazing success pan out over time.

Please follow Dr. Garry on Twitter at @drlambchop.

mp3 download

Rough notes for the discussion with links to sources:

BACKGROUND

It seems like every week or so, we read another headline or hear another story about some “discovery,” such as:
a. “Up to 25 cups of coffee a day still safe for your heart”
b. “Fish oil reduces [whatever]”
c. “Power posing makes you feel powerful”

These stories come from various places:
—the press releases scientists and their universities send to the media,
—journals that publish a specific paper and want to publicize those findings.

And it’s not just about coffee or fish oil or posture. You can find the same kinds of stories about the results of clinical drug trials, or how your brain responds to some situation, or how coconut oil is going to save you, or how carbohydrates are bad, or how you’re a nicer person when you hold a warm beverage than a cold one… whatever.

…and then maybe a few years later, you hear: oh wait, coffee… just kidding. Fish oil? Uh, we were wrong. This clinical drug trial? Didn’t pan out.

In the late 1980s, John Ioannidis (pronounced you-need-ees), a medical researcher, started to wonder about this roller coaster ride. He showed pretty clearly that the whole landscape, the whole structure that surrounds research, inadvertently rewards the wrong things. It rewards surprising results, results that would have a big financial payoff for the university or drug company. It rewards scientists who publish many high-profile papers in the “top tier” marquee journals. Those are the papers the media covers most, and the papers that other scientists and doctors and clinical psychologists and various practitioners read the most.

But if you step back and think about it, science is not splashy. When you stand on the shoulders of giants and build on their work, most of science is probably incremental. You have a hypothesis, you make a prediction, because it’s the next logical step. It isn’t usually surprising. That doesn’t mean it’s not cool.

Also, think it through: when you find out you’re wrong, it’s not great, right? Science is supposed to be self-correcting. But the people who do science are… people. People don’t like to say “oh wow, you know that really surprising finding I just published in a really prestigious journal? It was wrong.” And the really prestigious journal doesn’t like to advertise “oh hey, that paper? Nope. Wrong.”

So Ioannidis focused on a set of articles that had been published over 10 years in medical journals and found that, in light of further research, 41% of them were wrong or exaggerated. But of course, the further research doesn’t make as big a splash as the original research.

And that, Ioannidis showed, is just the tip of the iceberg. He wrote a well-known paper called “Why Most Published Research Findings Are False” (show notes at 1), a real nerd-out paper.

So: Ioannidis wasn’t the first one to voice this concern, but he did it so effectively that he set in motion what we now sometimes call the “replication crisis” in science.
—-
In 2012, scientists followed up on 53 “landmark” cancer studies and found that only 11% of them held up. [show notes at 2]
——-
In 2018, scientists followed up on a big bunch of high-profile papers from psychology and economics published in Nature or Science… the “saw off your arm to get your paper published in there” journals. They showed only about 2/3 of those findings were reproducible.
—-
How did we get here? Science isn’t broken…but the system has moved away from the fundamental principles of science.
—science is self-correcting only if scientists correct the science.

What happens now is that research is so expensive, so high-stakes, that if you do some research and don’t get the results you expect, there are often real risks to your reputation. Sometimes to your job.

Around the world, universities have been put under so much pressure to justify their existence, to bring in external grants to fund what legislators used to fund, that it’s completely unsurprising the pressure trickles down to those of us who work at universities.

—you can be promoted or not based on, among other things, the kinds of journals you publish in and how much external grant money you bring in.

—in the US, if you’re at an early stage of your career, you can lose your job if you fail on these counts. So let’s put that another way: you’ve got 6 years to prove yourself as a good scientist… but the criteria your university applies to measure how good a scientist you are… are the very criteria that helped get us to this crisis in scientific research.

—So if you do a study, and you don’t get the results you expected, now all of a sudden journals don’t care. They don’t want to hear “I had this cool idea for surprising findings but it didn’t work… so hey.” Now what do you do instead if you’re a scientist? You think “Well, there must be something in this dataset that would be interesting. I can’t WASTE these data.” There’s even a fear that it’s unethical to spend taxpayer money and waste the time of the people who take part in our studies… and then just toss the study in the trash can.

BUT we also know that one of the principles of science is that you have to generate your hypotheses ahead of time. You can’t generate hypotheses after you know your results.

Also, the system as it stands encourages scientists to be protective of their methods, data, and materials… all the things that do not help the scientific community advance the science.

FIXES
—Create and then lodge an analysis plan that specifies, ahead of time, “here are my hypotheses; here are the analyses we’ll do.”
—Make your data, research plan, and materials available to other scientists so they can check your work.

My colleague Simine Vazire says if you’re a scientist, you have to keep asking yourself this question: “How will I know if I am wrong?” One of the best ways to answer this question is to, as she says, “give your critics the ammunition they need to show you that you’re wrong.” That’s a hard thing to do when the stakes are so high.

We also have to change the system of incentives and rewards so that we don’t make rock stars out of the people who publish an unrealistic number of papers and an implausible number of those “holy cow” surprising findings, and instead go back to focusing on solid research… recognising that much of science is slow and incremental and not going to drive a lot of traffic to your university website or Buzzfeed or whatever.

Reference Papers:
