"In research, “statistical significance” is the odds-based method used to tell whether a finding is likely real or just chance. When a study claims p = 0.05, it’s essentially saying, “We’re 95% confident this is real.”
Since the early 20th century, when Sir Ronald Fisher formalized it, statistical significance has been central to research credibility. Journals prefer it. Publishers know their readers trust it. Researchers know their careers depend on it.
Ordak points out that many scientists admit they worry non-significant results will hurt their chances of publication. This pressure leads to a dangerous cycle: researchers shape their methods—sometimes inappropriately—to produce significant results.
Some even defend flawed analysis in peer review simply because it “got significant results.” This isn’t just cutting corners. It’s gaming the system.
But the numbers tell a darker story:
* Cancer research: Amgen tried to replicate 53 high-profile cancer studies; only 6 held up. Bayer reported similar results in 2011, failing to replicate 75% of the cancer biology studies it tested. (Naik, 2011; Errington et al., 2021)
* Psychology: Of 307 prominent findings, only 64% replicated at all, and those that did showed effect sizes about a third smaller than first reported. (Nosek et al., 2021)
* Neurology & disease research: 100 potential ALS drugs that once showed promise failed entirely in repeat trials. In spinal cord injury research, only 2 of 12 replications validated the original findings: one weakly, and one only under special circumstances. (Errington et al., 2021)
* Overall: A machine learning analysis of 40,000 psychology papers suggested only about 40% might replicate. John Ioannidis famously concluded that “most claimed research findings are false.” (Youyou et al., 2023; Ioannidis, 2005)
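Ioannidis’s conclusion is not just rhetoric; it falls out of base-rate arithmetic. Here is a back-of-the-envelope sketch of that logic in Python, where the specific prior, power, and alpha values are illustrative assumptions rather than his exact figures.

```python
def positive_predictive_value(prior: float, power: float, alpha: float) -> float:
    """Chance that a statistically significant finding reflects a real effect,
    given the prior probability that a tested hypothesis is true, the study's
    power, and the significance threshold alpha."""
    true_hits = prior * power          # real effects correctly detected
    false_hits = (1 - prior) * alpha   # null effects flagged by chance
    return true_hits / (true_hits + false_hits)

# Illustrative inputs: 10% of tested hypotheses are true, power is a
# typical 50%, and alpha is the conventional 0.05.
ppv = positive_predictive_value(prior=0.10, power=0.50, alpha=0.05)
print(f"PPV: {ppv:.2f}")  # ~0.53: nearly half of "significant" findings are false
```

Layer publication bias and flexible analysis on top of that base rate, and the share of false positives only grows.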
And here’s the kicker: a 2021 UC San Diego study (Serra-Garcia & Gneezy, 2021) found that papers that can’t be replicated go on to accumulate, on average, 153 more citations than papers that can, apparently because they’re “interesting.” In other words, the more sensational your finding, the more attention you get, even if it’s wrong.
Max Planck put it best:
“Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve.”
CEH