
What is the replication crisis?

Updated: 2025-07-17 23:31:26


Suggested Sources


https://www.speakandregret.michaelinzlicht.com/p/the-collapse-of-ego-depletion


AI Answers




🤖 ChatGPT’s Answer (Using Suggested Sources)

The replication crisis (also called the reproducibility crisis) refers to a significant problem in science: a large number of studies have proven difficult or impossible to replicate, calling into question the reliability of their results. In 2005, epidemiologist John Ioannidis famously argued that most published research findings could actually be false, due to factors like small study sizes, biases, and flexible analyses (pmc.ncbi.nlm.nih.gov). A few years later, a series of failures to replicate high-profile findings (especially in psychology) led scientists to warn of a “replication crisis” in research. The term reflects growing concerns that many published results cannot be reproduced by independent researchers, undermining confidence in those findings. By 2016, even the broader scientific community acknowledged the problem: a Nature survey of 1,576 researchers found that 52% agreed there is a “significant” crisis of reproducibility in science (www.nature.com).

Empirical evidence of the crisis: In a landmark 2015 project, the Open Science Collaboration attempted to replicate 100 psychology experiments from top-tier journals; the replicators were able to reproduce the original results in less than half of the cases (www.science.org). Specifically, whereas 97% of the original studies had statistically significant findings, only 36% of the replication attempts achieved significant results – and the replicated effect sizes were on average roughly half the magnitude of the originals (www.science.org). Similarly sobering results have appeared in other fields. For example, pharmaceutical researchers at Amgen tried to reproduce 53 “landmark” cancer biology studies but succeeded in only 6 cases (~11% reproducibility) (brokenscience.org). Replicability problems have also been documented in economics, neuroscience, and other disciplines. Consistent with these patterns, many scientists report personal experience with failed replications: in the same 2016 survey, more than 70% of researchers said they had tried and failed to reproduce another scientist’s experiments (and over half had failed to reproduce their own results) (www.nature.com). These findings suggest that a substantial portion of published results may be unreliable or exaggerated.

Causes of the replication crisis: Researchers have identified several contributing factors. One major issue is publication bias – the tendency for journals to publish positive, novel findings while leaving negative or inconclusive results unpublished. This bias skews the literature toward exciting outcomes, making it appear that effects are more robust than they really are. In addition, many scientists have engaged in questionable research practices that inflate false positives. Examples include p-hacking (analyzing data in many ways until a significant result appears), HARKing (forming hypotheses after results are known), and selective reporting of only favorable experiments. Such practices can produce striking findings that are actually spurious. As Ioannidis noted, studies are less likely to report true results when they use small samples, have flexible designs or analyses, or are conducted amid strong biases and competition – conditions that were quite common in many fields (pmc.ncbi.nlm.nih.gov). Low statistical power (insufficient data) in many studies meant that even true effects could be missed, while false positives could slip through. All of these factors – combined with pressure on researchers to publish frequently and produce novel findings – created fertile ground for non-replicable results in the literature.
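To make the p-hacking mechanism concrete, here is a minimal simulation sketch (illustrative only, not drawn from any of the cited studies); it assumes a hypothetical researcher who measures five outcomes per experiment and reports whichever one reaches p < 0.05, even though no true effect exists:

```python
# Illustrative sketch: testing several outcomes and reporting any significant
# one inflates the false-positive rate far beyond the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_outcomes, n_per_group = 10_000, 5, 30

false_positives = 0
for _ in range(n_studies):
    # The null is true: both groups are drawn from the same distribution.
    group_a = rng.normal(size=(n_per_group, n_outcomes))
    group_b = rng.normal(size=(n_per_group, n_outcomes))
    pvals = stats.ttest_ind(group_a, group_b, axis=0).pvalue
    if (pvals < 0.05).any():  # "p-hack": report whichever outcome came out significant
        false_positives += 1

print(f"False-positive rate with 5 outcomes tested: {false_positives / n_studies:.1%}")
# Expect roughly 1 - 0.95**5 ≈ 23%, versus 5% for a single pre-specified test.
```

Preregistering a single primary outcome removes exactly this kind of flexibility, which is one reason the reforms described below emphasize it.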

Responses and perspectives: The recognition of the replication crisis has prompted a range of reforms and some debate about its implications. On one hand, the scientific community has embraced open science practices to improve reproducibility. Many journals and institutions now encourage steps like preregistering study designs (to prevent HARKing), sharing data and code, conducting replication studies, and using more rigorous statistical criteria. These initiatives aim to make research more transparent and robust. Some experts frame this trend in a positive light. For example, sociologist Daniele Fanelli argues that the narrative of science “in crisis” is overblown, suggesting that scientific quality is not in free fall and that recent reproducibility efforts represent a healthy evolution of scientific practice rather than a disaster (www.pnas.org). Likewise, psychologist Simine Vazire prefers to describe the changes as a “credibility revolution” instead of a crisis, emphasizing how improved methodologies and openness are strengthening the credibility of research (plato.stanford.edu). In sum, there is broad agreement that reproducibility issues are real and must be addressed, but scholars differ in whether they view the situation as a dire crisis or as a constructive self-correction that will ultimately make science more reliable.

Sources:

  1. John P. A. Ioannidis (2005) – “Why Most Published Research Findings Are False.” Ioannidis argues that a variety of biases and methodological weaknesses (such as small sample sizes, flexible analyses, and publication bias) make it likely that many published research claims are false (pmc.ncbi.nlm.nih.gov). This landmark paper was an early warning about unreliable results in research, laying groundwork for the concerns that later fueled the replication crisis. Link: PLoS Medicine – ncbi.nlm.nih.gov

  2. Monya Baker (2016) – “1,500 scientists lift the lid on reproducibility.” Baker, a Nature journalist, surveyed 1,576 researchers about reproducibility. Over 70% reported failing to reproduce another scientist’s results, and about half had failed to reproduce their own experiments (www.nature.com). Notably, 52% of scientists characterized the situation as a “significant crisis”, although most still expressed trust in the published literature (www.nature.com). Baker’s article also notes that prior analyses found only ~40% of psychology studies and ~10% of cancer biology studies could be replicated (www.nature.com), highlighting the scope of the problem. Link: Nature News – nature.com

  3. Open Science Collaboration (2015) – “Estimating the reproducibility of psychological science.” In this large-scale study led by Brian Nosek, 270 researchers attempted to replicate 100 findings from high-ranking psychology journals. The project found that less than half of the original findings could be successfully reproduced. Only 36% of the replication attempts yielded statistically significant results (versus 97% of the original studies), and replication effect sizes were much smaller on average (www.science.org). This landmark paper provided concrete evidence of a reproducibility problem in psychology and spurred renewed calls for methodological reform. Link: Science – science.org

  4. C. Glenn Begley & Lee M. Ellis (2012) – “Drug development: Raise standards for preclinical cancer research.” Begley and Ellis reported that Amgen scientists could replicate just 6 of 53 “landmark” studies in preclinical cancer biology (∼11% success rate) (brokenscience.org). They highlighted serious flaws in research practices – attributing the low reproducibility to poorly designed studies, publication pressures, and lack of methodological rigor – and called for higher standards in methodology and transparency. This finding showed that the replication crisis was not confined to psychology; it also affects biomedical research. Link: Nature commentary – brokenscience.org

  5. Daniele Fanelli (2018) – “Is science really facing a reproducibility crisis, and do we need it to?” Fanelli contends that the narrative of a widespread “crisis” in science may be exaggerated (www.pnas.org). He presents evidence that the quality and integrity of research have not decayed as drastically as some fear, and he argues for a more optimistic interpretation of recent changes. According to Fanelli, today’s emphasis on replication and transparency is an empowering improvement to scientific practice rather than a sign of imminent collapse. Link: PNAS (Opinion) – pnas.org

  6. Simine Vazire (2018) – “Implications of the Credibility Revolution for Productivity, Creativity, and Progress.” Vazire, a psychologist, calls the ongoing reforms a “credibility revolution” rather than a crisis. She highlights that the push for transparency, replication, and higher standards is strengthening the credibility of science. This view emphasizes the optimistic side of the replication crisis – focusing on how the scientific community is actively correcting itself and improving research practices (plato.stanford.edu). Link: Perspectives on Psychological Science (as cited in Stanford Encyclopedia) – plato.stanford.edu


🤖 Gemini’s Answer (Using Suggested Sources)

The replication crisis, also known as the reproducibility crisis, is an ongoing methodological crisis primarily affecting the social and medical sciences [1, 6]. It refers to the finding that the results of many published scientific studies are difficult or impossible to reproduce when other researchers attempt to replicate the original experiment. This failure to replicate key findings has cast doubt on the reliability of a significant body of scientific literature and has prompted calls for widespread reform of scientific practices [4].

The crisis gained widespread attention following a landmark 2015 study published in Science. In this project, a large team of researchers attempted to replicate 100 prominent experimental and correlational studies from top psychology journals. They found that while 97% of the original studies had reported statistically significant results, only 36% of the replication attempts did [1]. This discrepancy highlighted a systemic problem. Subsequent analyses and models suggest a high rate of false or exaggerated claims; one model, for example, estimated that approximately 75% of psychology claims may be false [2]. While this specific number is a high-end estimate, other analyses confirm that factors like publication bias and low statistical power have historically created an environment where many published findings are likely to be false discoveries [7].

The problem is not confined to psychology. The medical field faces a similar crisis, with numerous clinical trials being deemed untrustworthy due to flawed or fabricated data [6]. This issue is particularly severe in areas like Alzheimer’s research, where allegations of data fraud in foundational studies have potentially misdirected billions of dollars in funding and decades of research, delaying the search for a cure [3].

Key Causes of the Replication Crisis:

  • Publication bias: journals strongly favor novel, statistically significant results, so null findings rarely enter the literature [7].
  • Questionable research practices: p-hacking, selective reporting, and hypothesizing after the results are known can turn statistical noise into apparently solid findings [2, 7].
  • Low statistical power: small samples make both false positives and exaggerated effect sizes more likely [1, 7].
  • Fraud and fabricated data: rarer, but especially damaging in medicine [3, 5, 6].

Prominent Examples:

The crisis is well-illustrated by the re-examination of famous theories. Stereotype threat, once a celebrated finding in social psychology, has been weakened by replication failures and new meta-analyses [4], and ego depletion, a widely believed and taught effect, collapsed under large-scale, pre-registered replication attempts [8].

The consequences of the replication crisis are severe. It erodes public trust in science, wastes taxpayer money on flawed research, and, in the medical field, can lead to dangerous treatments and a staggering death toll from scientific lies [3, 5]. However, the crisis has also sparked a major reform movement within science focused on transparency and rigor, including practices like pre-registering study hypotheses and methods, sharing data openly, and placing a higher value on direct replication studies.

Sources

  1. Estimating the Reproducibility of Psychological Science – Science
    This foundational 2015 paper by the Open Science Collaboration provides the primary empirical evidence for the replication crisis. The authors attempted to replicate 100 psychology studies and found that the rate of significant findings in the replications (around 36%) was far lower than in the original studies (97%), demonstrating a systemic problem with reproducibility in the field.

  2. ~75% of Psychology Claims Are False – Unsafe Science (Substack)
    This article argues for a high rate of false claims in psychology. The author synthesizes findings from replication projects and applies a statistical model to estimate that the “false discovery rate” in psychology could be as high as 75%, painting a stark picture of the crisis’s scale.

  3. The Long Shadow of Fraud in Alzheimer’s Research – The New York Times
    This opinion piece illustrates the real-world consequences of the crisis beyond social science. The author argues that major fraud allegations in Alzheimer’s research have derailed the field for years, wasting billions of dollars and giving false hope, thereby highlighting how scientific misconduct can impede progress on critical health issues.

  4. Revisiting Stereotype Threat: A Reckoning for Social Psychology – Michael Inzlicht
    Written by a prominent researcher in the field, this essay provides an insider’s view of the crisis through the lens of a specific theory, stereotype threat. Inzlicht describes how replication failures and new meta-analyses have weakened the evidence for this once-celebrated concept, forcing the field to confront its methods and past certainties.

  5. The Staggering Death Toll of Scientific Lies – Vox
    This article focuses on the most severe consequences of scientific misconduct. The author connects fraudulent research directly to human harm and death, making a powerful case that research fraud is not a victimless crime and should be treated with greater seriousness.

  6. Medicine Is Plagued by Untrustworthy Clinical Trials. How Many Studies Are Faked or Flawed? – Nature
    This report demonstrates that the replication crisis is a major problem in medicine. The author details the work of researchers who uncover faked or fatally flawed data in clinical trials (“zombie trials”), showing that a significant portion of the medical literature may be unreliable and potentially harmful.

  7. Are Most Published Research Findings False? Trends in Statistical Power, Publication Selection Bias, and the False Discovery Rate in Psychology (1975–2017) – PLOS One
    This research article provides a quantitative historical analysis of the causes of the replication crisis in psychology. The authors find that while statistical power in studies has slowly increased over time, strong publication bias in favor of positive results has persisted, creating conditions where a high rate of false findings is expected.

  8. The Collapse of Ego Depletion – Michael Inzlicht’s blog
    In this essay, researcher Michael Inzlicht details the fall of another major psychological theory, “ego depletion.” He describes it as a canonical example of a widely believed and taught scientific idea that crumbled when subjected to large-scale, rigorous, and pre-registered replication attempts.


🤖 Grok’s Answer (Using Suggested Sources)

The replication crisis refers to a widespread methodological and systemic issue in scientific research, particularly in fields like psychology, medicine, and social sciences, where a significant proportion of published findings cannot be reliably reproduced or replicated by independent researchers [1][2][7]. This crisis highlights problems with the reliability, validity, and trustworthiness of scientific claims, often stemming from factors such as poor study design, statistical misuse, publication biases, and in some cases, outright fraud [3][5][6]. It gained prominence in the early 2010s but has roots in earlier critiques, and it has prompted calls for reforms like preregistration of studies, open data sharing, and increased emphasis on replication efforts [1][4][8].

Origins and Key Evidence

The term “replication crisis” emerged prominently following large-scale efforts to test the reproducibility of established findings. A landmark study in 2015 by the Open Science Collaboration attempted to replicate 100 experiments from high-profile psychology journals and found that only 36% of the replications produced statistically significant results (and only about 39% were judged to have reproduced the original effect), with effect sizes often much smaller [1]. This suggested that many “statistically significant” findings might be false positives or exaggerated due to issues like low statistical power (i.e., small sample sizes that fail to detect true effects reliably) and selective reporting [7]. Extending this, analyses have estimated that up to 75% of claims in psychology could be false, driven by publication biases where journals favor novel, positive results over null or negative ones [2][7]. For instance, a 2023 study reviewing trends in psychology from 1975 to 2017 found persistent low statistical power (around 33% on average) and high false discovery rates, meaning many published “discoveries” are likely untrue [7].
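As a rough, back-of-the-envelope illustration of how low power feeds a high false discovery rate, the sketch below combines the ~33% average power figure quoted above with the usual α = 0.05 threshold; the share of tested hypotheses that are actually true (pi) is not reported in these sources, so a few assumed values are shown:

```python
# Expected false discovery rate (FDR) among "significant" results, given
# power, alpha, and an assumed prior probability that the tested effect is
# real. Power of 0.33 is taken from the text above; pi is an assumption.
alpha, power = 0.05, 0.33

for pi in (0.10, 0.25, 0.50):
    true_hits = power * pi          # real effects correctly detected
    false_hits = alpha * (1 - pi)   # null effects that still reach p < 0.05
    fdr = false_hits / (true_hits + false_hits)
    print(f"P(effect is real) = {pi:.2f} -> expected FDR = {fdr:.0%}")
# Publication bias then selects mostly from these "significant" results,
# so a large share of published discoveries can be false even without fraud.
```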

The crisis is not limited to psychology. In medicine, similar issues plague clinical trials, with estimates suggesting that 18% or more of studies may be faked or fatally flawed due to data fabrication, plagiarism, or inadequate oversight [6]. High-profile cases of fraud, such as manipulated images in Alzheimer’s research papers that influenced billions in funding and drug development, illustrate how non-replicable or fraudulent findings can persist and mislead entire fields for years [3][5]. Vox has highlighted the “staggering death toll” from such scientific lies, noting instances where flawed research led to harmful medical practices, like unnecessary surgeries or ineffective treatments, potentially contributing to thousands of deaths [5].

Contributing Factors

Several interconnected factors fuel the crisis:

  • Publication bias, in which journals favor novel, positive results and leave null findings unpublished [2][7].
  • Questionable research practices such as p-hacking and selective reporting, which inflate false-positive rates [2][7].
  • Low statistical power and small samples, which produce unreliable and exaggerated effects [1][7].
  • Weak transparency and oversight, which make errors and fabricated data hard to detect [6].
  • Outright fraud in a minority of cases, as in the Alzheimer’s and clinical-trial examples above [3][5][6].

Impacts and Responses

The crisis erodes public trust in science and has real-world consequences, such as misguided policies, wasted funding (e.g., over $28 billion annually in the U.S. on irreproducible biomedical research [10]), and harm to patients [5][6]. In psychology, it has led to a “reckoning,” with fields like social psychology undergoing self-correction through meta-analyses and replication initiatives [4][8]. Responses include the adoption of open science practices, such as the Center for Open Science’s guidelines for transparency [1], and calls for journals to prioritize replications [2]. However, challenges persist, as fraud detection remains inconsistent, and some fields lag in reforms [3][6].

While the crisis is most discussed in “soft” sciences, it affects all empirical fields to varying degrees, underscoring the need for rigorous, transparent methods to ensure scientific progress [7][9].

Sources

[1] Estimating the Reproducibility of Psychological Science – Science: Authors from the Open Science Collaboration present empirical evidence of low reproducibility in psychology (only 39% of studies replicated), viewing it as a wake-up call for methodological reforms. Link

[2] ~75% of Psychology Claims Are False – Unsafe Science (Substack): The author argues that statistical biases and low power make most psychology claims unreliable, estimating a 75% false rate and advocating for skepticism toward unverified findings. Link

[3] The Long Shadow of Fraud in Alzheimer’s Research – The New York Times: The opinion piece details how fraudulent data in Alzheimer’s studies cast a long shadow, eroding trust and delaying cures, with the author emphasizing the need for accountability in biomedical research. Link

[4] Revisiting Stereotype Threat: A Reckoning for Social Psychology – Michael Inzlicht: Inzlicht critiques the stereotype threat effect as largely non-replicable, viewing it as emblematic of social psychology’s replication issues and calling for humility and better methods. Link

[5] The Staggering Death Toll of Scientific Lies – Vox: The article highlights deadly consequences of fraudulent or irreproducible research, arguing that scientific misconduct contributes to preventable harm and requires stricter penalties like jail time. Link

[6] Medicine Is Plagued by Untrustworthy Clinical Trials. How Many Studies Are Faked or Flawed? – Nature: The authors estimate high rates of faked or flawed medical trials, viewing this as a systemic plague that undermines evidence-based medicine and urging better detection and transparency. Link

[7] Are Most Published Research Findings False? Trends in Statistical Power, Publication Selection Bias, and the False Discovery Rate in Psychology (1975–2017) – PLOS One: Researchers document persistent biases in psychology, concluding that many findings are false due to low power and selection effects, and recommend reforms to reduce false discoveries. Link

[8] The Collapse of Ego Depletion – Michael Inzlicht’s blog: Inzlicht describes the failure of ego depletion to replicate, seeing it as part of psychology’s broader crisis and promoting open science to rebuild credibility. Link

[9] Why Most Published Research Findings Are False – PLOS Medicine (additional source): Ioannidis provides a theoretical framework explaining why biases lead to false findings, foundational to understanding the replication crisis across sciences. Link

[10] Rescuing US Biomedical Research from Its Systemic Flaws – PNAS (additional source): Authors estimate economic costs of irreproducibility, viewing the crisis as systemic and proposing policy changes for sustainability. Link


🤖 ChatGPT’s Answer (Without Using Suggested Sources)

Overview: The replication crisis (or reproducibility crisis) refers to a pervasive problem where many scientific studies cannot be replicated by independent researchers, calling into question the reliability of those findings. In science, a result isn’t truly solid until it can be repeated under the same conditions, so widespread failures to replicate undermine confidence in published research (www.ebsco.com) (iiab.me). The crisis gained prominent attention in the early 2010s after several high-profile findings – especially in psychology and social sciences – fell apart when retested. For example, a famous 2011 study claimed evidence for “precognition” (people ostensibly sensing future events), but other researchers could not reproduce its results, casting doubt on such extraordinary claims (www.ebsco.com). In 2015, a large collaborative project tried to replicate 100 psychology experiments published in top journals: over two-thirds of those replications failed to get the same result as the original studies (www.ebsco.com). Similar patterns have been observed elsewhere. In fields like biomedicine, independent checks of important findings have often failed: one analysis of landmark cancer biology papers found only ~11% could be confirmed by follow-up experiments (www.nature.com). In a 2016 survey of 1,576 scientists (across chemistry, biology, medicine, physics, psychology, etc.), over 70% admitted they had tried and failed to reproduce another scientist’s results, and ~50% even failed to reproduce their own results. More than half of those surveyed agreed that science faces a significant “reproducibility crisis” (www.nature.com) (www.nature.com). These revelations sparked widespread concern because they suggest that a portion of published findings – even in leading journals – might not be as reliable as assumed.

Causes of the Crisis: Researchers have scrutinized why so many studies don’t replicate, and they’ve uncovered a combination of systemic issues in how science is conducted and published. Key factors include:

  • Publication bias: journals prefer positive, novel results, so null results and failed experiments rarely enter the literature.
  • Questionable research practices: p-hacking, selective reporting, and HARKing (hypothesizing after the results are known) let spurious findings reach statistical significance.
  • Low statistical power: small samples mean true effects are easily missed and chance fluctuations are easily mistaken for real effects.
  • Perverse incentives: “publish or perish” pressure rewards quantity and novelty over careful, replicable work.
  • Limited transparency: incomplete reporting of methods, data, and analysis plans makes independent verification difficult.

All these factors together create a perfect storm: many published findings were never rigorously checked or were pushed through biased analyses, so it’s not surprising that when scientists later try to replicate those experiments with more careful methods, a large proportion fail to confirm the original claims (plato.stanford.edu) (plato.stanford.edu). The crisis, therefore, isn’t usually that scientists are deliberately cheating; it’s that the traditional way of doing science left too much room for bias, error, and one-off results to slip into the literature as “facts.”
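A small simulation can make the “perfect storm” concrete. The sketch below assumes (purely for illustration) a modest true effect of d = 0.3 studied with only 20 participants per group, and a literature that records only statistically significant results; selection then inflates the published effect sizes, which is why careful replications tend to find much smaller effects:

```python
# Illustrative sketch of publication bias plus low power ("winner's curse").
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_d, n_per_group, n_studies = 0.3, 20, 5_000

published_effects = []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    if stats.ttest_ind(treatment, control).pvalue < 0.05:   # publication filter
        published_effects.append(treatment.mean() - control.mean())

print(f"True effect size: {true_d}")
print(f"Share of studies reaching significance (power): {len(published_effects) / n_studies:.0%}")
print(f"Mean published effect: {np.mean(published_effects):.2f}  (inflated by selection)")
```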

Efforts to Improve Reproducibility: The upside of the replication crisis is that it has triggered a reform movement in science. In response to the alarm, researchers, institutions, and journals are implementing changes to increase transparency and reliability. Key initiatives include:

  • Preregistration and Registered Reports, which lock in hypotheses and analysis plans before the data are collected.
  • Open sharing of data, code, and materials, so that others can verify and build on published analyses.
  • Large-scale (often multi-lab) replication projects that directly re-test influential findings.
  • Statistical reform, with less reliance on bare p-values and more emphasis on effect sizes, uncertainty, and rigorous evidence thresholds.

Notably, the scrutiny of this “crisis” has given birth to the field of metascience (or meta-research) – the scientific study of scientific practice itself (britac.co.uk). Metascience researchers empirically examine issues like reproducibility, publication bias, and methodology across disciplines, providing evidence to guide reforms. In a sense, science is turning the lens on itself to understand where things go wrong and how to fix them.

Perspectives and Ongoing Debate: There is broad consensus that improving reproducibility is crucial, but experts differ on how severe the problem truly is and how to characterize this period of change. One view is that the replication crisis exposed serious, widespread flaws in research that must be addressed to restore credibility. This camp often uses the language of crisis and has been instrumental in pushing for reforms. For example, Ioannidis’s 2005 paper bluntly argued that most published findings might be false, highlighting how pervasive the problems could be (journals.plos.org) (journals.plos.org). Many psychologists and biologists have since acknowledged a kind of reckoning: longstanding practices were producing lots of unreliability, so substantial change is needed to rebuild trust in the literature.

Another view is more nuanced, suggesting that claims of “science in crisis” are overstated or too sweeping. Some scholars argue that while reproducibility issues exist, they may be confined to certain fields or a subset of studies rather than infecting all of science (www.pnas.org) (www.pnas.org). For instance, a 2018 analysis by Daniele Fanelli questioned the doomsday narrative: it pointed out that a majority of published results in many areas might still be valid, that the prevalence of faulty research varies between subfields, and that there isn’t clear evidence that things have been getting worse over time (www.pnas.org) (www.pnas.org). Fanelli and others also note that science has a self-correcting nature – the very fact that we’re discovering irreproducible results and discussing them is a sign that science is working as it should, weeding out errors (www.nature.com) (www.nature.com). From this perspective, using the term “crisis” for all of science could be counterproductive or too pessimistic. They prefer to emphasize the positive steps being taken and to note that many findings do replicate (for example, the Nature survey found most scientists still trust the bulk of literature in their field, despite the concerns (www.nature.com) (www.nature.com)).

Indeed, a third perspective reframes the situation not as an existential crisis, but as a “credibility revolution” in science. Rather than viewing reproducibility problems as a disaster, many see the current era as an opportunity to greatly strengthen scientific practice. This outlook, championed by leaders of the open-science movement, stresses the improvements in openness, rigor, and collaboration that have emerged in response to past failures (pmc.ncbi.nlm.nih.gov) (pmc.ncbi.nlm.nih.gov). For example, psychologist Simine Vazire suggested that the reforms sparked in psychology constitute a credibility revolution – a fundamental shift toward transparency and reliability that will ultimately increase confidence in research findings (plato.stanford.edu) (plato.stanford.edu). In 2023, a group of researchers even argued that the so-called replication “crisis” has led to positive structural changes in how science is done – from better training and statistical methods to more inclusive, multi-center studies – which will have long-term benefits for the research community (pmc.ncbi.nlm.nih.gov) (pmc.ncbi.nlm.nih.gov). According to this view, the heavy scrutiny on replication is a sign of a healthy scientific process correcting itself, and while it may be challenging, it’s not a cause to despair about science – it’s a reason to improve science.

In summary, the replication crisis has been a wake-up call that shook several fields into evaluating their practices. It revealed that many published results were less solid than we thought. In response, scientists are changing how they work – improving methods, encouraging transparency, and double-checking important findings more often. There’s active debate about the extent of the problem, but nearly everyone agrees on its core message: replicability is crucial for science, and by confronting this challenge head-on, researchers can ensure that scientific knowledge is built on more reliable foundations for the future (www.news-medical.net) (plato.stanford.edu).

Sources:

  1. Cait Caffrey (2024) – “Replication crisis” – EBSCO Research Starters (Science). This source defines the replication crisis as the difficulty of reproducing many scientific results and notes it became prominent in the early 2010s after multiple high-profile studies (e.g. a claim of precognition) failed to replicate. It explains that replication is a cornerstone of scientific validity and describes a 2015 effort which found over two-thirds of tested psychology findings did not hold up. Caffrey discusses suspected causes (questionable research practices, small sample sizes, lack of data access) and proposed solutions like pre-registering studies, larger samples, and open data. URL: EBSCO

  2. Monya Baker (2016) – “1,500 scientists lift the lid on reproducibility” – Nature News. Baker reports on a large survey about the reproducibility “crisis” in science. Notably, 70% of surveyed researchers said they had failed to reproduce another scientist’s experiments (and ~50% failed to reproduce their own work). 52% of respondents characterized the situation as a significant crisis. The article also highlights concrete evidence of the problem: previous analyses found that only around 40% of psychology studies and ~10% of cancer-biology studies could be replicated. Baker explores scientists’ mixed attitudes — many acknowledge the problem while still trusting literature overall — and frames the reproducibility issue as a broad, cross-disciplinary challenge now coming to light. URL: Nature

  3. John P. A. Ioannidis (2005) – “Why Most Published Research Findings Are False” – PLoS Medicine 2(8): e124. In this influential essay, Ioannidis uses statistical arguments to claim that a majority of published research results might be false positives. He identifies factors that undermine reliability: small study sizes, tiny effect sizes, flexibility in study designs and analyses, researcher bias, and the sheer number of hypotheses tested. His simulations and theoretical model suggest that under typical conditions (with prevalent biases and errors), the probability that a research claim is true can be quite low. This provocative paper sounded an early alarm about systemic issues in scientific research; it essentially anticipated the replication crisis by arguing that many findings won’t replicate because they were never true results to begin with. URL: PLoS Medicine

  4. Daniele Fanelli (2018) – “Is science really facing a reproducibility crisis, and do we need it to?” – Proceedings of the National Academy of Sciences (PNAS) 115(11): 2628–2631. Fanelli’s article challenges the narrative of a universal replication crisis. He acknowledges that reproducibility and integrity issues exist, but presents evidence that these problems likely affect a minority of published studies and are not worsening over time. The paper argues that the “science is broken” crisis rhetoric is exaggerated; for instance, surveys indicate outright data fraud is rare (~1–2% of scientists admit to falsification), and not all fields are equally affected by replication difficulties. Fanelli suggests that a more accurate and inspiring narrative is one of improvement and empowerment: science is undergoing reforms to address issues, rather than spiraling into dysfunction. His view provides a counter-balance, emphasizing that most science remains sound and that recent initiatives are strengthening research reliability. URL: PNAS

  5. Stanford Encyclopedia of Philosophy (2018) – “Reproducibility of Scientific Results” (entry by Leonelli & Fidler et al.). This comprehensive overview breaks down what the replication (reproducibility) crisis entails and its implications. It outlines multiple facets of the problem: the historical lack of published replication studies, evidence of widespread publication bias, the high prevalence of questionable research practices (like p-hacking and selective reporting) that inflate false findings, and deficiencies in transparency (incomplete reporting of methods and data). The entry also describes the rise of the open science reform movement designed to combat these issues – for example, encouraging data sharing, preregistration of studies, and journals accepting null results and replications. Notably, the authors mention that some researchers prefer the term “credibility revolution” to highlight the positive changes (improved methods, more openness) sparked by the crisis, rather than focusing on pessimistic aspects. This source provides a balanced, scholarly summary of the crisis and the ongoing reforms. URL: Stanford Encyclopedia of Philosophy

  6. Max Korbmacher et al. (2023) – “The replication crisis has led to positive structural, procedural, and community changes” – Communications Psychology 1, Article 3. Korbmacher and colleagues present a Perspective piece that reframes the replication crisis in a constructive light. They argue that what has been called a crisis is actually driving a “credibility revolution” in science. The paper outlines how large-scale replication projects (which initially revealed low reproducibility rates) became a catalyst for significant improvements in research culture and practice. These authors highlight reforms such as better training in statistics and methodology, new publication practices (like Registered Reports), collaborative multi-lab studies, and greater diversity and inclusion in research efforts. The overall view expressed is optimistic: the challenges exposed by the replication crisis are being met with concerted action, leading to long-term structural changes that will markedly improve the trustworthiness of scientific research. URL: Communications Psychology via PMC

🤖 Gemini’s Answer (Without Using Suggested Sources)

The replication crisis, also known as the replicability crisis or reproducibility crisis, is an ongoing methodological crisis in science where researchers have found that the results of many scientific studies are difficult or impossible to replicate or reproduce when the original study’s methodology is repeated [1, 3]. This has led to growing concerns about the credibility and robustness of published scientific research, particularly in fields like social psychology, medicine, and economics.

Replication is a fundamental tenet of the scientific method. If a finding is real and robust, an independent researcher should be able to follow the same procedures and get a similar result. The failure to do so undermines the original claim and suggests that the original finding may have been a statistical fluke, the result of flawed methodology, or influenced by factors not reported in the original paper [2, 5].

Key Evidence and Milestones

The crisis gained widespread attention through a series of high-profile events and large-scale studies:

  1. Theoretical Foundations: In 2005, epidemiologist John Ioannidis published a seminal paper titled “Why Most Published Research Findings Are False” [2]. He argued mathematically that, due to a combination of low statistical power, publication bias, and a focus on novel findings, a majority of published results in the medical literature were likely to be false positives.

  2. Large-Scale Replication Projects: The most influential empirical evidence came from the Open Science Collaboration, which in 2015 published the results of a massive effort to replicate 100 psychology studies from top journals. They found that only 36% of the replications produced statistically significant results, compared to 97% of the original studies that did [1]. Furthermore, the average effect size of the replicated studies was less than half the magnitude of the original findings.

  3. High-Profile Failures: Several famous findings that had become staples of textbooks and popular science failed to replicate robustly. A well-known example is the “power posing” research, which claimed that adopting expansive body postures could change hormone levels and risk-taking behavior. While the original study was highly influential, subsequent, more rigorous replications failed to reproduce the key physiological effects [4].

Contributing Causes of the Crisis

The replication crisis is not attributed to a single cause but rather a systemic set of research and publication practices that create perverse incentives for researchers [3, 4].

Proposed Solutions and the Movement for Open Science

The recognition of the replication crisis has spurred a significant reform movement within science aimed at improving transparency, rigor, and credibility. Key solutions include:

  1. Preregistration and Registered Reports: Researchers specify their hypothesis, methods, and analysis plan before collecting data and submit it to a public repository or a journal. In a Registered Report, this plan is peer-reviewed, and if accepted, the journal commits to publishing the results regardless of the outcome. This directly combats p-hacking, HARKing, and publication bias [4].
  2. Open Science Practices: There is a growing push for researchers to make their data, analysis code, and materials publicly available. This allows other scientists to verify the findings, re-analyze the data, and more easily conduct direct replications [1, 4].
  3. Promoting and Funding Replication: Institutions and funding bodies are beginning to place a higher value on replication studies. Initiatives like the Open Science Collaboration and journals dedicated to publishing replications have made this work more visible and impactful [1].
  4. Statistical Reform: Many advocate for moving away from a sole reliance on p-values and instead emphasizing effect sizes (how large is the effect?), confidence intervals (what is the range of uncertainty?), and Bayesian statistical methods, which can provide a more nuanced picture of the evidence [5].
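As a small illustration of the statistical-reform point above (hypothetical data; standard NumPy/SciPy calls only), the sketch below reports an effect size with an approximate confidence interval alongside the p-value, rather than the p-value alone:

```python
# Report an estimate and its uncertainty, not just "p < 0.05".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(0.4, 1.0, 40)   # hypothetical groups with means 0.4 vs 0.0
control = rng.normal(0.0, 1.0, 40)

t_stat, p_value = stats.ttest_ind(treatment, control)

n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd        # Cohen's d

# Large-sample approximation to the standard error of d, for a rough 95% CI.
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
ci_low, ci_high = d - 1.96 * se_d, d + 1.96 * se_d

print(f"p = {p_value:.3f}, Cohen's d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```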

In conclusion, the replication crisis is a critical self-examination of scientific practices. While it has challenged the trustworthiness of many published findings, it has also catalyzed a powerful movement toward a more transparent, rigorous, and ultimately more reliable science. This process is often viewed as a healthy sign of science’s capacity for self-correction [5].


Sources

  1. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science.
    • Author’s View: This large-scale, collaborative paper provides the primary empirical evidence for the replication crisis in psychology. By attempting to replicate 100 prominent studies, the authors demonstrate that a significant majority (over 60%) failed to replicate successfully, providing a stark, data-driven wake-up call to the scientific community.
    • URL: https://www.science.org/doi/10.1126/science.aac4716
  2. Ioannidis, J. P. A. (2005). Why most published research findings are false.
    • Author’s View: Ioannidis presents a theoretical and mathematical argument that systemic issues—such as small study sizes, small effect sizes, publication bias, and a high degree of flexibility in research designs and analysis—logically lead to a situation where most published claims are likely to be incorrect. This paper was a foundational and provocative precursor to the empirical replication studies that followed.
    • URL: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
  3. The Economist. (2013). How science goes wrong.
    • Author’s View: This article provides a journalistic overview of the replication crisis for a broader audience. It synthesizes the problem by focusing on the perverse incentives in academia (“publish or perish”) and questionable research practices (like p-hacking) that contribute to a body of literature filled with unreliable findings. It frames the issue as a systemic problem with how science is managed and funded.
    • URL: https://www.economist.com/leaders/2013/10/19/how-science-goes-wrong
  4. Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results.
    • Author’s View: Nosek and Lakens, both prominent figures in the open science movement, champion a concrete solution to the crisis: Registered Reports. They argue that this publication format, where studies are accepted for publication based on the rigor of their methods before results are known, effectively eliminates publication bias and QRPs like p-hacking and HARKing, thereby improving the credibility of the scientific record.
    • URL: https://royalsocietypublishing.org/doi/10.1098/rsos.140459
  5. Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream.
    • Author’s View: The authors provide a nuanced perspective, arguing that a single failed replication does not automatically invalidate an original finding. Instead, they call for a more systematic approach to replication, where failures help researchers understand the boundary conditions of an effect. They frame the crisis not as a sign that science is broken, but as an opportunity to strengthen its self-correcting mechanisms by making replication a routine and valued part of the scientific process.
    • URL: https://journals.sagepub.com/doi/full/10.1177/2515245918755458

🤖 Grok’s Answer (Without Using Suggested Sources)

The replication crisis refers to a widespread methodological and systemic issue in scientific research, particularly in fields like psychology, medicine, economics, and social sciences, where a significant proportion of published studies cannot be successfully replicated by independent researchers. This phenomenon undermines the reliability of scientific findings and has prompted calls for reforms in research practices. Below, I provide a comprehensive overview, including its definition, historical context, causes, notable examples, impacts, and ongoing responses.

Definition and Historical Context

The replication crisis is characterized by the failure to reproduce the results of previously published experiments or studies when they are repeated under similar conditions. Replication is a cornerstone of the scientific method, serving as a check on validity and generalizability [1]. The crisis gained prominence in the early 2010s, but concerns date back earlier. For instance, epidemiologist John Ioannidis highlighted in 2005 that statistical and methodological flaws could render most published research findings false, especially in fields with flexible designs and small effects [2]. The term “replication crisis” became widely used around 2011 following high-profile replication failures in psychology, such as Daryl Bem’s study on precognition, which could not be replicated [3].
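The core of Ioannidis’s argument can be written compactly. If R is the prior odds that a tested relationship is true, 1 − β the study’s power, and α the significance threshold, then in the no-bias case the probability that a statistically significant finding is actually true (the positive predictive value) is

$$\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}.$$

This falls below 1/2 whenever R < α / (1 − β), so in fields that test many long-shot hypotheses with low power, most significant findings are expected to be false even before bias and analytic flexibility are factored in.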

A pivotal moment was the 2015 Reproducibility Project: Psychology, led by the Open Science Collaboration, which attempted to replicate 100 prominent psychology studies and found that only 36% of the replications produced statistically significant results, with roughly 39% judged to have successfully reproduced the original findings [4]. Similar issues have since been documented in other disciplines, including cancer biology (where replication rates were around 50% in a 2018 study) and economics [5].

Causes

Several interconnected factors contribute to the replication crisis:

  1. Publication Bias and “File Drawer” Problem: Journals tend to favor novel, positive results over null or negative findings, leading researchers to suppress non-significant results. This creates a skewed literature where only “successful” studies are visible [2].

  2. P-Hacking and Questionable Research Practices (QRPs): Researchers may manipulate data analysis (e.g., selectively reporting outcomes, stopping data collection when results are favorable, or excluding outliers) to achieve statistical significance (typically p < 0.05). This inflates false positives without outright fraud [1][3].

  3. Small Sample Sizes and Low Statistical Power: Many studies use underpowered samples, making results prone to chance fluctuations and less likely to replicate. Ioannidis noted that in fields with small effect sizes, low power exacerbates false discoveries [2]. (A rough sample-size calculation illustrating this point is sketched after this list.)

  4. Lack of Transparency: Incomplete reporting of methods, data, and analysis plans makes replication difficult. Incentives in academia (e.g., “publish or perish”) prioritize quantity over rigor [4].

  5. Field-Specific Issues: In psychology and social sciences, human behavior’s variability and contextual factors (e.g., cultural differences) add challenges, unlike more controlled fields like physics [5].
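To see why underpowered studies are so common, the sketch below uses the textbook normal-approximation formula (illustrative numbers, not taken from the sources) to estimate how many participants per group a two-sample comparison needs for 80% power at α = 0.05; the required sample grows rapidly as the expected effect shrinks:

```python
# Approximate per-group sample size for a two-sample comparison
# (normal approximation to the power of a t-test).
from scipy.stats import norm

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size_d) ** 2

for d in (0.8, 0.5, 0.2):   # conventional "large", "medium", "small" effects
    print(f"d = {d}: about {n_per_group(d):.0f} participants per group")
# d = 0.8 -> ~25 per group; d = 0.5 -> ~63; d = 0.2 -> ~393.
```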

Notable Examples

  • Daryl Bem’s 2011 precognition experiments, whose extraordinary findings could not be reproduced by independent researchers.
  • The 2015 Reproducibility Project: Psychology, in which most of the 100 replicated studies failed to reproduce the original results [4].
  • Preclinical biomedicine, where independent attempts to confirm landmark findings have frequently failed, prompting calls for higher standards [5].

Impacts

The crisis has eroded public trust in science, with implications for policy, healthcare, and education. For example, unreliable findings have influenced flawed interventions, such as in mental health treatments or economic policies [3]. It has also wasted resources: Billions in research funding may support non-replicable work [2]. Broader effects include skepticism toward expertise (e.g., during the COVID-19 pandemic) and calls for reevaluating tenure and funding systems [5]. However, it has also spurred positive change by highlighting the need for better science [4].

Responses and Reforms

Efforts to address the crisis emphasize “open science” practices:

  • Preregistration of hypotheses and analysis plans, and registered reports that are accepted before results are known [3].
  • Open sharing of data, code, and materials so that findings can be independently verified [4].
  • Dedicated replication efforts and greater willingness by journals to publish replications and null results [4].
  • Better statistical training and reporting, with more attention to power, effect sizes, and uncertainty [1][2].

While progress is being made, challenges remain, such as resistance from established researchers and the slow pace of cultural change [1].

In summary, the replication crisis reveals deep flaws in how science is conducted and disseminated but also offers an opportunity for improvement. Addressing it requires collective action to prioritize reproducibility over sensationalism.

Sources

[1] Baker, M. (2016). This Nature news feature explains the replication crisis as a systemic issue driven by incentives and QRPs, emphasizing the need for transparency reforms. 1,500 scientists lift the lid on reproducibility

[2] Ioannidis, J. P. A. (2005). The author argues that most published findings are false due to biases, low power, and flexibility in analyses, laying early groundwork for understanding the crisis. Why Most Published Research Findings Are False

[3] Camerer, C. F., et al. (2018). This study views the crisis as evident in the experimental social sciences, including economics, through low replication rates, advocating for better experimental standards and preregistration. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015

[4] Open Science Collaboration. (2015). The authors document the crisis in psychology via a large-scale replication effort, showing low reproducibility and calling for open science practices. Estimating the reproducibility of psychological science

[5] Begley, C. G., & Ioannidis, J. P. A. (2015). They express that the crisis extends to biomedicine with poor replication in preclinical research, urging systemic changes like improved training and validation. Reproducibility in Science: Improving the Standard for Basic and Preclinical Research