
Estimating the false discovery risk of (randomized) clinical trials in medical journals based on published p-values

Result identifiers

  • Result code in IS VaVaI

    RIV/67985807:_____/23:00576101 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985807%3A_____%2F23%3A00576101)

  • Result on the web

    https://dx.doi.org/10.1371/journal.pone.0290084

  • DOI - Digital Object Identifier

    10.1371/journal.pone.0290084

Alternative languages

  • Language of the result

    English

  • Title in the original language

    Estimating the false discovery risk of (randomized) clinical trials in medical journals based on published p-values

  • Description of the result in the original language

    Many sciences are facing a crisis of confidence in published results [1]. Meta-scientific studies have revealed low replication rates, estimates of low statistical power, and even reports of scientific misconduct [2]. Based on assumptions about the percentage of true hypotheses and the statistical power to test them, Ioannidis [3] concluded that most published results are false. This prediction has proven difficult to test. First, large-scale replication attempts [4–6] are inherently expensive and focus only on a limited set of pre-selected findings [7]. Second, studies of meta-analyses have revealed that power is low, but they rarely lead to the conclusion that the null hypothesis is true [8–16] (but see [17, 18]). So far, the most promising attempt to estimate the false discovery rate has been Jager and Leek's [19] investigation of p-values in medical journals. They extracted 5,322 p-values from abstracts of medical journals and found that only 14% of the statistically significant results may be false positives. This is a sizeable percentage, but it is inconsistent with the claim that most published results are false. Although Jager and Leek's article was based on actual data, it had relatively little impact on discussions of false-positive risks, possibly because of several limitations of their study [20–23]. One problem with their estimation method is the difficulty of distinguishing between true null hypotheses (i.e., the effect size is exactly zero) and studies with very low power, in which the effect size may be very small but not zero. To avoid this problem, we do not estimate the actual percentage of false positives but rather the maximum percentage that is consistent with the data. We call this estimate the false discovery risk (FDR). To estimate the FDR, we take advantage of Sorić's [24] insight that the false discovery risk is maximized when the power to detect true effects is 100%.
    In this scenario, the false discovery rate is a simple function of the discovery rate (i.e., the percentage of significant results). Thus, the main challenge for empirical studies of the FDR is to estimate the discovery rate when selection bias is present and inflates the observed discovery rate. To address the problem of selection bias, we developed a selection model that provides an estimate of the discovery rate before selection for significance. The method section gives a detailed account of our method and compares it to Jager and Leek's [19] approach.
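Sorić's bound mentioned in the abstract can be sketched in a few lines (a hypothetical illustration, not the authors' published code): if true effects are detected with 100% power, the maximum false discovery rate follows directly from the observed discovery rate and the significance level alpha.

```python
def soric_fdr(discovery_rate: float, alpha: float = 0.05) -> float:
    """Soric's upper bound on the false discovery rate.

    Assumes 100% power for true effects, so every true hypothesis
    yields a significant result; the bound then depends only on the
    share of significant results (discovery_rate) and alpha.
    """
    if not 0 < discovery_rate <= 1:
        raise ValueError("discovery_rate must lie in (0, 1]")
    return min(1.0, (1 / discovery_rate - 1) * alpha / (1 - alpha))

# A discovery rate of 30% at alpha = .05 caps the FDR at about 12%.
print(round(soric_fdr(0.30), 3))  # 0.123
```

A discovery rate of 5% (all nulls) drives the bound to 1.0, while a discovery rate of 100% drives it to 0, which is why estimating the pre-selection discovery rate is the central challenge the abstract describes.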

Classification

  • Type

    J(SC) - Article in a periodical indexed in the SCOPUS database

  • CEP field

  • OECD FORD field

    10103 - Statistics and probability

Result linkages

  • Project

  • Linkages

    I - Institutional support for the long-term conceptual development of a research organisation

Other

  • Year of implementation

    2023

  • Data confidentiality code

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific to the result type

  • Journal title

    PLoS ONE

  • ISSN

    1932-6203

  • e-ISSN

    1932-6203

  • Volume

    18

  • Issue within the volume

    8

  • Publisher's country

    US - United States of America

  • Number of pages

    12

  • Pages from-to

    e0290084

  • Article UT WoS code

  • Result EID in the Scopus database

    2-s2.0-85169230576