
Selective publication


  1. Selective Publication & Drug Efficacy: Don't Believe Everything You (Don't) Read
     Erick H. Turner, M.D., Dept. of Psychiatry, OHSU; Center for Ethics in Health Care, OHSU; Staff Psychiatrist, Portland VA Med Center; Former FDA Reviewer
  2. Selective publication
     • Definition: when publication of research results depends on their nature and direction
     • Who's responsible? Negative or inconclusive results are less likely to be either…
       – submitted by authors, or
       – accepted by editors and reviewers
     • Synonyms: the file drawer problem; publication bias
     • Can apply to entire trials or to outcomes within those trials
  3. Consequences of publication bias
     • Efficacy: "grade inflation" -- the Lake Wobegon effect -- all the treatments are above average
     • Safety: harms concealed or played down (Vioxx, Avandia, many more)
     • Risk-benefit ratio skewed: overenthusiasm about treatment options
     • Scientific method undermined: evidence-b(i)ased medicine -- it's not the truth!
  4. My disillusionment
     • Before the FDA: journal articles reported only positive findings
     • At the FDA: lots of negative trials -- "What gives, boss?"
     • Learned that FDA review documents are publicly available
       – older drugs: via "paper FOIA"
       – newer drugs: on the FDA website
  5. Overview of our study (N Engl J Med. 2008 Jan 17;358(3):252-60)
     • FDA reviews of 12 antidepressants, starting with Prozac
     • Cohort of 74 trials registered with the FDA
     • Track each study into the published literature
     • Two questions:
       – Was the study published?
       – If published, how did the published results compare with the FDA results?
  6. Literature search
     • Databases searched
       – by the study authors: PubMed, Cochrane Central Register of Controlled Trials
       – by a research librarian (AH): Ovid Medline, Cochrane
     • Search terms: drug name in title; 'placebo' in title or abstract
     • Excluded: articles not focused on the drug's efficacy; articles "bundling" multiple trials (we sought stand-alone articles); conference abstracts
     • Also checked references of review articles
     • Matching parameters: drugs used (incl. active control), doses used, N by dose group, duration, PI name
  7. Positive vs. not-positive studies, according to peer-reviewed journal articles
  8. Positive vs. not-positive studies
  9. How were the FDA-positive studies published?
  10. Differential handling of positive vs. negative studies
  11. Breakdown by drug -- journal view
  12. Breakdown by drug -- FDA view
  13. 11 spun trials
  14. 11 pigs with lipstick
  15. Various colors of lipstick used (a few of the non-primary methods used)
      • Handling of dropouts (6 cases): "Because of the slow onset of action, endpoint [LOCF] analyses were considered not appropriate for this dataset."
      • "Efficacy subset" of patients meeting additional criteria (4 cases)
      • Single site chosen from multicenter studies (2 cases)
      • Inclusion of an FDA-disallowed site (1 case)
      Detailed by study in the web appendix to the article.
  16. Limitations of the P-value approach
      • P values tell you only how likely the effect is to be zero or nonzero
        – NS does not mean no effect (Jacob Cohen's article: "The Earth Is Round (p < .05)")
        – NS: the drug is often still numerically better than placebo
      • No information on how big the effect is
        – a drug with P < .0001 is not "stronger" than one with P < .05
      • Affected by sample size (N)
        – small N + big effect → nonsignificant
        – big N + trivial effect → significant
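The sample-size point on slide 16 can be made concrete with a little arithmetic: for two equal groups, the t statistic implied by a standardized effect size d is roughly d·√(n/2), so statistical significance tracks N as much as it tracks the effect itself. A minimal Python sketch (the function name and the example numbers are mine, chosen purely for illustration):

```python
import math

def approx_t(d, n_per_group):
    """Two-sample t statistic implied by a standardized effect size d
    (Cohen's d) with n_per_group subjects per arm and equal variances:
    t = d * sqrt(n/2). Values beyond ~1.96 reach p < .05 (normal approx.)."""
    return d * math.sqrt(n_per_group / 2)

# Small N + big effect -> nonsignificant: d = 0.8 with 10 per arm
print(round(approx_t(0.8, 10), 2))    # ~1.79, short of 1.96

# Big N + trivial effect -> significant: d = 0.1 with 2000 per arm
print(round(approx_t(0.1, 2000), 2))  # ~3.16, well past 1.96
```

The same drug effect can thus "win" or "lose" on P value alone depending on how many patients were enrolled, which is exactly why NS does not mean no effect.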
  17. Effect-size approach
      • Tells you the magnitude of the effect -- more relevant to clinical significance
      • Other advantages
        – not affected by N
        – can also convey statistical significance (like P) via the confidence interval around the effect size
      • Meta-analysis lets you combine studies into a "big picture"
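To illustrate the effect-size-plus-CI idea from slide 17, here is a sketch of Cohen's d with an approximate 95% confidence interval, using the standard large-sample standard error; the trial numbers below are invented for illustration only:

```python
import math

def cohen_d_ci(mean1, mean2, sd_pooled, n1, n2, z=1.96):
    """Cohen's d with an approximate 95% CI, using the large-sample
    standard error SE = sqrt((n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)))."""
    d = (mean1 - mean2) / sd_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical trial: 10-point mean improvement on drug vs. 7 on placebo,
# pooled SD of 6, 100 patients per arm.
d, (lo, hi) = cohen_d_ci(10, 7, 6, 100, 100)
print(round(d, 2), round(lo, 2), round(hi, 2))  # 0.5 0.22 0.78
```

A CI that excludes zero conveys the same pass/fail information as P < .05, but unlike a bare P value it also shows how big (or clinically trivial) the effect is.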
  18. Effect size: point estimates only (no CIs)
  19. Is this just an issue with antidepressants?
  20. Selective publication is widespread
      • At least 40 indications throughout medicine (McGauran et al. Trials. 2010;11:37)
      • Government-sponsored research -- not just industry!
        – NIH-sponsored trials (Dickersin & Min. Online J Curr Clin Trials. 1993 Apr 28;Doc No 50)
        – Canadian Institutes of Health Research (Chan et al. CMAJ. 2004 Sep 28;171(7):735-40)
      • Basic research (Liebeskind et al. Neurology. 2006;67(6):973-9)
      • CAM trials (Ernst E. J Clin Epidemiol. 2007 Nov;60(11):1093-4)
      • Social science
        – qualitative research (Petticrew M et al. J Epidemiol Community Health. 2008 Jun;62(6):552-4)
        – economics research (Doucouliagos. Journal of Economic Surveys. 2005)
        – psychotherapy research…
  21. Psychotherapy effect size vs. study quality (Cuijpers et al. Psychol Med. 2010;40(2):211)
      • Overall ES = 0.68 (N = 115 studies → 178 treatment conditions)
      • Divided by study quality (8 criteria: treatment delivery to optimize treatment efficacy; methodological sources of bias)
      • Low quality: ES = 0.74 (0.65-0.84)
      • High quality: ES = 0.22 (0.13-0.31)
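The pooled effect sizes quoted on slide 21 come from meta-analysis; a minimal fixed-effect (inverse-variance) pooling sketch looks like this, with three entirely hypothetical studies as input (the effect sizes and standard errors are made up for illustration, not taken from any real dataset):

```python
import math

def pooled_effect(effects, ses):
    """Fixed-effect (inverse-variance) pooling: each study's effect size
    is weighted by 1/SE^2; returns the pooled ES and its standard error."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se_pooled = 1 / math.sqrt(sum(weights))
    return pooled, se_pooled

# Three hypothetical studies: precise studies pull the pooled estimate harder.
es, se = pooled_effect([0.7, 0.3, 0.5], [0.20, 0.15, 0.25])
ci = (es - 1.96 * se, es + 1.96 * se)  # 95% CI around the pooled estimate
print(round(es, 2), round(se, 2))  # 0.45 0.11
```

Splitting the input studies into low- and high-quality subsets and pooling each separately is how a comparison like the 0.74 vs. 0.22 contrast above is produced.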
  22. EBM or Sa-EBM*? Which evidence? Whose evidence? (*selectively available evidence…)
  23. How can this be?
      • Is the hallowed peer-review system not everything we've been told?
      • Why the discrepancies between the FDA and journal articles?
        – peer review → loopholes
        – FDA → loopholes closed
  24. Information flow at the FDA: before the study
      • FDA gets the sponsor's full protocol up front, before the 1st patient is recruited
        – FDA learns that a study is to be done, so the sponsor can't later pretend it never happened
        – FDA knows the "devilish" details of the analysis, etc., and can "pull rank" on the sponsor re the plan
      • Primary outcomes declared up front: the basis for a "win" or a "loss"
      • Secondary outcomes = exploratory only
  25. Information flow at the FDA: after the study
      • Receives study reports for all studies: summary statistics and raw data
      • Reviews the original protocol + methods
      • Re-analysis with prespecified methods → replication
      • Written up in a review document, posted on the web if the drug is approved
  26. Information flow in the peer-reviewed system
      • Conceive the study
      • Write the protocol (for your eyes only)
        – plan for many scales, etc.: danger of multiplicity (shotguns seldom miss)
        – one outcome should be primary (the hypothesis)
      • Get regulatory approval
      • Collect data
      • Analyze data
        – try out alternate methods p.r.n.
        – torture the data until it confesses
      • Then…
  27. Write it up -- or not!
      • If you choose to (nonpublication)
      • When you choose to (delayed publication)
      • How you choose to (outcome reporting bias)
        – few journals ask for the original protocol: an "honor system"
        – the system invites HARKing
  28. System invites HARKing*: Hypothesizing After the Results are Known
      • Selective reporting of outcomes within studies
      • Decide -- enlightened by the data -- which of the methods you used were flawed vs. valid
      • Decide which results do and don't merit emphasis in the manuscript
      • Look like a genius!
      • Like betting on a horse race after the race
      * Kerr NL. HARKing: Hypothesizing After the Results are Known. Pers Soc Psychol Rev. 1998;2(3):196-217.
  29. Incentives for HARKing & selective publication
      • Financial: increase drug sales
      • Academic / career: tenure; reputation as "successful"; grants
        – pilot data must "show promise" to persuade review committees to invest in the proposed study
      • Psychological
        – hindsight bias ("I knew it all along!")
        – need to be right (no one wants to be a 'loser')
        – narrative flow / satisfying story line
      • Hot stuff (journals & press)
      • Social proof: "Everybody has always done it this way, so it must be fine."
      • And no real disincentives
  30. Problems with the current peer-review system
      • The reader ("consumer")
        – trusts the system
        – does not see unpublished studies
        – assumes emphasized results are consistent with the original hypothesis
      • Journals
        – editors / reviewers unaware of the original hypothesis
        – usually do NOT get and read the original protocol → an "honor system"
        – beholden to their impact factors: publish studies 'likely to change clinical practice'; maybe they like coverage by major media?
  31. Do they really mean this?
      32. Reviewers biased by whether results are significant
      • In-press study (presented at the Peer Review Congress, 2009)
      • Phony manuscripts circulated for review, with methodological errors sprinkled through all of them
      • Two versions of the results:
        – significant results → errors missed → methods judged fine
        – nonsignificant results → errors caught → methods judged flawed
  33. Policy fixes and limitations
      • Registration: ICMJE (2005) requires registering a trial if you want to publish
        – but publishing is optional!
        – can't verify results
        – declaring primary outcomes is optional
        – primaries can be vague ("depression rating scale") → HARKing
      • Posting of results by pharma companies
        – only goes back so far (often 2004)
        – awareness? doctors still rely on the peer-reviewed literature
        – not a disinterested 3rd party
      • FDAAA of 2007
        – registry and results database (clinicaltrials.gov)
        – only affects future drugs: today's drugs are grandfathered in and not covered
  34. Public Library of Science Medicine, Oct 2004: a still-underutilized goldmine of clinical trial data
  35. Proposal: enhance freedom of information
      • Make FDA reviews easily accessible
        – newer ones are online but not user-friendly
        – older ones are available only via "paper FOIA" (redundant requests → inefficiency)
        – make it free: existing "services" charge $$ for what the taxpayer has already paid for
      • Too much work for one research group: others could contribute the FDA reviews they obtain
      • Set the record straight for all drug classes
  36. Model resource: DIDA (Drug Industry Document Archive)
      • Hosted by the UCSF library
      • Neurontin litigation documents
      • Searchable: OCR'd and discoverable by Google
  37. Our version of DIDA (nascent and unfunded): starting with FDA reviews that are not available for download from the FDA website
      http://drl.ohsu.edu/cdm4/index_fdadrug.php
  38. Conclusions / recap
      • Don't believe everything you DON'T read: how many trials are you NOT seeing?
      • Don't believe everything you DO read: results could be spun (methods in the paper vs. the protocol)
      • The peer-reviewed system has various loopholes: cherry-picking by all parties involved
      • Evidence-b(i)ased medicine?
      • Need more unbiased data → access and analysis
