3. “It is everyone’s responsibility to find out how to ask questions systematically, find answers from searching the literature, critically appraise the literature and apply the results to practice.”
Rinaldo Bellomo
39–40. • “a little significance”
• “a definite trend is evident”
• “a clear tendency”
• “almost achieved significance”
The data is practically meaningless
42. • “In my experience” = once
• “In case after case” = twice
• “In a series of cases” = three times
• “It is generally believed that…” = a couple of others think so too
• “A highly significant area for exploratory study” = a totally useless topic in my underpowered study…
When Chris asked me to do this I thought – are you for real, Chris? “Why all published research is wrong”… Do you really think I can cover all of this in 20 minutes?
Sounds easy? Right? Great – but how…
Where did this all come from?
An open-access article by John Ioannidis in 2005.
In it, Ioannidis began the discussion about problems with studies, their analyses, publication and reporting.
Ioannidis randomly selected 50 ingredients and ran PubMed queries for recent studies that evaluated each ingredient’s relationship with cancer.
Effect sizes shrank with meta-analyses.
Ronald Fisher (a UK statistician) introduced the P value in the 1920s.
He did not mean it to be a definitive test.
He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: i.e. that it was worth a second look.
The idea was to run an experiment, then see if the results were consistent with what random chance might produce.
Researchers would first set up a “null hypothesis” that they wanted to disprove – such as there being no correlation, or no difference between two groups.
Next they would play devil’s advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as what was actually observed.
This probability is the P value.
Most look at this and say that there is a 1% chance of the findings being wrong.
But this is wrong.
The P value cannot say this – you also need to know the odds that a real effect was there in the first place.
The P value actually means that there is a 1% chance that results as extreme as these would occur when there is really no difference in the experiment – e.g. when a drug has no effect.
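Fisher's idea can be made concrete with a toy simulation (my own sketch, not from the talk): draw two groups from the same population – so the null hypothesis is true by construction – and compute a permutation-test P value. Roughly 5% of these null experiments will still come out "significant" at P < 0.05.

```python
import random

random.seed(1)

def one_null_experiment(n=30, n_perm=200):
    """Two samples from the SAME population; permutation-test P value."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    observed = abs(sum(a) / n - sum(b) / n)
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / n)
        if diff >= observed:
            extreme += 1
    # P value: chance of a difference at least this extreme under the null
    return extreme / n_perm

p_values = [one_null_experiment() for _ in range(200)]
# Roughly 5% of truly-null experiments look "significant" at P < 0.05
print(sum(p < 0.05 for p in p_values) / len(p_values))
```

Note what this does and does not show: the P value tells you how often chance alone produces results this extreme; it says nothing about the probability that your hypothesis is true.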
Consider 1000 hypotheses, of which only 10% (100) are true.
Random error can make a hypothesis that is really false look true.
These are false positives (FP).
Medicine accepts that this happens about 1 in 20 times, so among the 900 false hypotheses there are 45 FPs.
If there are 100 true positives (TP) and 45 FPs, then almost a third of the results that look positive would be wrong.
But it is worse than that.
There is another type of error: false negatives (FN), where there is a true effect but it is mistaken for no effect.
Say 20% of true findings fail to be detected (a figure that is difficult to quantify) – in this case, 20.
Now researchers see 125 hypotheses as true, of which 45 are not.
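The arithmetic above can be checked in a few lines. The figures (1000 hypotheses, 10% truly real, a 1-in-20 false-positive rate, 20% of true effects missed) are the ones assumed in the slides:

```python
n_hypotheses = 1000
prior_true = 0.10     # 10% of hypotheses are actually true
alpha = 0.05          # false-positive rate (1 in 20)
beta = 0.20           # false-negative rate (20% of true effects missed)

n_true = int(n_hypotheses * prior_true)        # 100 real effects
n_false = n_hypotheses - n_true                # 900 null hypotheses

false_positives = int(n_false * alpha)         # 45 nulls that look "significant"
true_positives = int(n_true * (1 - beta))      # 80 real effects actually detected
missed = n_true - true_positives               # 20 false negatives

claimed_positive = true_positives + false_positives   # 125 apparent "discoveries"
fraction_wrong = false_positives / claimed_positive

print(false_positives, true_positives, claimed_positive)  # 45 80 125
print(round(fraction_wrong, 2))                           # 0.36
```

So once false negatives are counted, more than a third of the "positive" results are wrong – under quite generous assumptions about how often hypotheses are true to begin with.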
A growing number of findings cannot be replicated, because many studies may not have found a real result in the first place.
Perhaps we should only be looking at P values < 0.005.
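The same arithmetic shows why a stricter threshold helps. Using the figures assumed above (1000 hypotheses, 10% truly real, 80% power), a sketch of the false-discovery fraction at each threshold:

```python
def fdr(alpha, n=1000, prior_true=0.10, power=0.80):
    """Fraction of 'significant' results that are actually false."""
    n_true = n * prior_true
    fp = (n - n_true) * alpha     # false positives among the nulls
    tp = n_true * power           # true effects actually detected
    return fp / (fp + tp)

print(round(fdr(0.05), 2))    # 0.36 -- conventional threshold
print(round(fdr(0.005), 2))   # 0.05 -- the proposed stricter threshold
```

Tightening the threshold from 0.05 to 0.005 cuts the fraction of wrong "discoveries" from over a third to about one in twenty, under these assumptions.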
Start at the beginning:
Studies themselves
Interesting but not really relevant?
Steroids for traumatic brain injury
Another medical example – investigations into H2 blockers in PUD,
before it was realised that PUD is related to Helicobacter pylori.
Selective
ACS – pts over 75 excluded from most studies
Underpowered
The National Registry of Myocardial Infarction (NRMI)
Large, US observational registry
Collects baseline data, procedural, therapeutic and outcome data on discharge
>1 million NSTEACS patients
Findings are more likely to be false with small sample sizes.
Not clinically relevant endpoints
(Ad / Vasopressin / high-dose Ad)
Since publication in 1990, results from the National Acute Spinal Cord Injury Study II (NASCIS II) trial have changed the way patients suffering an acute spinal cord injury (SCI) are treated.
Though well-designed and well-executed, both NASCIS II and III failed to demonstrate improvement in primary outcome measures as a result of the administration of methylprednisolone. Post-hoc comparisons, although interesting, did not provide compelling data to establish a new standard of care in the treatment of patients with acute SCI.
Evidence of the drug's efficacy and impact is weak and may only represent random events.
Renamed
In the late 1940s, before there was a polio vaccine, health authorities noted that polio cases increased with ice cream and soft drink consumption.
Eliminating such treats was part of the advice given to combat the spread of the disease.
Polio was more common in summer, when people eat more ice cream.
Hence: correlation vs causation.
Pharma – All bad
75% of all US research is funded by Pharma.
Surgeon – cannot recommend surgery
Interventional cardiologist - stenting
UQ RORT:
One of Australia's leading universities is investigating new concerns of possible academic misconduct by two former academics.
The University of Queensland's Dr Caroline Barwood and Professor Bruce Murdoch published a peer-reviewed paper in the prestigious European Journal of Neurology, heralding a major breakthrough in the treatment of Parkinson's disease.
The university made the unusual admission that it could find no data or evidence that the research was ever conducted.
Before the article was retracted, the study's apparent success led to a number of grants.
Ten months after the allegations of academic misconduct were first raised and one month after the investigation was referred to the CMC, the university accepted part of a $300,000, five-year research fellowship on behalf of Dr Barwood.
Don’t blindly believe open access – many journals will simply accept my paper if I pay the fee!
Negative trials are hard to publish.
High-impact journals want the ‘first’ of something – often exaggerated.
Where we end up is this >>>
What are we doing now that is harmful to our patients?
Don’t just skim the article –
not just the abstract and conclusions.
Learn a little about stats
Don’t be fooled by a high-quality journal.
Know and review the literature around the topic, not just the article.
I don’t recommend that you go home tonight and try a “booty call” with your partner
Perhaps read the article and find out the details.
Consistent and transparent
Be aware of your own biases – especially confirmation bias.
Stop searching for information to confirm your own views. Read broadly.
Don’t believe a single article’s findings – look for bodies of work around a topic.
Who do you believe and what do you believe?
Do you believe who speaks loudest?
Sit and contemplate your position… Put the patient (not the patient’s leg) at the forefront of your focus.
Be sceptical!