How do we know what is effective?
1. Authority figures tell us - but they may be wrong
What are the different types of intervention studies?
What are the key features of a randomised controlled trial?
What are the different sections of the CASP checklist for RCTs?
Section A: Are the results of the trial valid?
Section B: What are the results?
Section C: Will the results help locally?
What do outcome measures in RCTs need to be?
What needs to be measured to make sure that the choice of outcome measure is relevant to patients?
-> If it is a proxy measure (an indirect measure of the outcome), are we confident that it is linked to the outcome of interest?
What are the ethical issues that can arise in RCTs?
What are the different options for the control group in RCTs?
How can we randomise in RCTs?
Why do we randomise in RCTs?
To reduce the risk of bias and confounding
-> Randomisation should lead to known and unknown confounders being equally distributed (provided that the sample size is large enough)
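A minimal sketch (my addition, not from the source) of simple randomisation, using a hypothetical participant list and a made-up confounder (age); it illustrates why, with a large enough sample, random allocation tends to balance confounders between arms:

```python
import random
import statistics

def randomise(participants, seed=42):
    """Simple (unrestricted) randomisation: each participant is
    independently allocated to 'intervention' or 'control'."""
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "control"]) for pid in participants}

# Hypothetical participants with a potential confounder (age)
rng = random.Random(0)
ages = {f"P{i}": rng.randint(18, 80) for i in range(1000)}

allocation = randomise(ages.keys())

for arm in ("intervention", "control"):
    arm_ages = [age for pid, age in ages.items() if allocation[pid] == arm]
    print(arm, "n =", len(arm_ages), "mean age =", round(statistics.mean(arm_ages), 1))

# With a large enough sample, mean age (and any unknown confounder)
# ends up similar in both arms purely by chance.
```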
What is the definition of a ‘confounder’?
“A variable, other than the one being studied, that can cause or prevent the outcome of interest - it influences both the dependent and independent variables”
- A confounding variable must be associated with the exposure/intervention and, independently of the exposure, with the outcome
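A small illustrative simulation (my addition, not from the source) of confounding: a hypothetical confounder (smoking) raises both the chance of the exposure and the chance of the outcome, so the exposed group looks worse even though the exposure itself has no effect here:

```python
import random

rng = random.Random(1)
n = 100_000

exposed = unexposed = exposed_with_outcome = unexposed_with_outcome = 0
for _ in range(n):
    smoker = rng.random() < 0.3                              # confounder
    is_exposed = rng.random() < (0.6 if smoker else 0.2)     # confounder -> exposure
    outcome = rng.random() < (0.4 if smoker else 0.1)        # confounder -> outcome
    # NOTE: the exposure has no effect on the outcome in this simulation
    if is_exposed:
        exposed += 1
        exposed_with_outcome += outcome
    else:
        unexposed += 1
        unexposed_with_outcome += outcome

print("risk in exposed   :", round(exposed_with_outcome / exposed, 3))
print("risk in unexposed :", round(unexposed_with_outcome / unexposed, 3))
# The exposed group shows a higher risk only because smokers are
# over-represented in it - a spurious association due to confounding.
```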
What are the requirements for confounding in RCTs?
In an RCT:
What is allocation concealment?
The person recruiting a participant to a trial does not know which group the participant will be allocated to
Why is allocation concealment important?
What are the possible problems that might occur with intervention groups in RCTs?
Why do you need to blind the participants in an RCT?
Why do you need to blind the HCPs providing their care?
Why do you need to blind the researchers and statisticians?
What are the different degrees of blinding?
How do we deal with contamination/crossover in RCTs?
ANALYSE BY INTENTION TO TREAT (ITT)
(if analysed according to treatment received, the sample is no longer randomised - allocation bias)
What is per-protocol analysis?
For an intervention where adherence has been low -> sometimes it is useful to estimate the effect of adhering to the intervention as specified in the trial protocol (the ‘per-protocol effect’)
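A minimal sketch (my addition, not from the source) contrasting intention-to-treat with per-protocol analysis, using made-up trial data in which one intervention-arm participant does not adhere; the `Participant` fields and numbers are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    allocated: str   # arm the participant was randomised to
    adhered: bool    # did they actually follow the allocated treatment?
    recovered: bool  # outcome

# Hypothetical trial data (illustrative only)
trial = [
    Participant("intervention", True,  True),
    Participant("intervention", True,  True),
    Participant("intervention", False, False),  # non-adherent / crossed over
    Participant("control",      True,  False),
    Participant("control",      True,  True),
    Participant("control",      True,  False),
]

def recovery_rate(group):
    return sum(p.recovered for p in group) / len(group) if group else float("nan")

# Intention-to-treat: analyse everyone in the arm they were randomised to,
# regardless of the treatment they actually received.
itt_intervention = [p for p in trial if p.allocated == "intervention"]
itt_control      = [p for p in trial if p.allocated == "control"]

# Per-protocol: restrict the analysis to participants who adhered.
pp_intervention = [p for p in itt_intervention if p.adhered]
pp_control      = [p for p in itt_control if p.adhered]

print("ITT          :", recovery_rate(itt_intervention), "vs", recovery_rate(itt_control))
print("Per-protocol :", recovery_rate(pp_intervention), "vs", recovery_rate(pp_control))
# ITT preserves the comparability created by randomisation; the per-protocol
# estimate can be biased because adherers may differ from non-adherers.
```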
What are the possible biases that could arise when assessing outcomes?
(Reduced by blinding if possible - outcomes must be assessed in the same way for both study arms, otherwise you get detection bias)
What is detection bias?
Systematic differences between groups in how outcomes are determined
What might affect follow-up rates?