Odds =
(FROM PROB)
probability / (1 - probability)
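A quick sketch of the probability-odds conversion above (function names are my own, values illustrative):

```python
# Converting between probability and odds.

def prob_to_odds(p):
    """Odds = probability / (1 - probability)."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Inverse: probability = odds / (1 + odds)."""
    return odds / (1 + odds)

print(prob_to_odds(0.75))  # 3.0 (i.e. odds of 3 to 1)
print(odds_to_prob(3.0))   # 0.75
```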
Cluster sampling
divide the population into clusters, often on the basis of geography
Convenience sampling
sampling whoever is easily accessible
Quota sampling
a fixed number of units in each of a number of categories
Snowball sampling
Asking people to pass on details to others
Random sampling
Everyone in the population has an equal chance of being selected
stratified sampling
divide population into groups on basis of some suspected confounding characteristic
systematic sampling
choosing every nth item from a list, beginning at a random point
null hypothesis
alternate hypothesis
null = the opposite of what you want to prove, e.g. if you want to prove A and B are different, state that no difference exists between A and B
alternate = what you try to prove if the null is rejected; often the actual research question
Error
Type 1 error = alpha
type 2 error = beta
Type 1 = incorrect rejection of a true null hypothesis = FALSE POSITIVE / chance finding; its probability is alpha (the significance level / p-value threshold)
Type 2 = failure to reject a false null hypothesis = FALSE NEGATIVE, most likely due to small sample size or large variance
traditionally
alpha = 5%, beta = 20%, so power (1 - beta) = 80%
power =
the ability of a study to detect a difference between two groups if such a difference exists
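The alpha/beta/power relationship above can be illustrated by simulation: power is the fraction of trials in which a real difference is detected. All values here (n = 64 per group, effect size 0.5 SD) are invented for illustration; with those numbers, power comes out near the traditional 80%.

```python
import numpy as np

# Monte Carlo sketch of power: simulate many two-group studies with a
# true difference of d standard deviations, and count how often a
# two-sided test at the 5% level detects it.
rng = np.random.default_rng(0)
n, d, trials, z_crit = 64, 0.5, 2000, 1.96   # 1.96 = two-sided 5% cutoff

hits = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)        # mean 0, SD 1
    treated = rng.normal(d, 1.0, n)          # true difference of d SDs
    se = np.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
    z = (treated.mean() - control.mean()) / se
    hits += abs(z) > z_crit                  # "significant" result

print(f"estimated power: {hits / trials:.2f}")   # roughly 0.80
```

Shrinking n or d in the sketch lowers the hit rate, which is exactly the type 2 error mechanism described above (small sample size or large variance).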
Effect size
Difference in outcomes between intervention and control groups, divided by the standard deviation
= a measure of the difference in point estimates
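The effect-size definition above (often called Cohen's d) can be sketched as follows; the group scores are made-up illustration values:

```python
import math

# Cohen's d: difference in group means divided by the pooled standard
# deviation. Data below are invented for illustration.
intervention = [12, 14, 15, 13, 16]
control      = [10, 11, 12, 10, 12]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(intervention), len(control)
pooled_sd = math.sqrt(((n1 - 1) * sample_var(intervention) +
                       (n2 - 1) * sample_var(control)) / (n1 + n2 - 2))
d = (mean(intervention) - mean(control)) / pooled_sd
print(round(d, 2))  # 2.27 with these toy numbers
```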
Reliability tested by
Test-retest correlation (gap long enough to avoid a practice effect, short enough that the thing being measured doesn't change, e.g. a patient's depressive state); often 2-12 days in psychiatry
Cronbach's alpha
measures the internal consistency of a test = correlating each item with the total score and averaging the correlation coefficients
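A sketch of Cronbach's alpha using the standard variance-based formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total score), which captures the same internal-consistency idea as the item-total description above. Item scores are invented:

```python
# Cronbach's alpha from a small made-up item-score matrix.
items = [
    [1, 2, 3, 4],   # item 1: scores for 4 respondents
    [2, 2, 3, 5],   # item 2
    [1, 3, 3, 5],   # item 3
]

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(items)
totals = [sum(scores) for scores in zip(*items)]   # total score per respondent
alpha = (k / (k - 1)) * (1 - sum(sample_var(i) for i in items)
                         / sample_var(totals))
print(round(alpha, 2))  # close to 1 = items hang together well
```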
face validity
construct validity
face = The extent to which a test appears to measure what it claims to measure, based on a superficial assessment.
construct = The extent to which a test actually measures the theoretical construct it's intended to measure, e.g. does Beck's Depression Inventory actually measure depression?
beyond chance agreement = kappa
kappa indicates the level of agreement that can be expected beyond chance
useful for agreement on categorical variables, e.g. presence or absence of a diagnosis
Calculating kappa
Kappa = observed agreement beyond chance / max agreement beyond chance
OR
kappa = (observed agreement - agreement expected by chance) / (100% - agreement expected by chance)
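The kappa formula above, worked through on a made-up 2x2 agreement table (rows = rater A, columns = rater B, diagnosis present/absent):

```python
# Cohen's kappa from a 2x2 agreement table of invented counts.
table = [[20, 5],    # A present: B present / B absent
         [10, 15]]   # A absent:  B present / B absent

n = sum(sum(row) for row in table)                      # 50 paired ratings
observed = (table[0][0] + table[1][1]) / n              # 0.70 raw agreement
row = [sum(r) for r in table]                           # rater A marginals
col = [sum(c) for c in zip(*table)]                     # rater B marginals
expected = sum(r * c for r, c in zip(row, col)) / n**2  # 0.50 by chance
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # 0.4
```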
intention to treat analysis
Participants are counted in the groups to which they were initially randomised, regardless of whether they complied with their allocation. More realistic (reflects real life) and more generalisable
N of 1 trial
doctor and patient run their own personal trial, e.g. to find which medication works best
not generalisable
a pharmacist blinds both doctor and patient
how to control confounding?
Matching = make confounders equally distributed
Restriction = avoid including groups with significant **confounder influence**
Randomisation = helps distribute confounders
Types of bias = selection bias
Berkson
neyman
response
unmasking
Verification bias
Berkson = admission bias (mnemonic: Birkenstocks on in hospital)
Neyman = incidence-prevalence bias: if a risk factor is linked to a rapidly fatal condition (e.g. sepsis), measured prevalence will be low because of deaths (mnemonic: Neyman, you're a dead man)
response = people who respond are, by nature, different from those who don't
unmasking - risk factor unmasks rather than causes an event
verification bias (work up bias)
Emic perspective
ETIC perspective
emic = MINE = internal observer
etic = outside observer
Positive skew
negative skew
pulled +ve --> tail pulled to the right
pulled -ve --> tail pulled to the left
variance
sum of squared differences of individual observations from the mean, divided by the number of observations
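The variance definition above, computed directly on made-up data:

```python
# Population variance: squared deviations from the mean, averaged over
# the number of observations (data invented for illustration).
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)                              # 5.0
variance = sum((x - mean) ** 2 for x in data) / len(data)
print(variance)  # 4.0
```

Note this divides by n (the population form, as in the definition above); sample variance divides by n - 1.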