Clinically Relevant Statistics Flashcards

(71 cards)

1
Q

Define mean

A
  • average of all scores
2
Q

Define median

A
  • after ranking scores, the score in the middle
  • separates the higher half from the lower half
  • helps identify major outliers
3
Q

Define mode

A
  • score that occurs the most frequently
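The three measures of central tendency above can be computed with Python's standard library; a minimal sketch using hypothetical exam scores:

```python
# Hypothetical exam scores (illustrative only)
from statistics import mean, median, mode

scores = [70, 75, 75, 80, 95]

print(mean(scores))    # 79 -> average of all scores
print(median(scores))  # 75 -> middle score after ranking
print(mode(scores))    # 75 -> most frequent score
```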
4
Q

Explain regression toward the mean

A
  • extreme scores on a first measurement tend to fall closer to the mean when measured again
  • with repeated measurement or sampling, results drift back toward the true population mean
5
Q

Independent variable

A
  • manipulated by the researcher
6
Q

Dependent variable

A
  • what is measured
7
Q

Define validity

A
  • truthfulness, meaningfulness, and/or accuracy of study results
8
Q

2 categories of validity

A
  • external validity
  • internal validity
9
Q

Define external validity

A
  • generalizability of results
10
Q

Define internal validity

A
  • extent to which results are attributable to the study itself rather than bias or error
  • controlled by the study design (e.g., blinding)
11
Q

What are the 5 types of validity

A
  • face
  • content
  • concurrent
  • predictive
  • construct
12
Q

Define face validity

A
  • at face value, does the measure appear to assess what it was designed to measure
13
Q

Define content validity

A
  • does the measure represent all aspects of the construct it is intended to measure
14
Q

Define concurrent validity

A
  • correlates with gold standard
  • how well a new assessment aligns with gold standard
15
Q

Define predictive validity

A
  • test can be used to predict a future score or outcomes
16
Q

Define construct validity

A
  • how well measure captures defined entity
  • how well measure captures the theoretical concept it’s meant to measure
17
Q

How is validity typically measured

A
  • correlation coefficient (on a 0-1 scale, closer to 1 indicates better validity)
18
Q

Interval and ratio data use what statistical analysis

A

Pearson correlation coefficient (r)

19
Q

ordinal uses what statistical analysis

A

Spearman rank correlation (rho)

20
Q

nominal uses what statistical analysis

A

phi coefficient
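The three statistics above (Pearson for interval/ratio data, Spearman rank for ordinal, phi for nominal) can be sketched from their textbook formulas. A hedged illustration (function names are mine; the rank helper ignores ties, which a full Spearman implementation would average):

```python
from math import sqrt

def pearson(x, y):
    # Pearson r for interval/ratio data
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman rho for ordinal data: Pearson r applied to the ranks
    def ranks(v):  # simple ranking; does NOT average tied ranks
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

def phi(a, b, c, d):
    # phi coefficient from a 2x2 table of nominal data:
    #          y=1  y=0
    #   x=1     a    b
    #   x=0     c    d
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
```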

21
Q

define reliability

A
  • consistency of a specific measure
  • able to produce consistent repeated measures
22
Q

what are the 2 types of reliability

A
  • intrarater
  • interrater
23
Q

Define intrarater reliability

A
  • consistency and reliability of a single rater
24
Q

Define interrater reliability

A
  • consistency and reliability of multiple raters
25
Q

Measures of reliability are expressed as what

A
  • intraclass correlation coefficient (ICC)
26
Q

ICC requires what type of data

A
  • interval or ratio (continuous) data
27
Q

What is considered a good ICC

A
  • greater than 0.75
28
Q

What is moderate ICC

A
  • 0.51-0.75
29
Q

What is low ICC

A
  • 0.50 or less
30
Q

What is agreement

A
  • raters take the measure and state whether it is negative or positive
  • expressed as percent agreement
  • the most basic measure
31
Q

What is the kappa statistic

A
  • measure of agreement that factors out the agreement expected by chance
32
Q

almost perfect agreement between raters is what

A
  • 0.81-1.0
33
Q

substantial agreement between raters is what

A
  • 0.61-0.80
34
Q

Moderate agreement between raters is

A
  • 0.41-0.60
35
Q

fair agreement between raters is

A
  • 0.21-0.40
36
Q

Slight agreement between raters is

A
  • 0.01-0.20
37
Q

poor/equal to chance agreement between raters is

A
  • less than or equal to 0
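Percent agreement and kappa can be illustrated with a short sketch (ratings are made up, not from the deck); kappa = (p_o - p_e)/(1 - p_e), where p_e is the agreement expected by chance:

```python
def percent_agreement(r1, r2):
    # most basic measure: fraction of cases where the raters agree
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    # agreement corrected for chance: (p_o - p_e) / (1 - p_e)
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n)
              for c in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

rater1 = ["+", "+", "-", "-"]
rater2 = ["+", "-", "-", "-"]
print(percent_agreement(rater1, rater2))  # 0.75
print(cohens_kappa(rater1, rater2))       # 0.5 -> "moderate agreement"
```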
38
Q

define precision

A
  • absolute reliability
39
Q

SEM is derived from what

A
  • ICC and standard deviation
39
Q

What is precision expressed as

A
  • standard error of measurement (SEM), which provides insight into meaningful changes
40
Q

Define minimal detectable change (MDC)

A
  • smallest amount of change an instrument can accurately measure
  • changes must exceed the MDC to be beyond measurement error (below it, the change is too small to say a real change occurred)
  • no clinical context, just that the change is beyond error
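A sketch of how SEM and MDC are commonly computed; the SEM-from-ICC formula and the 95%-level MDC formula are the standard forms in the rehab literature (the cards name the ingredients but not the formulas), and the numbers are hypothetical:

```python
from math import sqrt

def sem(sd, icc):
    # standard error of measurement, derived from the SD and the ICC
    return sd * sqrt(1 - icc)

def mdc95(sem_value):
    # minimal detectable change at the 95% confidence level
    return 1.96 * sem_value * sqrt(2)

s = sem(sd=5.0, icc=0.91)  # 5 * sqrt(0.09) = 1.5
print(round(mdc95(s), 2))  # 4.16 -> observed change must exceed this
```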
41
Q

Define minimal clinically important difference (MCID)

A
  • smallest difference that clinicians and patients would care about (the change that is actually meaningful clinically)
42
Q

What does MCID tell us

A
  • smallest change needed for it to be clinically significant
43
Q

Define ceiling effect

A
  • instrument does not register a further increase in score for high-scoring individuals
  • scores cluster near the top (the instrument was too easy), so changes or differences cannot be detected
44
Q

Define floor effect

A
  • instrument does not register a further decrease in score for low-scoring individuals
  • scores cluster near the bottom (the instrument was too hard), so real differences cannot be seen
45
Q

Things that help look at statistical significance

A
  • p-value
  • precision or confidence intervals
  • type I and type II error
  • power
46
Q

Things that help look at clinical significance

A
  • how large the difference was
  • MCID
  • effect size
  • sensitivity and specificity
  • likelihood ratios
  • number needed to treat
47
Q

Explain p-value

A
  • sets the threshold for type I error
  • 0.05 allows a 5% chance that the difference is due to error/chance
  • carries no clinical relevance
48
Q

Explain confidence intervals

A
  • range of scores expected to contain the true population value
  • expressed with a percentage indicating how confident you are that the population value lies within that range
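For the mean, a common large-sample form of this interval is mean ± 1.96 × SD/√n; a minimal sketch (the z-based approximation is a standard formula, not taken from the cards, and the sample is hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

def ci95_for_mean(sample):
    # approximate 95% CI for the population mean (large-sample z = 1.96)
    m = mean(sample)
    half_width = 1.96 * stdev(sample) / sqrt(len(sample))
    return m - half_width, m + half_width

low, high = ci95_for_mean([8, 10, 12, 10])
# we are ~95% confident the population mean lies between low and high
```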
49
Q

Define type I error

A
  • rejecting the null hypothesis when it is true
  • saying there is a difference when there is NOT
50
Q

Define type II error

A
  • failing to reject the null hypothesis when it is actually false
  • saying there is no difference when there is one
51
Q

What 3 things affect statistical power

A
  • significance level (alpha)
  • effect size
  • sample size
52
Q

How does sample size affect power

A
  • a larger sample size increases power
  • it is able to detect smaller differences between groups
53
Q

What does effect size do

A
  • determines the magnitude of the treatment effect
  • allows for normalized comparison of results
  • accounts for variation
54
Q

Most common effect size variable

A
  • Cohen's d
55
Q

Small effect size is

A
  • 0.20
56
Q

moderate effect size is

A
  • 0.50
57
Q

large effect size is

A
  • 0.80
58
Q

What would a negative effect size mean

A
  • indicates a decrease
  • e.g., pain goes from an 8 to a 2
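The Cohen's d thresholds above apply to a value computed as the mean difference divided by the pooled standard deviation; a hedged sketch with made-up numbers, where pain dropping from a mean of 8 to 2 yields a large negative d:

```python
from math import sqrt

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # pooled-SD form of Cohen's d: (mean1 - mean2) / pooled SD
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                     / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical: pain falls from a mean of 8 to a mean of 2
d = cohens_d(m1=2.0, sd1=2.0, n1=20, m2=8.0, sd2=2.0, n2=20)
print(d)  # -3.0 -> large effect; the negative sign indicates a decrease
```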
59
Q

What is gold standard

A
  • a test with zero false positives and zero false negatives
  • what we compare new tests to
60
Q

Define sensitivity

A
  • how well a test identifies those who HAVE the condition
  • a negative result on a highly sensitive test confidently rules the condition OUT
  • SnOUT
61
Q

Define specificity

A
  • how well a test identifies those who do NOT have the condition
  • a positive result on a highly specific test confidently rules the condition IN
  • SpIN
62
Q

What are the limitations to sensitivity and specificity

A
  • do not tell us much about clinical significance
  • do not by themselves change the probability that a patient has the condition
63
Q

What is likelihood ratio

A
  • incorporates both sensitivity and specificity
  • direct estimate of how much a test result will change the odds of having a condition
64
Q

If a positive likelihood ratio is above 5 this means

A
  • there was a moderate to strong increase in probability
65
Q

If negative likelihood ratio is below 0.2 this means

A
  • there was a moderate to strong decrease in probability
66
Q

What is the pre-test probability

A
  • prevalence rate (the chance the patient has the condition based on prevalence)
  • obtained from published rates or personal experience
67
Q

Patient history does what

A
  • develops a working hypothesis
68
Q

What is post-test probability

A
  • likelihood that a patient has a specific disorder after considering the pre-test probability and the likelihood ratio for the specific test used
  • must determine whether it has crossed the treatment threshold
69
Q

What is treatment threshold

A
  • the probability that the patient has the condition above which you are confident enough to begin treatment without further testing
70
Q

What are the Ottawa ankle rules

A
  • clinical decision rules to reduce unnecessary use of radiographs in the ER for foot and ankle injuries