System Validation
Ensures that measurement tools provide accurate and reliable data
Concurrent Validation
Assessment of a new system against an existing (reference) system while both record the same trials simultaneously
Accuracy
Ability to measure a quantity close to its true value, which is approximated in practice by a 'gold-standard' (criterion) system
Reliability
The extent to which measurements can be replicated; consistency or reproducibility
Precision
Spread of repeated measurements
Measuring accuracy
Agreement and Correlation
- Agreement is the more appropriate estimate of accuracy
Agreement
How closely the measurements from two systems match each other across a range of values; assesses whether measurement of the same variable by two different systems produces similar results
Pearson Correlation
Measures how strongly a pair of variables is linearly related; it is only interpreted if the p-value is significant
How is Pearson correlation a misleading measure of agreement?
Correlation quantifies the strength of a (linear) relationship, not agreement: two systems can be almost perfectly correlated yet differ by a constant or proportional bias, so a high r does not mean the measurements are interchangeable (illustrated below)
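A minimal sketch (hypothetical data, assuming Python with NumPy and SciPy) of two systems that correlate almost perfectly yet clearly disagree:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements: the new system reads ~10 units above the reference.
rng = np.random.default_rng(0)
reference = rng.normal(50, 5, size=30)                     # reference ("gold-standard") system
new_system = reference + 10 + rng.normal(0, 0.5, size=30)  # new system with a constant offset

r, p = stats.pearsonr(reference, new_system)
bias = np.mean(new_system - reference)

print(f"Pearson r = {r:.3f} (p = {p:.2g})")    # r is close to 1: near-perfect correlation
print(f"Mean difference (bias) = {bias:.1f}")  # ~10 units: the systems do not agree
```

Despite r being close to 1, every reading differs by about 10 units, which is why agreement statistics such as Bland-Altman analysis are preferred.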
Bland-Altman Analysis
Analysis with 95% limits of agreement (LOA) that measures the agreement between two measurement systems, rather than comparing the new system to a 'perfect' system; if the LOA fall within acceptable limits, the reference system and the new system can be used interchangeably
How to calculate 95% LOA
95% LOA = bias ± 1.96 × SD of the differences (bias = mean of the differences between paired measurements)
Challenges with BA analysis
A priori acceptability limits: the acceptable width of the LOA must be decided in advance on clinical or practical grounds, not derived from the data afterwards
BA Analysis Steps
1. Collect paired measurements of the same variable from the two systems.
2. Calculate the difference between each pair (new - reference).
3. Calculate the mean difference (bias) and the SD of the differences.
4. Compute the 95% LOA = bias ± 1.96 × SD of the differences.
5. Plot each difference against the pair's mean and compare the LOA to the a priori acceptability limits (see the sketch below).
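A minimal Bland-Altman sketch (assuming Python with NumPy; the paired measurements are hypothetical):

```python
import numpy as np

def bland_altman(reference, new_system):
    """Return bias and 95% limits of agreement for paired measurements from two systems."""
    reference = np.asarray(reference, dtype=float)
    new_system = np.asarray(new_system, dtype=float)

    diffs = new_system - reference   # per-pair differences
    bias = diffs.mean()              # mean difference (systematic offset)
    sd = diffs.std(ddof=1)           # SD of the differences

    loa_lower = bias - 1.96 * sd     # 95% LOA = bias +/- 1.96 x SD
    loa_upper = bias + 1.96 * sd
    return bias, loa_lower, loa_upper

# Hypothetical data: knee-angle measurements (degrees) from a reference and a new system.
reference = [42.1, 55.3, 48.7, 60.2, 51.0, 45.8]
new_system = [43.0, 56.1, 49.9, 61.0, 51.4, 47.2]

bias, lo, hi = bland_altman(reference, new_system)
print(f"bias = {bias:.2f} deg, 95% LOA = [{lo:.2f}, {hi:.2f}] deg")
```

A full analysis would also plot each difference against the pair mean and check the LOA against the a priori acceptability limits.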
Reliability Theory
Forms of measurement error
RANDOM ERROR: Noise or unpredictable error; averages out to zero over repeated measurements and can be mitigated by taking multiple measurements
SYSTEMATIC ERROR: Scores trend up or down over multiple measurements; directional bias that is usually corrected by a simple addition or subtraction (contrasted in the sketch below)
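A small simulation (hypothetical numbers, assuming Python with NumPy) contrasting the two error forms:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0

# RANDOM ERROR: zero-mean noise on each trial; averaging many trials cancels it out.
noisy = true_value + rng.normal(0, 5, size=50)
print(f"mean of noisy trials - true value: {noisy.mean() - true_value:.2f}")    # near 0

# SYSTEMATIC ERROR: a constant +3 offset on every trial; averaging does not remove it,
# but subtracting the known bias does.
biased = true_value + 3 + rng.normal(0, 5, size=50)
print(f"mean of biased trials - true value: {biased.mean() - true_value:.2f}")  # ~ +3
print(f"after bias correction: {(biased - 3).mean() - true_value:.2f}")         # near 0
```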
Test re-test reliability
The consistency of scores when the same measurement is repeated on the same participants under the same conditions
reliability equation
reliability = true variance / (true variance + error variance)
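Worked example (hypothetical numbers): if the true between-participant variance is 9 and the error variance is 1, reliability = 9 / (9 + 1) = 0.90, i.e., 90% of the observed variance reflects true differences between participants rather than measurement error.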
Intraclass correlation coefficient
Quantifies reliability
- Calculates an index from the same variable measured on multiple occasions; within a group/class
Interclass correlation coefficient
Correlation that assesses whether two variables from different classes are correlated
What are the 3 models of ICC
Model 1: Each participant is measured by a different, randomly selected set of raters
Model 2: Participants are measured by the same raters, and the reliability can be generalized to other raters of the same type
Model 3: Participants are measured by the same set of raters, and those raters are the only raters of interest (no generalization to other raters)
What are the two types of ICCs
Type 1: single measurement
Type k: mean of k measurements or trials (both types are computed in the sketch below)
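A minimal sketch of the ANOVA-based calculation (assuming Python with NumPy; the formulas follow the two-way random-effects model of Shrout & Fleiss, and the data matrix is hypothetical, with one row per participant and one column per occasion/rater):

```python
import numpy as np

def icc_two_way_random(data):
    """ICC(2,1) (single measurement) and ICC(2,k) (mean of k measurements)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape                    # n participants, k raters/occasions

    grand_mean = data.mean()
    row_means = data.mean(axis=1)        # per-participant means
    col_means = data.mean(axis=0)        # per-rater/occasion means

    # Two-way ANOVA sums of squares and mean squares
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((data - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Type 1: reliability of a single measurement
    icc_single = (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )
    # Type k: reliability of the mean of k measurements
    icc_mean = (ms_rows - ms_error) / (ms_rows + (ms_cols - ms_error) / n)
    return icc_single, icc_mean

# Hypothetical test-retest data: 5 participants measured on 3 occasions.
scores = [[10, 11, 10],
          [14, 15, 15],
          [12, 12, 13],
          [ 9, 10,  9],
          [16, 15, 16]]
print(icc_two_way_random(scores))
```

In practice an ICC is usually obtained from a statistics package; the manual computation above is only meant to show where the index comes from and how the single-measurement and mean-of-k forms differ.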
Types of challenges with test re-test reliability
Participant factors
Equipment and measurement factors
Environment and protocol factors
Participant factors of test re-test reliability
BIOLOGICAL VARIABILITY
- Natural fluctuations in trial-to-trial movement patterns
- Population considerations, health/skill status
- Solution: Repeated trials, increased sample size, warm-up, inclusion/exclusion criteria
LEARNING EFFECT
- Task adaptation through repeated trials
- Solution: Familiarization trials, randomization of conditions
FATIGUE, RECOVERY, AND PSYCHOLOGICAL FACTORS:
- Cumulative fatigue, motivation/boredom
- Solution: Spacing of trials, adequate rest between sessions, clear instructions