Test-Retest Reliability
Very important: the test-retest interval, i.e., how long since the test was last taken. (For example, I probably could not retake the cognitive tests Dr. Esmat used in her study for a long time, if ever, without remembering them.)
Testing effect: did I do better because I learned the test, or because I actually improved?
Carryover and testing effects both threaten test-retest reliability. (See the sketch below for how test-retest reliability is quantified.)
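
A minimal sketch (Python, with made-up scores) of how test-retest reliability is often quantified: correlate the scores from two administrations of the same test.

    import numpy as np

    # Hypothetical scores for six subjects on two administrations of the same test
    session1 = np.array([10.0, 12.0, 9.0, 15.0, 11.0, 14.0])
    session2 = np.array([11.0, 12.5, 9.5, 14.0, 11.5, 15.0])

    # Test-retest reliability is often summarized as the correlation between sessions
    # (closer to 1 = scores are more stable across the retest interval)
    r = np.corrcoef(session1, session2)[0, 1]
    print(f"test-retest correlation: {r:.2f}")
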
Internal Consistency
A type of reliability: the extent to which the items of an instrument all measure the same characteristic.
Criterion-Related Validity
Criterion-related validity: the target test is compared against an established gold-standard (criterion) measure; includes concurrent and predictive validity.
ICC
ICC: intraclass correlation coefficient, the reliability coefficient used for intra- and inter-rater reliability (see Rater Reliability below).
Relationship between validity and reliability
An instrument can be reliable without being valid, but it cannot be valid unless it is also reliable.

What is the validity counterpart to internal reliability?
Content Validity
Random Errors
Random errors: due to chance; they can affect scores in unpredictable ways.
Three main Types of Reliability
Test-retest, rater (intra- and inter-rater), and internal consistency.
Inter-rater reliability
Inter-rater reliability: variation between 2+ raters who measure same subject
Reliability coefficient: intraclass correlation coefficient (ICC)
Three Sources of Measurement Error
The rater (the person taking the measurement), the measuring instrument itself, and variability in the characteristic being measured.
Validity: Convergence and Discrimination
Convergent validity: two measures believed to reflect the same underlying phenomenon will have similar results or correlate highly
Discriminant (divergent) validity: measures believed to assess different characteristics should give different results, i.e., correlate poorly.
Construct validity is related to convergent and divergent validity.
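
A toy sketch of checking convergence and discrimination with correlations; all data and variable names here are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    anxiety_a = rng.normal(50, 10, 100)            # hypothetical anxiety scale A
    anxiety_b = anxiety_a + rng.normal(0, 3, 100)  # scale B: same construct plus noise
    shoe_size = rng.normal(9, 1.5, 100)            # unrelated characteristic

    # Convergent validity: two measures of the same construct correlate highly
    print(np.corrcoef(anxiety_a, anxiety_b)[0, 1])  # close to 1
    # Discriminant validity: measures of different constructs correlate poorly
    print(np.corrcoef(anxiety_a, shoe_size)[0, 1])  # close to 0
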
Four Types of Measurement Validity
Face, content, criterion-related (concurrent and predictive), and construct validity.
Systematic Errors
Systematic errors: predictable errors of measurement.
(Systematic error is a reliable error: for example, an uncalibrated scale is off by the same amount every time you use it. It causes problems with validity but not reliability.)
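
A small simulation (invented numbers) of the scale example: random error changes from trial to trial and lowers consistency, while a constant calibration offset repeats perfectly but gives consistently wrong values.

    import numpy as np

    rng = np.random.default_rng(1)
    true_weight = rng.normal(70, 12, 50)  # hypothetical true body weights (kg)

    # Random error: different noise each trial -> hurts reliability
    trial1 = true_weight + rng.normal(0, 4, 50)
    trial2 = true_weight + rng.normal(0, 4, 50)
    print(np.corrcoef(trial1, trial2)[0, 1])    # below 1: consistency suffers

    # Systematic error: an uncalibrated scale that always reads 3 kg heavy
    biased1 = true_weight + 3
    biased2 = true_weight + 3
    print(np.corrcoef(biased1, biased2)[0, 1])  # 1.0: perfectly repeatable (reliable)
    print((biased1 - true_weight).mean())       # always ~3 kg off (not valid)
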
Predictive Validity
Target test = the new, not-yet-validated test. In predictive validity, the target test is given first and used to predict a criterion (gold standard) score measured later; contrast with concurrent validity, where both are taken at the same time.
Responsiveness to Change
Responsiveness to change: the instrument's ability to detect real change in the measured characteristic over time.
Intra-rater reliability
Intra-rater reliability: stability of data recorded by one individual over 2+ trials
Reliability coefficient: intraclass correlation coefficient (ICC)
Face Validity
Face validity: the instrument appears, on the surface, to measure what it is intended to measure (the weakest form of validity).
Concurrent Validity
Concurrent validity: the two measures (the gold standard and the comparison/target test) are taken at the same time when testing criterion-related validity.
Also used when a new or untested measure (target test) may be more efficient than a more established method, e.g., IntegNeuro.
Measurement error
Relates to reliability.
Measurement error: the difference between observed and true scores.
Formula: Observed score = True score + Error, so Error = Observed score − True score.
(Her slide had Observed = True − Error, but since she kept saying measurement error is the difference between observed and true scores, Observed = True + Error is the consistent form, so I changed the formula.)
Reliability
Validity
Reliability: how consistent and free from error the instrument is.
Validity: whether the test measures what it intends to measure.
Reliability Coefficient (formula and interpretation)
Reliability coefficient = true score variance / (true score variance + error variance)
Zero is poor reliability (0% reliable → none of the variance is attributable to true differences)
One is the best reliability (100% reliable → all of the variance is attributable to true differences)
Reliability Coefficient
Estimate reliability based on statistical concept of variance
Reliability= how much of total variance is attributed to true differences between scores
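
A quick simulation (invented numbers) tying these two cards together: build observed scores as true score + error, then recover the reliability coefficient as the share of total variance that comes from true differences.

    import numpy as np

    rng = np.random.default_rng(2)
    true_scores = rng.normal(100, 15, 10_000)  # hypothetical true scores
    error = rng.normal(0, 5, 10_000)           # random measurement error
    observed = true_scores + error             # observed score = true score + error

    # Reliability coefficient = true variance / (true variance + error variance)
    reliability = true_scores.var() / (true_scores.var() + error.var())
    print(f"reliability: {reliability:.3f}")   # ~0.90: 90% of variance is true differences

    # Check: total observed variance is approximately true variance + error variance
    print(observed.var(), true_scores.var() + error.var())
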
Rater Reliability
Intra-rater reliability: stability of data recorded by one individual over 2+ trials
Inter-rater reliability: variation between 2+ raters who measure same subject
Reliability coefficient: intraclass correlation coefficient (ICC)
Intra-rater –> same person
Inter-rater –> between more than one person
Must be same subject being tested and same setup
Test-retest is looking more at the test (or the person taking the test), whereas Intra-rater reliability would be more about the person giving the test.
Rater bias: a rater who saw the first score may (consciously or not) try to reproduce that value the second time. (Overcome by blinding the tester to the outcome of the previous test.)
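
A self-contained sketch of one common ICC form, ICC(2,1) (two-way random effects, absolute agreement, single rater, per Shrout & Fleiss); the rating matrix is illustrative only.

    import numpy as np

    # Rows = subjects, columns = raters (hypothetical ratings)
    X = np.array([
        [9, 2, 5, 8],
        [6, 1, 3, 2],
        [8, 4, 6, 8],
        [7, 1, 2, 6],
        [10, 5, 6, 9],
        [6, 2, 4, 7],
    ], dtype=float)
    n, k = X.shape
    grand = X.mean()

    # Two-way ANOVA mean squares: subjects (rows), raters (columns), residual
    ss_total = ((X - grand) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_rater = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_err = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))

    # ICC(2,1): reliability of a single rater, raters treated as random effects
    icc = (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err + k * (ms_rater - ms_err) / n)
    print(f"ICC(2,1) = {icc:.2f}")
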
Regression Towards The Mean
Regression towards the mean: subjects with extreme scores on a first test tend to score closer to the group average on retest, which can be mistaken for real change.