What is a reliable test?
- a test whose score measures achievement with precision and consistency
What is reliability?
Reliability formula
X = T + e
X - the observed score
T - the true score
e - the error
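The model above can be sketched as a quick simulation (the true score and error spread below are made-up illustrative values). Because random error averages toward zero over many administrations, the mean observed score approaches the true score:

```python
import random

random.seed(0)

T = 50  # true score (unobservable in practice); hypothetical value
# Each administration yields X = T + e, with e drawn as random error (SD = 3 assumed)
observed = [T + random.gauss(0, 3) for _ in range(10_000)]

mean_X = sum(observed) / len(observed)
print(round(mean_X, 1))  # the average observed score is close to T
```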
The Four Assumptions of Classical Test Score Theory
Standard Error of Measurement (SEM)
- estimates how much measurement error we have by working out how much, on average, an observed score on our test differs from the true score
- it is the standard deviation of the error distribution
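A minimal sketch of the standard SEM formula, SEM = SD x sqrt(1 - reliability), with made-up numbers for the test's standard deviation and reliability:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical test: SD of 10, reliability of .91 -> SEM of about 3
print(round(sem(10, 0.91), 2))
```

Note that a perfectly reliable test (r = 1) has an SEM of zero: the observed score never deviates from the true score.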
Problems with Classical Test Score Theory
Domain Sampling Model
4 Types of reliability
Test-retest reliability
Issues with test-retest
Parallel forms reliability
Ways to change test in parallel forms reliability
Issues with parallel forms reliability
Internal Consistency
- the degree to which the different items within a test all measure the same thing
Examples of internal consistency tests
Split-half reliability
- the test is split into two halves, and the total scores for each half are correlated
advantage of split-half reliability
- only need one test (don't need two forms)
challenge of split-half reliability
-how to divide the test into equivalent halves
issues with split-half reliability
Spearman-Brown formula
- the solution to the problem with split tests: each half will have reduced reliability compared to the total test, so the half-test correlation is stepped up to estimate full-test reliability
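The standard Spearman-Brown step-up for a half-length test, r_full = 2 x r_half / (1 + r_half), as a one-line sketch:

```python
def spearman_brown(r_half: float) -> float:
    """Estimate full-test reliability from the correlation between two halves."""
    return (2 * r_half) / (1 + r_half)

# A half-test correlation of .80 steps up to roughly .89 for the full test
print(round(spearman_brown(0.80), 2))
```

Note the correction can only raise the estimate (for positive r_half), reflecting that longer tests are more reliable, all else being equal.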
Coefficient/Cronbach’s Alpha
What do the coefficient values mean? Cronbach's alpha
0- no consistency in measurement
1- perfect consistency in measurement
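A sketch of the standard alpha formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores), using a small made-up rating matrix:

```python
from statistics import pvariance  # population variance, as in the usual alpha formula

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items[0])  # number of items
    item_vars = [pvariance([row[i] for row in items]) for i in range(k)]
    total_var = pvariance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: rows = people, columns = 3 items
ratings = [
    [2, 3, 2],
    [4, 4, 5],
    [3, 4, 3],
    [5, 5, 4],
]
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))
```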
What level of reliability is appropriate? Cronbach's alpha
Cronbach’s alpha can be affected by