Chapter 6 - Validity Flashcards

(33 cards)

1
Q

What is validity?

A

A judgment or estimate of how well a test measures what it’s supposed to measure in a particular context

2
Q

What is the relationship between validity and reliability?

A

Reliability is necessary but not sufficient for validity

3
Q

What is validation? Who plays a role in it?

A

The process of gathering and evaluating evidence about validity
This can be done by both test developers and test users

4
Q

What is local validation?

A

When test users aim to determine the validity of a test within their own local settings or conditions, using their own group of test takers

5
Q

What are the 3 main categories of validity (from easiest to hardest to establish)?

A

Content → Criterion-Related → Construct

6
Q

What is Content Validity?

A

How well a test samples behaviors that are representative of the broader set of behaviors that it’s designed to measure

In other words, it measures how well test items/topics adequately represent the content that should be included based on the operational definition being used

7
Q

What is Face Validity?

A

A form of content validity, it is a judgment of how relevant the test items appear to be on their face
This is the simplest form of validity to establish, but some tests are intentionally designed to have low levels of it

8
Q

What is a Test Blueprint?

A

Part of the process of creating content validity, it is a plan regarding the types of information covered by the items, the number of items tapping into each area of coverage, and the organization of the items in the test

9
Q

How do we typically establish content validity?

A

Expert panels: obtaining expert ratings of the degree of item importance and scrutinizing what is missing from the measure
Focus groups: having members of the general population react to the measure

10
Q

What is Criterion-Related Validity?

A

Evaluates the relationship between scores obtained on one test and scores obtained on other tests or measures

11
Q

What is a criterion? What does it need in order to be adequate?

A

A standard against which a test or test score is evaluated
Must be…
1- relevant to the matter at hand
2- valid for the purpose for which it's being used
3- uncontaminated: the criterion cannot itself be part of the predictor

12
Q

What ways can we establish criterion-related validity (in order from easiest to hardest to establish)?

A

1 - Concurrent validity
2 - Predictive validity
3 - Incremental validity

13
Q

What is Concurrent Validity?

A

The degree to which a test score is related to some criterion measure obtained at the SAME time

14
Q

What is Predictive Validity?

A

The degree to which a test score predicts some criterion measure (or outcome) obtained at a FUTURE time

15
Q

What is a Base Rate? How does it influence predictive validity?

A

The extent to which a phenomenon exists in the population
The less frequent it is, the more difficult it would be to show predictive validity

16
Q

What is the Hit Rate when establishing predictive validity? What are the two kinds?

A

The ability of the measure to accurately predict results

Two possibilities…
1- true positive
2- true negative

17
Q

What is the Miss Rate when establishing predictive validity? What are the two kinds?

A

Failure to identify something accurately

Two possibilities…
1- False positive or Type I error: saying that something will happen and then it does not
2- False negative or Type II error: saying that something will not happen and then it does
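
The hit and miss rates above can be sketched with a small confusion-matrix count. The data here are made up purely for illustration, assuming 1 means the test predicted (or the outcome showed) the phenomenon:

```python
# Hypothetical predictions from a screening test vs. actual outcomes (1 = occurred).
predictions = [1, 1, 0, 0, 1, 0, 0, 1]
outcomes    = [1, 0, 0, 0, 1, 1, 0, 1]

tp = sum(p == 1 and o == 1 for p, o in zip(predictions, outcomes))  # true positives
tn = sum(p == 0 and o == 0 for p, o in zip(predictions, outcomes))  # true negatives
fp = sum(p == 1 and o == 0 for p, o in zip(predictions, outcomes))  # false positives (Type I)
fn = sum(p == 0 and o == 1 for p, o in zip(predictions, outcomes))  # false negatives (Type II)

hit_rate = (tp + tn) / len(predictions)   # proportion of accurate predictions
miss_rate = (fp + fn) / len(predictions)  # proportion of inaccurate predictions
```

With these toy numbers the hit rate is 0.75 and the miss rate 0.25; the two always sum to 1.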

18
Q

What is the Validity Coefficient? What is it affected by?

A

A correlation coefficient between test scores and scores on the criterion measure
Affected by restriction or inflation of range

19
Q

What is Incremental Validity?

A

The degree to which an additional predictor explains something about the criterion measures that is not explained by predictors already being used
Essentially saying "this test adds to the prediction of the criterion beyond what the tests already in use provide"

20
Q

What is Construct Validity? How do we acquire evidence for it?

A

The ability of a test to measure a theorized construct. Essentially, does the measure map onto the THEORY the way we would expect it to (as in, do high scorers and low scorers behave as theorized?)

Establishing content and criterion-related validity will also provide evidence for construct validity, but it requires a little bit more than just that

21
Q

What are the different forms of evidence for construct validity? (7 things)

A

1- evidence of homogeneity
2- evidence of changes
3- evidence of retest changes
4- evidence from distinct groups
5- convergent evidence
6- discriminant evidence
7- factor analysis

22
Q

What is evidence of homogeneity?

A

How uniform a test is in measuring a single construct (established using evidence from internal reliability)

Ex: if I believe that my construct is narrow, then my internal consistency should be high
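
One common internal-consistency index used for this kind of evidence is Cronbach's alpha. A minimal sketch with made-up item responses:

```python
from statistics import pvariance  # population variance

# Hypothetical data: five respondents' answers to a three-item scale.
items = [
    [4, 3, 5, 2, 4],  # item 1, one score per respondent
    [4, 2, 5, 3, 4],  # item 2
    [5, 3, 4, 2, 5],  # item 3
]

k = len(items)
totals = [sum(vals) for vals in zip(*items)]  # each respondent's total score

# Cronbach's alpha: higher values indicate a more homogeneous set of items.
alpha = k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))
```

With these toy numbers alpha comes out around .89, which would be consistent with a narrow, homogeneous construct.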

23
Q

What is evidence of changes?

A

How the construct changes over time in the way it’s expected to, established using evidence from test-retest reliability

24
Q

What is evidence of posttest or retest changes?

A

Test scores change as a result of some kind of experience or intervention between pretest and posttest, established using evidence from dynamic assessment

25
Q

What is evidence from distinct groups?

A

Scores on the test vary in a predictable way based on membership in some group

26
Q

What is Convergent Evidence? How is it distinct from concurrent validity?

A

Scores from a test correlate highly, in the predicted direction, with scores on older, more established tests designed to measure the same or a similar construct
This is similar to concurrent validity, except we're looking at correlations with other measures of the same construct rather than with criteria in general

27
Q

What is Discriminant Evidence?

A

Scores from a test have little to no relationship with variables that it should NOT be correlated with
Ex: in most cases, test scores should NOT be correlated with measures of social desirability

28
Q

What is Factor Analysis?

A

A family of statistical techniques that looks at how items correlate with one another and group into factors
A test should only measure ONE common factor unless it's an inventory or is intentionally divided into different domains

29
Q

What is Bias?

A

A factor inherent in a test that systematically prevents accurate, impartial measurement

30
Q

What is Rating Error?

A

A judgment resulting from the intentional or unintentional misuse of a rating scale
Raters may be too lenient, too severe, or reluctant to give ratings at either extreme (i.e., central tendency error)

31
Q

What is the halo effect?

A

The tendency to give a person a higher rating than he or she objectively deserves because of an overall favorable impression of that person

32
Q

What is fairness?

A

The extent to which a test is used in an impartial, just, and equitable way

33
Q

Can a test be unbiased and still be unfair? Why?

A

Yes. Unbiased tests can still be unfair due to random error or aspects of the test's administration/application (rather than aspects of the test itself)