define validity
the degree to which available evidence supports inferences made from scores on selection procedures; how well are you measuring what you’re claiming to measure?
how is validity applied in the context of selection?
we want to know how well a predictor (i.e., test) is related to criteria (i.e., performance)
Discuss the relationship between reliability and validity
reliability is necessary but not sufficient for validity: an unreliable measure cannot be valid, but a reliable measure can still measure the wrong thing. Unreliability in either the predictor or the criterion attenuates (lowers) the observed validity coefficient.
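One standard way to quantify the reliability–validity link is the classical correction for attenuation, which estimates what the validity coefficient would be if both measures were perfectly reliable. A sketch with hypothetical reliability values:

```python
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Estimate the predictor-criterion correlation that would be
    observed with perfectly reliable measures (classical
    correction-for-attenuation formula)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical: observed validity .30, test reliability .80,
# criterion reliability .60 -- all numbers illustrative.
print(round(correct_for_attenuation(0.30, 0.80, 0.60), 2))
```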
list sources of validity evidence
content
response processes
internal structure
relations w/ other variables
decision consequences
content validity: describe what it is
Content validity is demonstrated to the extent that the content of the assessment process reflects the important performance domains of the job. Validity is thus built into the assessment procedures.
Content validation methods focus on content relevance and content representation. Content relevance is the extent to which the tasks of the test or assessment are relevant to the target domain. Representativeness refers to the extent to which the test items are proportional to the facets of the domain. Content relevance and representativeness are commonly assessed using subject matter expert ratings.
many companies rely heavily on content validation, largely because criterion-related validity studies are often impractical to conduct.
describe the steps in content validity
2. examination plan
challenges with conducting criterion related validity studies
sample size: small sample sizes are often due to many job classifications having only a small number of employees
range restriction: the selection process has already restricted the org’s workforce to a certain level of performance (the scores of hired people don’t reflect scores of entire applicant group)
criterion measures: good criterion measures are often not available or just plain bad
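The range-restriction problem above can be adjusted for statistically. A common approach is the Thorndike Case II correction, which scales the observed correlation up using the ratio of the applicant-pool standard deviation to the hired-sample standard deviation. A sketch with hypothetical values:

```python
import math

def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction: estimate the validity coefficient
    in the full applicant pool from the correlation observed in the
    range-restricted (hired) sample."""
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# Hypothetical: r = .25 among hires, applicant SD twice the hire SD.
print(round(correct_for_range_restriction(0.25, 10.0, 5.0), 2))
```

The corrected value is always at least as large as the restricted one, reflecting that hiring on the test compressed the score range.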
advantages of content validity
disadvantages of content validity
content validity: direct vs indirect measures
for content validity, there is a hierarchy of assessment evidence. we can think of this evidence as being higher when direct methods are used and lower when indirect methods are used. For example, in measuring keyboard ability, a highly content valid keyboarding test would replicate the most important job tasks (text entry and data entry). The test would be a direct measure of keyboarding ability. Two indirect measures or indicators of keyboarding ability are completion of a high school keyboarding course, and having keyboarding work experience. The indirect measures do not inform us about the current keyboarding proficiency of the subject.
List and describe the guidelines for interpreting correlations according to the U.S. Dept. of Labor (note that setting an arbitrary value of a validity coefficient for determining whether a selection procedure is useful is not a wise practice).
above .35 = very beneficial
.21-.35 = likely to be useful
.11-.20 = depends on circumstance
less than .11 = unlikely to be useful
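The guideline bands above can be sketched as a simple lookup (the cutoffs come from the list; treating them as hard thresholds is exactly the arbitrary practice the note warns against):

```python
def interpret_validity(r):
    """Map a validity coefficient to the U.S. Dept. of Labor guideline
    band -- a rough heuristic, not a substitute for judgment."""
    if r > 0.35:
        return "very beneficial"
    if r >= 0.21:
        return "likely to be useful"
    if r >= 0.11:
        return "depends on circumstances"
    return "unlikely to be useful"

print(interpret_validity(0.48))  # e.g., a job knowledge test
```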
validity of work sample tests
.33
Roth, Bobko, & McFarland, 2005
Correlated w/ supervisory ratings of job performance.
validity of structured interviews
.51
Not sure where this is from.
validity of unstructured interviews
.38
Not sure where this is from.
validity of job knowledge tests
.48; job knowledge tests with high job specificity have higher levels of criterion-related validity. Job knowledge tests cannot be used for entry-level jobs: they are not appropriate for jobs where no prior experience or job-specific training is required.
Not sure where this is from.
validity of behavioral consistency T&E methods
.45; This method is based on the principle that the best predictor of future performance is past performance. Applicants describe their past achievements and the achievements are rated by subject matter experts.
McDaniel et al 1988
validity of self report T&E methods
.15-.20; few studies available
Not sure where this is from.
validity of years of experience
.18; Years of experience are a very indirect measure of ability.
Not sure where this is from.
assessment tests vs. GMA
Review of the meta-analysis results, and comparison to the list of direct and indirect assessment methods, leads to the conclusion that, except for general ability tests, the predictive value of an assessment method reflects how directly it assesses applicant competencies.
the most direct assessment methods are?
work sample tests, job knowledge tests, and structured interviews
how does content validity relate to criterion validity ?
The data indicate that direct assessment methods have higher levels of criterion-related validity than indirect assessment methods. This is evidence that the stronger the content validity evidence supporting an assessment method, the more likely it is that the method will have a high level of criterion-related validity. In the author's view, the meta-analytic criterion-related validity data provide support for the content validation model.
LOOK INTO/FIND CITATION
how to minimize adverse impact while maximizing validity
assess full range of KSAs
make sure the predictor test's verbal requirements don't exceed the verbal requirements of the job
both of these things can be done with job knowledge tests
Research designs for validity studies
concurrent: Test given to group of employees already on job, then correlated with measure of employees’ performance
Weaker than predictive designs because incumbents' performance scores are homogeneous (range restriction), but more practical.
Predictive:
Test administered to group of job applicants who are going to be hired and then compared with future measure of job performance
validity generalization
Research indicates that a test valid for a job in one organization is also valid for the same job in another organization
Two building blocks for validity generalization