When selecting the measurement process, consider:
The Evaluation Process (steps)
Purpose of Evaluating Talent Development Solutions
Benefits of Evaluating Talent Development Solutions
Ralph Tyler’s Goal Attainment Method
Tyler’s design process incorporates evaluation based on objectives and is primarily used for curriculum design.
Tyler’s model poses four questions:
- What educational purposes should be attained?
- What experiences are likely to attain these purposes?
- How can these experiences be effectively organized?
- How can we determine whether the purposes are being attained?
Formative Evaluation
Formative evaluation occurs throughout the design of any talent development solution.
Summative Evaluation
Summative evaluation occurs after a talent development solution has been delivered.
Program Evaluation
− Program evaluation is the systematic assessment of program results and, if possible, the assessment of how the program caused them.
− Results may occur at several levels: reaction to the program, what was learned, what was transferred to the job, and the impact on the organization.
Learning Transfer Evaluation
Learning transfer evaluation measures the learner’s ability to use what they’ve learned on the job.
The Brinkerhoff Success Case Method (SCM)
− The SCM involves identifying the most and least successful cases in a program and examining them in detail.
− Key steps in this method are:
- focusing and planning a success case study
- creating an “impact model” that defines what success should look like
- designing and implementing a survey to search for best and worst cases
- interviewing and documenting success cases
- communicating findings, conclusions, and recommendations.
Balanced Scorecard Approach
− The balanced scorecard approach is a way for organizations to evaluate effectiveness with more than financial measures.
− This model consists of measuring effectiveness from four perspectives:
- The customer perspective
- The innovation and learning perspective
- The internal business perspective
- The financial perspective
Steps to create data collection tools
To develop evaluation instruments, talent development professionals should determine whether the instruments are valid and reliable. Key forms of validity and reliability include:
Construct validity
Construct validity evaluates whether a measurement tool really represents the concept it is intended to measure. It’s central to establishing the overall validity of a method.
(A construct refers to a concept or characteristic that can’t be directly observed, but can be measured by observing other indicators that are associated with it.)
Content validity
Content validity assesses whether a test is representative of all aspects of the construct.
To produce valid results, the content of a test, survey, or measurement method must cover all relevant parts of the subject it aims to measure. If some aspects are missing from the measurement (or if irrelevant aspects are included), the validity is threatened.
Criterion Validity
Criterion validity evaluates how well a test can predict a concrete outcome, or how well the results of your test approximate the results of another test.
Concurrent Validity
Concurrent validity is the extent to which an instrument agrees with the results of other instruments administered at approximately the same time to measure the same characteristics.
Predictive Validity
Predictive validity is the extent to which an instrument can predict future behaviors or results.
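Both concurrent and criterion validity are typically estimated by correlating scores from the instrument under study with scores from another instrument or outcome. A minimal sketch, using hypothetical learner scores (the instrument names and values are invented for illustration):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two paired lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: a new assessment vs. an established one,
# taken by the same five learners at about the same time.
new_test = [72, 85, 64, 90, 78]
established = [70, 88, 60, 93, 75]

# A correlation near 1.0 suggests strong concurrent validity.
print(round(pearson_r(new_test, established), 2))
```

The same correlation, computed against a later outcome (e.g., subsequent job performance ratings) instead of a concurrent instrument, would estimate predictive validity.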
Split-half reliability
Split-half reliability is a way to test reliability in which one test is split into two shorter halves; the scores on the two halves are then correlated, and a high correlation indicates internal consistency.
Test-retest check of reliability
Test–retest check of reliability is an approach in which the same test is administered twice to the same group of people. The scores are then compared.
(Timing is a critical issue in a test–retest check: if the period between tests is too short, a participant could simply remember the questions.)
Reliability
Reliability is the ability of the same measurement to produce consistent results over time.
Considerations when creating surveys, questionnaires, or interview evaluation instruments
Types of Data Collection Tools
− Surveys and questionnaires
− Analytics from technology platforms
− Examinations, assessments, and tests
− Self-evaluations
− Simulations and observations
− Archival or extant data
Steps to developing an evaluation strategy
− Know how to design research methods.
− Determine which results to measure and how to measure.
− Identify the business drivers and performance needs.
− Choose the evaluation methods.
Analysis Methods
− Return on investment (ROI) analysis
− Cost-benefit analysis
− Benefit-cost ratio
− Utility analysis
− Forecasting
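The ROI and benefit-cost ratio calculations reduce to simple formulas: BCR divides program benefits by program costs, and ROI expresses net benefits as a percentage of costs. A minimal sketch with hypothetical dollar figures:

```python
def benefit_cost_ratio(benefits, costs):
    """BCR = program benefits / program costs."""
    return benefits / costs

def roi_percent(benefits, costs):
    """ROI (%) = (net benefits / program costs) * 100."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $150,000 in measured benefits against $100,000 in costs.
benefits, costs = 150_000, 100_000
print(benefit_cost_ratio(benefits, costs))  # 1.5
print(roi_percent(benefits, costs))         # 50.0
```

A BCR above 1 (or an ROI above 0%) indicates the program returned more than it cost; the hard part in practice is isolating and monetizing the benefits, not the arithmetic.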