How do researchers design studies to prevent internal validity threats?
The six threats to the one-group, pretest/posttest design can be ruled out if the experimenter conducts the study with a comparison group (using either a posttest-only design or a pretest/posttest design).
How do you interrogate an experiment with a null result to decide whether the study design obscured an effect or if there is truly no relationship?
Obscuring factors can be sorted into two categories:
1. Not enough between-groups difference (from weak manipulations, insensitive measures, ceiling or floor effects, or a design confound acting in reverse).
2. Too much within-groups variance (from measurement error, irrelevant individual differences, or situation noise).
Too much within-groups variance can be counteracted by using multiple measurements, more precise measurements, within-groups designs, large samples, and carefully controlled experimental environments.
What are the 6 threats to internal validity that are especially relevant to the one-group, pretest/posttest design?
Maturation, history, regression to the mean, attrition, testing, and instrumentation threats.
What are the 3 potential internal validity threats to any experiment?
Observer bias, demand characteristics, and placebo effects.
What is the really bad experiment?
The one-group, pretest/posttest design: it is like a pretest/posttest design but has no comparison group.
What are the subgroups of selection threats?
Selection-history threats and selection-attrition threats.