Hypothesis Testing
A statistical method that uses sample data to evaluate a hypothesis about a population
Hypothesis
A specific testable prediction about the association between two variables
What is the general goal of hypothesis testing?
To rule out chance (sampling error) as a plausible explanation for the results from a research study
But keep in mind that it cannot actually rule chance out entirely; it can only show that chance is an unlikely explanation
Hypothesis testing uses probability to support one of two possible explanations for our findings
Hypothesis Test: Step 1
State your hypothesis about the population
* The null hypothesis, H0, predicts that the independent variable had no effect on the dependent variable
* The alternative hypothesis, H1, predicts that the independent variable did have an effect on the dependent variable
The Hypothesis Test: Step 2
Set the criterion: decide how unlikely a result must be, assuming no true effect, before we conclude an effect exists
* The α (alpha) level establishes a criterion, or “cut-off,” for deciding whether to reject the null hypothesis (typically α = .05)
* The critical region consists of outcomes very unlikely to occur (typically less than a 5% chance) if the null hypothesis is true
a (alpha) level
The probability value used to define which sample outcomes are considered “very unlikely” if the null hypothesis is true (typically .05)
Critical region
consists of outcomes very unlikely to occur if the null hypothesis is true
-Defined by associations that are very unlikely to occur (typically less than a 5% chance) if no effect exists
The Hypothesis Test: Step 3
Test Statistic
Forms a ratio comparing the obtained difference between the sample mean and the population mean against the amount of difference we would expect by chance alone (the standard error of M)
Ex. the z-score: z = (M − μ) / (σ/√n)
The Hypothesis Test: Step 4
Compare data with the hypothesis predictions
-If the test statistic results are in the critical region, we conclude the difference is significant (an effect exists) and we reject the null hypothesis
-If the test statistic is not in the critical region, conclude that the difference is not significant (any difference is just due to chance), we fail to reject the null hypothesis
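The four steps can be run end to end as a z-test. A minimal sketch using only Python's standard library; all of the numbers (population μ = 100, σ = 15, a treated sample of n = 25 with mean M = 106) are hypothetical:

```python
from statistics import NormalDist

# Hypothetical population and sample values (illustrative only)
mu = 100      # population mean under the null hypothesis
sigma = 15    # population standard deviation
n = 25        # sample size
M = 106       # sample mean after treatment

# Step 2: alpha = .05, two-tailed, so the critical region starts at z = +/-1.96
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

# Step 3: test statistic = obtained difference / standard error of M
se = sigma / n ** 0.5
z = (M - mu) / se

# Step 4: compare the test statistic with the critical region
if abs(z) > z_crit:
    decision = "reject H0"
else:
    decision = "fail to reject H0"
```

Here z = 6 / 3 = 2.0, which falls beyond 1.96, so the sketch rejects the null hypothesis.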
Errors in Hypothesis Tests
Type 1 Errors
Occur when the sample data indicate an effect when no effect actually exists; rejecting the null hypothesis when the null is true
* Caused by unusual, unrepresentative samples falling in the critical region even though no true effect exists
* Hypothesis tests are structured to make Type 1 errors unlikely
Type 2 Errors
Occur when the hypothesis test does not indicate an effect but in reality an effect does exist
* We fail to reject the null hypothesis even though it is actually false
* More likely with a small treatment effect or poor study design (sample size too small)
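The claim that hypothesis tests make Type 1 errors unlikely can be checked by simulation. A rough sketch with hypothetical population values: sample repeatedly from a population with no effect and count how often the test lands in the critical region anyway. The false-alarm rate should hover near α:

```python
import random
from statistics import NormalDist

random.seed(0)
z_crit = NormalDist().inv_cdf(0.975)  # two-tailed critical value for alpha = .05

# Draw many samples from a population with NO treatment effect
# and count how often the test (wrongly) rejects the null hypothesis
mu, sigma, n, trials = 100, 15, 25, 20000
false_alarms = 0
for _ in range(trials):
    M = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    z = (M - mu) / (sigma / n ** 0.5)
    if abs(z) > z_crit:
        false_alarms += 1

type1_rate = false_alarms / trials  # should come out near alpha = .05
```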
Directional test (or one-tailed test)
A hypothesis test in which the hypotheses specify the expected direction of the effect; the critical region is located entirely in one tail of the distribution
P-values (probability values)
P-value definition
The probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true; p-values do NOT tell us the probability that we’re making a Type 1 error
Interpreting p-value correctly
Ex. Imagine we’re testing a vaccine and our hypothesis test yields p = .05. Correct: if the vaccine had no effect, there would be a 5% chance of obtaining results at least this extreme. Incorrect: there is a 5% chance the vaccine doesn’t work.
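Given a test statistic, a two-tailed p-value can be computed with the standard normal CDF. A small sketch with a hypothetical z value:

```python
from statistics import NormalDist

# Hypothetical test statistic (e.g., from the vaccine example)
z = 1.96

# Two-tailed p-value: the probability of a result at least this extreme,
# in either direction, IF the null hypothesis is true
p = 2 * (1 - NormalDist().cdf(abs(z)))
```

A z of exactly 1.96 gives p very close to .05, which is why 1.96 is the two-tailed cut-off at α = .05.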
What does a hypothesis test evaluate?
The statistical significance of the results from a research study
What influences a hypothesis test?
The size of the treatment effect and the size of the sample… even a very small effect can be statistically significant if observed in a very large sample (which would have a very small standard error)
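The role of sample size shows up in the standard error, the denominator of the test statistic. A small sketch with hypothetical numbers, holding a tiny 1-point effect fixed while the sample grows:

```python
def z_score(M, mu, sigma, n):
    """Test statistic: obtained difference over the standard error of M."""
    return (M - mu) / (sigma / n ** 0.5)

# Same 1-point effect, two different sample sizes (hypothetical values)
z_small_n = z_score(101, 100, 15, 25)     # small n -> large standard error
z_large_n = z_score(101, 100, 15, 2500)   # large n -> tiny standard error

# Only the large sample crosses the two-tailed .05 cutoff of 1.96
```

With n = 25 the 1-point effect gives z ≈ 0.33 (not significant); with n = 2500 the same effect gives z ≈ 3.33 (significant).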
Effect Size
-Measure of the absolute magnitude of an effect, independent of sample size
-Hypothesis tests should be accompanied by effect size
Cohen’s d
d = (M − μ) / σ
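The formula is a one-liner; a minimal computation with hypothetical values:

```python
# Cohen's d: treatment effect measured in standard-deviation units,
# independent of sample size (hypothetical values)
M, mu, sigma = 106, 100, 15
d = (M - mu) / sigma
```

A 6-point effect against a standard deviation of 15 gives d = 0.4, regardless of how many people were in the sample.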
Cohen’s d effect sizes
Conventional benchmarks: d = 0.2 is a small effect, d = 0.5 is a medium effect, d = 0.8 is a large effect
Power of a hypothesis test
The probability that the test will correctly reject a false null hypothesis, i.e., detect a real effect; power = 1 − β
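Power can be computed directly for a simple one-tailed z-test. A sketch under hypothetical assumptions (a true effect of 6 points, σ = 15, n = 25, α = .05):

```python
from statistics import NormalDist

# Hypothetical values: null mean, true mean if the effect is real,
# population sd, sample size, and alpha level
mu0, mu_true, sigma, n, alpha = 100, 106, 15, 25, 0.05

se = sigma / n ** 0.5
# Boundary of the one-tailed critical region, in sample-mean units
M_crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se
# Power: probability the sample mean lands past the boundary
# when the treatment effect is real
power = 1 - NormalDist(mu_true, se).cdf(M_crit)
```

With these numbers power comes out around .64: if the effect is real, roughly 64% of studies this size would detect it.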