Quantitative designs can be used for four key purposes: (1) describing a phenomenon in detail, (2) explaining relationships and/or differences among variables, (3) predicting relationships and/or differences among variables, and (4) examining causality.
The researcher can choose from four main types of designs: descriptive, correlational, quasi-experimental, and experimental.
These types of designs are broadly categorized as either experimental (to determine causality) or nonexperimental (to describe, explain, or predict).
Differences between experimental and nonexperimental quantitative designs
Experimental: The researcher plays an active role by manipulating the independent variable. The IV is the intervention, or “treatment,” that the researcher wants to test in a specific group of people in order to determine the effect that the IV has on the outcome of interest, known as the dependent variable (DV).
- The two main types of experimental design are true experimental and quasi-experimental.
- Quasi-experimental designs are similar to true experimental designs in that they also involve manipulation of the IV, but they lack either randomization or a control group.
Nonexperimental: The researcher does not manipulate the IV. The researcher “observes” how the variables of interest occur naturally, without trying to change how the conditions normally exist.
Retrospective designs
What are they used for?
Cross-sectional Designs
Cohort comparison designs
Repeated Measures Design
Longitudinal designs
Prospective designs
Panel Design
Trend study
Follow-up study
Crossover designs
Causality
The relationship between a cause and its effect.
The cause variable has the ability or power to produce a specific effect or outcome.
Cause variable: IV, Effect: DV
Probability
Control and manipulation
Confounding and extraneous variables
Bias
Systematic error in selection of participants, measurement of variables, and/or analysis of data that distorts the true relationship between IV and DV.
Randomization
Random sampling
Random assignment
Between-groups and within-groups designs
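The distinction between random sampling (who gets into the study) and random assignment (which group a participant lands in) can be sketched with Python's standard library; the patient identifiers and group sizes below are purely illustrative.

```python
import random

random.seed(42)  # fixed seed so this illustration is reproducible

# Hypothetical sampling frame of 100 eligible patients
population = [f"patient_{i}" for i in range(100)]

# Random sampling: every member of the population has an equal
# chance of being selected into the study sample.
sample = random.sample(population, 20)

# Random assignment: each sampled participant has an equal chance
# of being placed in the experimental or the control group.
shuffled = random.sample(sample, len(sample))
experimental_group = shuffled[:10]
control_group = shuffled[10:]

print(len(experimental_group), len(control_group))
```

Random sampling supports external validity (generalizing to the population), while random assignment supports internal validity (making the groups comparable before the IV is introduced).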
Study validity
Ability to accept results as logical, reasonable, and justifiable based on the evidence presented.
- internal and external
Internal validity
The degree to which one can conclude that the independent variable produced changes in the dependent variable.
Statistical conclusion validity
The degree that the results of the statistical analysis reflect the true relationship between the independent and dependent variables.
Construct validity
The degree to which the instruments used accurately measure the theoretical concepts they are intended to measure; validity is threatened when they do not.
External validity
The degree to which the results of the study can be generalized to other participants, settings, and times.
Threats to statistical conclusion validity
Low statistical power
- Low power is often due to a small sample size, which often happens in nursing research.
- A larger sample size increases the likelihood that a statistical test will be able to detect a small difference or relationship, reject the null hypothesis, and allow the researcher to accept that a true relationship or difference does exist.
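The link between sample size and power can be shown with a small Monte Carlo sketch: simulate many two-group studies with a medium effect (Cohen's d = 0.5, an assumed value for illustration) and count how often a simple z-style test rejects the null hypothesis.

```python
import math
import random

def estimated_power(n_per_group, effect_size=0.5, sims=2000):
    """Monte Carlo estimate of power for a two-group mean comparison.

    Simulates `sims` studies; in each, the treatment group's true mean
    is shifted by `effect_size` standard deviations. Returns the
    proportion of studies in which the null hypothesis is rejected.
    """
    crit = 1.96  # two-sided critical z for alpha = .05
    rejections = 0
    for _ in range(sims):
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        treatment = [random.gauss(effect_size, 1) for _ in range(n_per_group)]
        diff = sum(treatment) / n_per_group - sum(control) / n_per_group
        se = math.sqrt(2 / n_per_group)  # SD is known to be 1 in this simulation
        if abs(diff / se) > crit:
            rejections += 1
    return rejections / sims

random.seed(1)
print(estimated_power(20))   # small sample: the true effect is often missed
print(estimated_power(100))  # larger sample: the same effect is usually detected
```

With only 20 participants per group, most simulated studies fail to detect the real effect (a Type II error); raising the sample size to 100 per group makes detection far more likely even though the effect itself is unchanged.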
Low reliability of the measures
- Instruments that are not reliable interfere with researchers’ abilities to draw accurate conclusions about relationships between the IV and DV.
- Assess whether self-report instruments achieved an internal consistency reliability of .70 or higher; test–retest reliability of .80 or higher; and, if more than one data collector was used, interrater reliability of at least .90.
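Internal consistency is commonly reported as Cronbach's alpha, which can be computed directly from item-level scores. The sketch below uses a hypothetical 3-item scale with made-up responses; the function itself is the standard alpha formula.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency reliability.

    item_scores: one list of respondent scores per item
    (rows = items, columns = respondents).
    """
    k = len(item_scores)                 # number of items
    n = len(item_scores[0])              # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 3-item scale answered by five respondents
items = [
    [2, 4, 1, 5, 3],  # item 1
    [3, 4, 2, 5, 3],  # item 2
    [2, 5, 1, 4, 3],  # item 3
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # comfortably above the .70 benchmark
```

An alpha at or above .70 suggests the items hang together well enough to treat their sum as a single measure; values well below that threaten statistical conclusion validity because the DV is being measured with substantial noise.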
Lack of reliability of treatment implementation
- This can occur if different researchers or their assistants implement the treatment (IV) differently for different participants, or if the same researcher is inconsistent in implementing the treatment from one time to another.