The proposed effect: the outcome variable, a measure that is not manipulated in the experiment
The p-value: the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true (commonly misstated as the probability that the null hypothesis is true). We compute the statistic and ask how likely it is to take that value by chance alone
Common misconceptions: a significant result does not mean the effect is important, a non-significant result does not mean the null hypothesis is true, and a significant result does not prove the null hypothesis is false
Effect size is a quantitative measure of the magnitude of an experimental effect; the larger the effect size, the stronger the relationship between the two variables, and studies can be compared on the basis of effect size
Cohen's d: mean 1 minus mean 2, divided by the standard deviation
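This formula can be sketched in Python. A minimal illustration using a pooled standard deviation, which is one common choice of standardiser (an assumption here; some variants divide by the control group's SD instead):

```python
import math

def cohens_d(group1, group2):
    """Effect size: difference in means divided by the pooled SD.

    Pooling the SD is an assumption of this sketch; other variants
    use the control group's SD instead.
    """
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variance
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By Cohen's conventional benchmarks, d of about 0.2 is a small effect, 0.5 medium and 0.8 large.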
Normal distribution, which can be described by its mean (central tendency) and SD (dispersion)
The mean can be a misleading measure of central tendency in skewed distributions, as it can be greatly influenced by extreme scores
The median is unaffected by extreme scores and can be used with ordinal, interval and ratio data
The mode is only used with nominal data, is greatly subject to sampling fluctuations, and many distributions have more than one mode
The mean is greater than the median, which is greater than the mode (right/positively skewed)
The mode is greater than the median, which is greater than the mean (left/negatively skewed)
Sample size: with a massive sample, normality tests can come out significant even when the data look normally distributed on a plot (usually trust the visual plot)
Lack of symmetry (skew) and pointiness (kurtosis)
It tells you how much of the data lies in the tails of the distribution and helps identify when outliers may be present in the data
Parametric tests assume specific distributions, like the normal distribution, and require adherence to certain statistical assumptions, such as homogeneity of variances and independence of observations.
They tend to be more powerful when these assumptions are met, making them suitable for analyzing data that closely aligns with their requirements, such as interval or ratio data.
On the other hand, non-parametric tests make fewer distributional assumptions, making them robust and applicable to a wider range of data types, including ordinal or skewed data.
While non-parametric tests are generally less powerful than their parametric counterparts when assumptions are met, they provide reliable results in situations where assumptions are violated or when dealing with non-normally distributed data. These differences in assumptions and robustness make each type of test valuable in different research contexts.
Spearman’s rho or Kendall’s tau
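A minimal sketch of Spearman's rho using the classic rank-difference formula (this simplified version assumes no tied ranks; a real analysis would use a library routine such as scipy.stats.spearmanr, which also handles ties):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: 1 - 6*sum(d^2) / (n*(n^2 - 1)).

    Simplified sketch: assumes all values are distinct (no tied ranks).
    """
    def ranks(values):
        # map each value to its rank, 1 = smallest
        return {v: i + 1 for i, v in enumerate(sorted(values))}
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A perfectly monotone increasing relationship gives rho = 1, a perfectly monotone decreasing one gives rho = -1.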
Skewness values between -1 and +1 are generally fine, but below -1 the distribution is negatively skewed and above +1 it is positively skewed
Normal distribution
Kurtosis values between -2 and +2 are generally fine, but below -2 the distribution is platykurtic (flat) and above +2 it is leptokurtic (peaked)
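The two shape checks above can be sketched together using population-moment formulas (the -1/+1 and -2/+2 cut-offs are the rough rules of thumb from these notes, not universal standards):

```python
def skewness(data):
    """Population skewness: third central moment divided by SD cubed."""
    n = len(data)
    m = sum(data) / n
    m2 = sum((x - m) ** 2 for x in data) / n  # population variance
    m3 = sum((x - m) ** 3 for x in data) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(data):
    """Population excess kurtosis: fourth central moment / variance^2, minus 3.

    Zero for a normal distribution; negative values are platykurtic,
    positive values leptokurtic.
    """
    n = len(data)
    m = sum(data) / n
    m2 = sum((x - m) ** 2 for x in data) / n
    m4 = sum((x - m) ** 4 for x in data) / n
    return m4 / m2 ** 2 - 3
```

For example, the symmetric set [1, 2, 3, 4, 5] has skewness 0, while [1, 1, 1, 1, 10] is strongly positively skewed.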
RCTs (randomised controlled trials), to even out confounding variables between groups
The average squared deviation of each score from the mean (the SD squared)
The square root of the variance
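The two definitions above in code (population versions, dividing by n; sample versions would divide by n - 1):

```python
import math

def variance(data):
    """Population variance: average squared deviation from the mean."""
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

def std_dev(data):
    """Standard deviation: the square root of the variance."""
    return math.sqrt(variance(data))
```

For the scores [2, 4, 4, 4, 5, 5, 7, 9] the mean is 5, the variance 4 and the SD 2.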
States that the sampling distribution of the mean approaches a normal distribution as sample size increases, which is especially the case for sample sizes over 30
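A small simulation sketch of the theorem: sample means drawn from a strongly skewed exponential distribution still centre on the population mean, with spread shrinking as roughly SD/sqrt(n) (the choice of distribution, sample size and repetition count here are illustrative assumptions):

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

def sample_means(n, reps=2000):
    """Means of `reps` independent samples of size n from an Exp(1) distribution."""
    return [statistics.mean(random.expovariate(1.0) for _ in range(n))
            for _ in range(reps)]

means_30 = sample_means(30)
# Exp(1) has population mean 1 and SD 1, so these sample means should
# cluster near 1 with SD roughly 1 / sqrt(30), about 0.18, and look far
# more symmetric than the raw exponential draws themselves.
```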
Type I error: a false positive, i.e. concluding there is a significant effect when there isn't = alpha
Type II error: a false negative, i.e. concluding there is no significant effect when there actually is one, often because much variance is unaccounted for by the model = beta