Midterm Flashcards

(63 cards)

1
Q

Empiricism

A

Your knowledge of the world is based on your sensory experiences, which can be studied through systematic observation.

2
Q

Rationalism

A

the view that regards reason as the chief source and test of knowledge

3
Q

falsifiability

A

the inherent possibility that a scientific theory or hypothesis can be proven wrong through observation or experimentation.

4
Q

The main assumptions of the physics model and how it shaped modern psychological science.

A

Sometimes known as positivism; has 8 defining principles. Overall: the world exists independently of my senses and will still exist when I am not here; a REALIST ONTOLOGY.

5
Q

Major psychological tensions

A

Free will (vs. the view that we are governed by forces outside our control); relativism (the view that people's basic psychological processes vary over time and space).

6
Q

A scientific attitude is one where…

A

…research is carried out systematically, skeptically and ethically.

7
Q

Rationalism

A

knowledge through logic and reasoning

8
Q

Social Constructionism

A

The view that social properties are created through interactions between people rather than having a separate existence; individual-focused, associated with qualitative research.

9
Q

Indigenous Psychology

A

A cultural challenge; values the individual; psychology by, from and for the local culture, e.g. Kaupapa Māori research, Te Whare Tapa Whā.

10
Q

Research integrity should reflect…

A

…honesty (in all aspects of research), transparency (open communication and disclosure), accountability (individual and collective responsibility), care and respect (for everyone and everything involved), rigor (appropriate methods and standards)

11
Q

Researcher degrees of freedom?

A

The flexible choices researchers make when collecting and analyzing data, e.g. continuing to collect and analyze data until the result is statistically significant.

12
Q

False positive psychology

A

Undisclosed flexibility in data collection and analysis allows presenting anything as statistically significant.

13
Q

Why is research integrity so important in psychological research?

A

You are more likely to find a false positive (Type I error) than a false negative (Type II error). Poor integrity undermines trust in research; such work also rarely gets published, deters researchers from conducting or attempting research, and wastes resources.

14
Q

Mātauranga Māori, Tikanga

A

The unique Māori way of viewing the world, encompassing both traditional knowledge (mātauranga) and culture/customs (tikanga)

15
Q

Mātauranga Māori concepts

A

Collective possession, collective benefits, treasure, intergenerational connections, guardianship, spiritual, tikanga, verbal assurances.
- Mana of the mātauranga: a tradition of oral culture, not to be picked up and repeated by pen and paper.

16
Q

Te Whare Tapa Wha

A

The holistic Māori approach to health and health care; its four dimensions are family/social (taha whānau), mental/emotional (taha hinengaro), physical (taha tinana), and spiritual (taha wairua).

17
Q

Where should the end of your ethical research lead?

A

Thinking about its use to Māori.

18
Q

Psychometrics

A

The science of developing and evaluating reliable and valid measures of unobservable variables

19
Q

Classical Test Theory (CTT) purpose

A

Providing a theoretical foundation for reliability and validity.

20
Q

Classical Test Theory (CTT) formula

A

X (observed score) = T (true score: the ideal, error-free reflection of what we want to measure) + E (error: the combination of influences that distort our measure)

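The X = T + E decomposition can be checked with a small simulation (a sketch, not course material; the means and SDs below are arbitrary). With purely random error, observed-score variance is approximately true-score variance plus error variance, and reliability is the ratio var(T)/var(X):

```python
import random

random.seed(42)

# Classical Test Theory: X (observed) = T (true score) + E (random error)
n = 100_000
true_scores = [random.gauss(50, 10) for _ in range(n)]   # T: what we want to measure
errors = [random.gauss(0, 5) for _ in range(n)]          # E: chance fluctuations
observed = [t + e for t, e in zip(true_scores, errors)]  # X: what we actually record

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

var_t, var_e, var_x = variance(true_scores), variance(errors), variance(observed)
reliability = var_t / var_x  # share of observed variance that is true-score variance

print(f"var(X) = {var_x:.1f}, var(T) + var(E) = {var_t + var_e:.1f}")
print(f"reliability = {reliability:.2f}")  # expected near 100 / (100 + 25) = 0.80
```

Because the error is random (card 21), it lowers reliability; a systematic error (card 22) would instead shift every score and threaten validity.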
21
Q

Random error

A

Fluctuations due to chance – threatens reliability.

22
Q

Systematic Error

A

Consistent bias in one direction – threatens validity.

23
Q

Reliability types

A

Test-retest, inter-rater, internal reliability

24
Q

Test-retest

A

The consistency of a measure over time: administering the same test to the same individuals at different times should produce highly correlated results. CTT logic: true scores should remain consistent over time.

25
Inter-rater
The consistency of measurements when different observers or raters are involved – CTT logic – different raters should reach similar conclusions if observing the same true score.
26
Internal reliability
The consistency of items within a test: do the items hang together in measuring the same thing? Assessed with reliability coefficients, e.g. Cronbach's α (alpha).
27
Convergent Validity
Whether measures of the same construct correlate as expected.
28
Discriminant validity
Whether a measure is distinct from, and not related to, measures of different constructs; we want to make sure we are measuring what we indeed intend to measure.
29
Latent Variables
Constructs that are hidden and must be inferred from observed indicators; the basis on which we make construct validity claims.
30
Basic Elements of path diagrams
Latent variable (the unobservable construct, T) > observed variables (scores on each indicator, aka items, X) > arrows show the direction of influence from cause to effect > e1, e2 (error terms, representing everything else affecting each indicator).
31
Scales
Items share a common cause; designed to capture the intensity or degree of a single underlying concept, sometimes assigning different weights to items based on perceived importance or difficulty.
32
Indexes
Items determine an outcome: combine multiple indicators measuring different aspects of a broader concept, assigning equal weight to each indicator.
33
A higher alpha indicates...
...closer to 1 suggests greater reliability, meaning the items are consistently measuring the same concept; > .80 is a general cut-off.
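Cronbach's alpha can be computed straight from its formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch in Python (the item scores are invented for illustration):

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, one per item, each holding one score per respondent."""
    k = len(items)
    totals = [sum(row) for row in zip(*items)]      # each respondent's total score
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three hypothetical 5-point items that tend to move together
item1 = [5, 4, 4, 2, 1, 3]
item2 = [5, 5, 3, 2, 2, 3]
item3 = [4, 4, 4, 1, 2, 2]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # -> 0.94
```

Items that covary strongly push alpha toward 1; uncorrelated items push it toward 0.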
34
Tau-equivalence
The assumption that each item reflects the true score to the same degree (equal factor loadings); Cronbach's alpha relies on this assumption.
35
McDonald's ω (omega)
A more sophisticated alternative to alpha, based on factor-analysis principles; does not assume tau-equivalence.
36
KR-20
a measure of internal-consistency reliability for tests whose items are scored dichotomously (e.g. true/false)
37
Probability Sampling
Every population member has a known, non-zero chance of selection (an equal chance in simple random sampling). Allows statistical inference to a known population; the gold standard for external validity. Increased generalisability.
38
Types of probability sampling
Simple random sampling: with a sufficiently large n the sample will be unbiased; it will not perfectly replicate the population, but there is no systematic error or bias. Stratified sampling: divide the population into meaningful subgroups and take a random sample from each group; the chance of being selected depends partly on which stratum a person is in. (Quota sampling, identifying subgroups and filling a set quota from each, looks similar but is non-probability, since selection within each quota is not random.)
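Simple random and stratified sampling can be sketched in a few lines (the population and strata here are made up for illustration):

```python
import random

random.seed(1)

# Hypothetical population: 700 "urban" and 300 "rural" members
population = [("urban", i) for i in range(700)] + [("rural", i) for i in range(300)]

# Simple random sampling: every member has an equal chance of selection
srs = random.sample(population, 100)

# Stratified sampling: split into subgroups, then sample randomly within each,
# proportional to the stratum's share of the population
strata = {"urban": [p for p in population if p[0] == "urban"],
          "rural": [p for p in population if p[0] == "rural"]}
stratified = []
for name, members in strata.items():
    k = round(100 * len(members) / len(population))  # 70 urban, 30 rural
    stratified.extend(random.sample(members, k))

print(sum(1 for p in stratified if p[0] == "rural"))  # exactly 30, by design
```

The simple random sample's rural count varies by chance around 30; the stratified sample fixes it exactly, which is why stratification guarantees subgroup representation.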
39
Non-probability sampling
Some population members are more likely to be selected than others; you cannot statistically generalize to a known population. More common in practice, but weakens external validity.
40
Types of non-probability sampling
Convenience sampling (taking whoever is readily available) and quota sampling (identifying subgroups and filling a set quota from each, without random selection); in both, some population members are more likely to be selected than others.
41
The external validity of frequency claims...
...depends on how representative the sample is of the target population.
42
When are non-probability samples acceptable?
When the claim does not require precise population estimates but rather tests generalizable relationships or causal mechanisms.
43
Sample Bias
Systematic differences between your sample and the population you want your results to be relevant to.
44
Why does sample size not determine representativeness?
Because representativeness comes from the way you select participants; sample size only determines the precision of your estimates. E.g. 1,000 randomly selected participants are more generalizable than a 10,000-person convenience sample.
45
MoE Margin of Error
A statistical estimate of the expected difference between a sample result and the true population value. Better with probability samples.
46
MoE and external validity
How generalizable the findings are depends on the population actually studied; the MoE reflects precision, not representativeness.
47
Cohen's D
Cohen's definition of effect size: the degree to which the phenomenon is present in the population.
48
Morling
The strength of the relationship between two or more variables; for mean differences, the relationship between the two means.
49
Cumming and Calin-Jageman
The amount of anything that’s of research interest.
50
What can effect sizes be understood as?
Point estimates: each is a statistic of some type, and all of our statistics make estimates about an effect size; a single point derived from our data.
51
Raw Effect sizes
Effect sizes expressed in the specific units of the data: a measurement score, a mean difference, a variance, an unstandardized regression coefficient (e.g. how much Y increases per unit increase in X).
52
Pros for raw effect sizes
Directly interpretable, good for well-known scales and measures, common for clinical decisions and policy recommendations.
53
Standardised effect sizes
Effect sizes standardised to be comparable across studies, reported via specific statistical tests. Scale-independent; common in research for testing theories and in meta-analysis. E.g. z-scores, correlation coefficients, Cohen's d.
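The raw vs. standardized contrast in code: the same group difference reported as a mean difference in the scale's own units and as Cohen's d (mean difference divided by the pooled SD). The group scores are invented for illustration:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical scores for two groups on the same measure
group_a = [14, 15, 13, 16, 15, 14, 17, 16]
group_b = [11, 12, 13, 10, 12, 11, 13, 12]

# Raw effect size: the mean difference, in the measure's own units
raw_diff = mean(group_a) - mean(group_b)  # 3.25 scale points

# Standardized effect size: Cohen's d = mean difference / pooled SD
n_a, n_b = len(group_a), len(group_b)
pooled_sd = sqrt(((n_a - 1) * stdev(group_a) ** 2 + (n_b - 1) * stdev(group_b) ** 2)
                 / (n_a + n_b - 2))
d = raw_diff / pooled_sd

print(f"raw mean difference = {raw_diff:.2f}, Cohen's d = {d:.2f}")
```

The raw difference is directly interpretable on this scale; d, being in SD units, can be compared across studies that used different scales.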
54
Confidence intervals
How precise our estimate is: if we were to repeat the sampling many times, about 95% of the resulting confidence intervals would contain the true population parameter.
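The "about 95% of intervals contain the true parameter" interpretation can be checked by simulation (a sketch; the population mean and SD are arbitrary):

```python
import random

random.seed(7)

MU, SIGMA, N = 100, 15, 50   # true population parameters, chosen for the demo
covered = 0
trials = 2000
for _ in range(trials):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    mean = sum(sample) / N
    sd = (sum((x - mean) ** 2 for x in sample) / (N - 1)) ** 0.5
    moe = 1.96 * sd / N ** 0.5          # margin of error: about two standard errors
    if mean - moe <= MU <= mean + moe:  # does this interval contain the true mean?
        covered += 1

print(f"coverage = {covered / trials:.3f}")  # close to 0.95
```

Note that the probability statement is about the procedure across repeated samples, not about any single interval, which is exactly the misinterpretation card 59 warns against.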
55
How large is the MoE?
About two standard errors wide.
56
What are some assumptions of confidence intervals?
We don't expect the population parameter to fall much beyond our confidence interval. The CI's best guess is that the parameter is close to the point estimate; the ends are the most extreme plausible values, and values closer to the point estimate are more plausible estimates of the true population mean. The population parameter will not always be inside the confidence interval.
57
What does the MoE tell us?
The maximum plausible distance between our estimate and the population parameter.
58
Replication Predictions
There is an 83% chance that someone replicating the study would get a point estimate somewhere in our CI.
59
Typical misinterpretations of CI's.
That the CI contains 95% of the sample scores (confusing it with scores falling within two standard deviations of the mean); that there is a 95% probability the population parameter is in this particular CI; that we are 95% sure the effect is real; that 95% of future sample means will fall in the CI.
60
What would a larger sample size indicate for our standard error and CI?
Smaller standard error, narrower confidence interval.
61
How can we alter variability?
Reducing variability will narrow our CI (take out the outliers, target a smaller sampling frame, use more reliable measurement). BUT removing outliers or narrowing the frame means sacrificing external validity; more reliable measurement, by contrast, lets us capture T (the true score) better.
62
How is the width of the CI governed?
The CI is two MoE wide: point estimate ± MoE.
63
MoE equation
MoE = critical value (≈1.96 for a 95% confidence level) × SD (variability of the data) ÷ √n (sample size). Changing any of these makes the confidence interval wider or narrower.
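The MoE formula in code, with a quick check of the √n relationship (example values are arbitrary):

```python
from math import sqrt

def margin_of_error(sd: float, n: int, critical: float = 1.96) -> float:
    """MoE = critical value times the standard error (SD / sqrt(n)); 1.96 for 95%."""
    return critical * sd / sqrt(n)

print(margin_of_error(sd=10, n=100))  # 1.96
print(margin_of_error(sd=10, n=400))  # 0.98: four times the n, half the MoE
```

Because n sits under a square root, halving the MoE (and the CI width) requires quadrupling the sample size, which is why large gains in precision get expensive quickly.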