MIDTERM Flashcards

(63 cards)

1
Q

What is Empiricism?

A

Knowledge comes from systematic observation and measurement.

2
Q

What is rationalism?

A

Knowledge comes from logic and reasoning; we need theory, not just raw data.

3
Q

What is falsifiability?

A

Theories must be testable and open to being proven wrong. If a theory can’t be falsified, it’s not scientific.

4
Q

Physics model of science assumptions:

A

A real world exists independent of perception.

Events in the world are deterministic (caused, not random).

Humans can know this world through empiricism + rationalism.

Humans are natural phenomena subject to determinism.

Research should build universal theories/laws (like physics does).

BUT → theories are always tentative, not absolute truth.

5
Q

How did the physics model shape modern psychological science?

A

By pushing it toward objectivity, universal laws, and quantitative methods — aiming to be “like the physical sciences.”

6
Q

What are philosophical tensions that challenge the scientific worldview?

A

Free will vs. determinism → If the world is deterministic, what about human choice?

Relativism vs. universalism → Is there one true reality, or multiple valid ones?

Values in science → Should research be totally objective and value-free, or do researchers’ perspectives inevitably shape knowledge?

7
Q

What are two alternative approaches to knowing?

A

Social constructionism & Indigenous psychologies.

8
Q

How does social constructionism offer different ways of knowing?

A

Reality is co-constructed through social interaction.

Multiple valid realities, no single objective truth.

Researcher subjectivity is an asset, not a flaw.

Methods: qualitative (interviews, focus groups, text analysis).

Emphasises culture and context.

9
Q

How do Indigenous psychologies offer different ways of knowing?

A

Psychology built from, by, and for local cultures.

In Aotearoa NZ → Kaupapa Māori research.

Challenges the assumption that Western psychology applies universally.

Especially important in social, cultural, and clinical areas.

10
Q

How do scientific methods reflect and consider specific cultures and values?

A

Even “objective” scientific methods are shaped by cultural values.

Western psychology reflects Enlightenment/European traditions of empiricism, rationalism, and universalism.

In Aotearoa, bicultural research means recognising Mātauranga Māori (Māori knowledge systems) as valid and integrating cultural values into research.

11
Q

What is research integrity?

A

Principles and practices that protect trust in science — combining practical safeguards (e.g., transparency, preregistration) and philosophical awareness (e.g., cultural context, assumptions).

12
Q

How is research integrity understood in Aotearoa/NZ?

A

Through tikanga and cultural humility: due diligence, clear communication, and mutually beneficial outcomes for communities.

13
Q

What is cultural humility?

A

Recognising cultural limitations and biases, acknowledging power imbalances, centring community voices, and being willing to be corrected.

14
Q

Why is cultural humility important in research?

A

It builds trust, supports cultural responsiveness, and strengthens scientific integrity by ensuring findings are valid and meaningful for communities.

15
Q

What are “researcher degrees of freedom”?

A

Flexible choices researchers make in data collection/analysis (e.g., when to stop collecting, how to treat outliers, which variables to analyse).

16
Q

How can researcher degrees of freedom bias research?

A

They can inflate false positives, erode trust in findings, and undermine scientific norms.

17
Q

What is the “garden of forking paths”?

A

The hidden flexibility in analysis: researchers may choose different tests depending on the data, increasing the chance of spurious results.

18
Q

What are key tikanga-based considerations in Māori research ethics?

A

Due diligence, clear communication, reciprocity, respecting Māori protocols, and ensuring outcomes benefit Māori communities.

19
Q

How does cultural responsiveness connect to good science?

A

Transparency, accountability, and culturally appropriate methods enhance both community trust and the rigour of findings.

20
Q

What is Universalism in science?

A

Science judges the content of ideas, not the status of the person proposing them; credit the idea, don’t attack the person.

21
Q

What is Communism (in Merton’s sense)?

A

Knowledge belongs to everyone; research should be shared widely and communicated to those it matters to.

22
Q

What is Disinterestedness?

A

Researchers accept they aren’t bias-free but aim for objectivity and guard against confirmation bias. Research is not for the researcher’s own benefit.

23
Q

What is Organised Skepticism?

A

All knowledge should be open to scrutiny; no idea is too sacred to be publicly questioned.

24
Q

What is the core equation of Classical Test Theory (CTT)?

A

X = T + E, where X = observed score, T = true score, and E = error.

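The CTT equation can be made concrete with a quick simulation (an illustrative sketch, not part of the course materials; all numbers are invented): draw true scores T, add random error E, and check that reliability, which in CTT equals var(T)/var(X), matches what the chosen error variance predicts.

```python
import random

random.seed(1)

# Simulate X = T + E for 10,000 hypothetical test-takers.
n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]   # T: SD = 10
errors = [random.gauss(0, 5) for _ in range(n)]          # E: mean 0, SD = 5
observed = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# CTT reliability = var(T) / var(X). With independent T and E the
# expected value here is 10**2 / (10**2 + 5**2) = 100 / 125 = 0.8.
reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))
```

Shrinking the error SD (better items, standardised administration) pushes reliability toward 1, which is the sense in which reliability means "minimising E so X is close to T".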
25
In CTT, what is reliability?
Reliability is about minimizing error (E) so X is as close to T as possible.
26
In CTT, what is validity?
Validity means the observed score (X) reflects the intended latent construct (T), not just consistency.
27
Why must reliability and validity be earned in each new context?
Because scales don’t “own” psychometric properties — measurement is context-bound and fallible.
28
What is a scale?
A set of related items that are caused by a latent variable (e.g., depression → sadness, fatigue, low mood).
29
What is an index?
A composite of different indicators that together form a construct (e.g., student engagement from attendance, assignments, library use).
30
How are scales and indexes different?
In scales, the latent variable causes the items. In indexes, the indicators define the construct.
31
What do path diagrams show?
How latent variables (unobserved constructs) influence item responses (observed variables), with arrows showing causality.
32
In a path diagram for depression, what do arrows represent?
Depression (latent variable) → responses to items (X₁–X₄), each with its own error term (e₁–e₄).
33
What are latent variables?
Unobservable constructs (e.g., self-esteem, anxiety) that are inferred from multiple observed indicators.
34
Why use multi-item measurement?
To capture broad constructs, reduce random error, and get more stable measurement.
35
What does stronger correlation between items suggest?
That they share a common cause (the latent variable).
36
What does Cronbach’s Alpha (α) measure?
Internal consistency (average inter-item correlations).
37
What threshold for Cronbach’s Alpha is usually acceptable?
≥ 0.70 (acceptable), > 0.90 may indicate redundancy.
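Alpha can be computed directly from raw responses using the standard formula α = k/(k − 1) × (1 − Σ item variances / variance of total scores). A minimal sketch with made-up Likert data (the four items and six respondents below are purely illustrative):

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of k lists, one per item, each holding one score per respondent."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]       # total score per respondent
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Four hypothetical Likert items answered by six respondents.
items = [
    [3, 4, 4, 5, 2, 3],
    [3, 5, 4, 5, 2, 2],
    [2, 4, 5, 5, 3, 3],
    [3, 4, 4, 4, 2, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.94
```

Because these toy items track each other very closely, α lands above .90, the zone the card flags as possible redundancy.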
38
What is McDonald’s Omega (ω)?
A reliability coefficient more robust than alpha when factor loadings vary.
39
What is KR-20?
A version of Cronbach’s alpha for dichotomous items (e.g., yes/no, correct/incorrect).
40
What are limitations of α, ω, and KR-20?
They don’t guarantee validity, can’t detect systematic error, and must be interpreted in context.
41
What is the Goldilocks Principle for reliability?
Too low = poor reflection of the construct; too high = redundancy; “just right” depends on the research purpose.
42
What is probability sampling?
A method where every population member has a known, nonzero chance of selection; supports generalisation (external validity).
43
What is nonprobability sampling?
A method where some members are more likely than others to be selected; cannot guarantee generalisation but practical and common.
44
When is probability sampling most appropriate?
When the research goal is to generalise findings to a defined population.
45
When is nonprobability sampling acceptable?
When generalisation is not the main goal (e.g., theory testing, piloting instruments, exploring new phenomena).
46
What is external validity?
The extent to which results can generalise from the sample to the population.
47
Why does sampling method matter for external validity?
Probability sampling increases generalisability; nonprobability samples risk bias and limit generalisation.
48
What is the bias relevance framework?
A way to judge if sample bias matters: if the bias affects the construct being studied → results are compromised; if not → results may still be valid.
49
Give an example of the bias relevance framework.
Studying procrastination with an online sample may bias results, since online students might procrastinate differently than the general population.
50
Does a larger sample size guarantee representativeness?
No. Representativeness depends on selection method, not size.
51
What does sample size actually affect?
Precision of estimates (margin of error), not representativeness.
52
What is a raw effect size?
An effect measured in original units (e.g., “Therapy reduced depression by 5 BDI points”); meaningful in context.
53
What is a standardised effect size?
An effect expressed in scale-free units (e.g., Cohen’s d, r, η², β); allows comparison across studies and populations.
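The raw vs standardised distinction can be shown side by side (a sketch with invented scores; here Cohen’s d is computed as the mean difference divided by the pooled standard deviation):

```python
import math

# Hypothetical post-treatment BDI scores for two groups.
therapy = [12, 10, 9, 14, 11, 10]
control = [16, 15, 17, 14, 18, 16]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Raw effect size: difference in original units (BDI points).
raw_effect = mean(control) - mean(therapy)

# Standardised effect size: Cohen's d = raw difference / pooled SD.
n1, n2 = len(therapy), len(control)
pooled_sd = math.sqrt(((n1 - 1) * sample_var(therapy) +
                       (n2 - 1) * sample_var(control)) / (n1 + n2 - 2))
cohens_d = raw_effect / pooled_sd
```

The raw effect ("5 BDI points") speaks to clinicians; d, being scale-free, lets the same effect be compared with studies that used a different depression measure.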
54
When should you use raw effect sizes?
When focusing on practical meaning in applied/clinical contexts.
55
When should you use standardised effect sizes?
When comparing across studies, building theory, or conducting meta-analyses.
56
What is the main question addressed by effect sizes?
“How much?”
57
What is the main question addressed by confidence intervals (CIs)?
“How precise?”
58
What are four valid ways to interpret a CI?
Dance of the CIs (95% of intervals capture the true value across replications). Plausibility picture (values near mean are more plausible). Margin of error (estimate ± MoE). Replication prediction (future estimates likely fall in this range).
59
What are common misinterpretations of a 95% CI?
Thinking it means there’s a 95% chance the true value is inside (false), or that it covers 95% of the data (false).
60
What determines CI width?
Standard error (SD/√n), sample size (larger n = narrower CI), variability (larger SD = wider CI), and confidence level (higher confidence = wider CI).
61
How do you calculate a simple 95% CI?
Estimate ± (critical value × SE); for large n, critical value ≈ 2.
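A minimal worked example of that formula (hypothetical numbers; 1.96 is the exact large-sample critical value that "≈ 2" rounds):

```python
import math

# Hypothetical sample summary statistics.
mean_score = 72.0
sd = 10.0
n = 400

se = sd / math.sqrt(n)                 # standard error = 10 / 20 = 0.5
moe = 1.96 * se                        # margin of error = 0.98
ci = (mean_score - moe, mean_score + moe)
print(round(ci[0], 2), round(ci[1], 2))  # 71.02 72.98
```

Because SE = SD/√n, quadrupling n halves the SE and therefore halves the CI width, which is the sample-size relationship on the next card.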
62
What is the relationship between sample size and CI width?
Bigger samples → smaller SE → narrower CI → more precise estimates.
63
Why do we need both effect sizes and confidence intervals?
Effect sizes tell us how big the effect is; CIs tell us how precise that estimate is.