B RM- reliability and validity Flashcards

(57 cards)

1
Q

internal reliability

A

the consistency of the measurement
-is the test consistent within itself?
-do all parts of the test measure the same thing?

2
Q

external reliability

A

the consistency of the results and the extent to which a measure varies from one use to another

3
Q

how can we assess internal reliability?

A

Split-half method

4
Q

what is the split-half method?

A

it assesses the internal consistency of a test, such as a psychometric test or questionnaire. it measures the extent to which all parts of the test contribute equally to what is being measured

5
Q

explain the four steps of split-half reliability method

A
  1. split a test into two halves (e.g. odd- vs even-numbered questions)
  2. administer each half to the same individual
  3. repeat for a large group of individuals
  4. find the correlation between the scores of both halves
6
Q

what sort of correlation would you expect to see if the questionnaire had high internal reliability using split half method?

A

the higher the correlation between the two halves, the higher the internal consistency of the test or survey- a strong positive correlation (+0.8 or above) is expected

7
Q

how can we assess external reliability? (two ways)

A

test-retest and inter-rater

8
Q

what is the test-retest method?

A

this assesses the external consistency of a test, measuring the stability of a test over time

9
Q

why might test-retest be useful for clinical psychologists when diagnosing mental illness?

A

to ensure diagnoses of mental illness are consistent over time- symptoms remain the same and diagnoses are stable

10
Q

why might the timing of a retest be important?

A

if too little time has elapsed, participants may remember the questions
if too long an interval, participant variables may have changed

11
Q

what sort of correlation would indicate consistency between two sets of results in test-retest?

A

strong positive correlation indicates strong reliability

12
Q

what is inter-rater/ inter-observer?

A

the degree to which different raters give consistent estimates of the same behaviour- when a single event is measured simultaneously and independently by two or more trained individuals
-if the data is similar, the measure has external reliability
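The idea of "similar data" can be made concrete with a short Python sketch. The ratings below are invented, and simple percentage agreement is used as the summary; in practice a statistic such as Cohen's kappa, which corrects for chance agreement, is often preferred.

```python
# Hypothetical codings: two trained observers independently categorise
# the same 10 observation intervals ("A" = aggressive, "P" = passive).
rater_a = ["A", "P", "A", "A", "P", "P", "A", "P", "A", "A"]
rater_b = ["A", "P", "A", "P", "P", "P", "A", "P", "A", "A"]

# Simplest check: the proportion of intervals where the raters agree.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_rate = agreements / len(rater_a)
print(f"inter-rater agreement = {agreement_rate:.0%}")  # 90% here
```

The raters disagree on only one interval, so agreement is high and the measure would be judged externally reliable.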

13
Q

how can inter-rater method be made objective?

A

behavioural categories can be operationalised to make it easier to identify when a specific behaviour occurs

14
Q

when may inter-rater reliability testing be important?

A

-observations (using behavioural checklist)
-content or thematic analysis for categorising qualitative data
-interviews

15
Q

how will you know whether or not you need to improve the reliability of a test?

A

once reliability has been assessed- if agreement is not found, in terms of internal or external reliability, the measure needs improving

16
Q

how can researchers improve the reliability of a PROCEDURE?

A

-use standardised instructions and scripts, perhaps using a recording so all ppts hear the same message; this makes the study easy to replicate
-use an environment with a high level of control over EVs (a lab), while noting that artificial environments can trigger demand characteristics
-ensure the study follows ethical guidelines so it can be replicated

17
Q

how can psychologists improve the reliability of OBSERVATIONAL RESEARCH?

A

-all observers should be trained
-behavioural categories should be clearly operationalised
-before the study, each category should be agreed on and a pilot study can be done to practice
-use cameras and recorders to help test the consistency of findings

18
Q

how can psychologists improve the reliability of QUESTIONNAIRES?

A

-questions should be of equal weight, length and difficulty
-closed questions are more reliable- they are less open to interpretation

19
Q

how can psychologists improve the reliability of INTERVIEWS?

A

-use structured interview - standardised questions
-record interviews
-ensure the interviewer is trained to reduce bias

20
Q

how can reliability of content analysis be improved?

A

operationalise coding units, and agree and practise them beforehand

21
Q

difference between internal and external validity

A

internal questions the cause and effect relationship between the change of IV and change on DV
external questions if a study’s findings can be generalised beyond the study- to other situations, people, settings and measures

22
Q

which type of validity links with causality?

A

internal

23
Q

which validity does mundane realism link with and why?

A

external validity- mundane realism concerns how close to real life, or naturalistic, the set task is

24
Q

what factors can impact the internal validity of a research study?

A

-participant variables
-investigator effects
-demand characteristics
-extraneous variables

25
Q

three types of external validity

A

ecological, population and temporal validity

26
Q

how can we assess internal validity? (two ways)

A

face validity and concurrent validity

27
Q

what is face validity?

A

whether a test appears to measure what it's supposed to measure- whether a measure seems relevant and appropriate for what it's assessing

28
Q

who carries out the process of face validity?

A

an expert in the field, who views the test or procedure and makes a judgement as to whether it seems appropriate

29
Q

what is concurrent validity?

A

a type of criterion validity- looking at how a test relates to other measures of the same concept, by comparing it with a WELL-ESTABLISHED test

30
Q

how is concurrent validity demonstrated?

A

when a test correlates well with a measure that has previously been validated

31
Q

what correlation coefficient would we expect to see if there is concurrent validity?

A

a strong positive correlation (+0.8 or above)

32
Q

how might a researcher assess whether a questionnaire they have produced is a suitable measure of the Authoritarian personality?

A

compare the new test to a well-established test that measures this personality- Adorno's F-scale questionnaire
33
Q

how can ecological validity be improved in research?

A

ensure your environment reflects real-world settings and tasks

34
Q

how can task validity (mundane realism) be improved in research?

A

ensure your task reflects real-world activities that participants would normally carry out on a regular basis

35
Q

how can population validity be improved in research?

A

ensure your sample is representative and reflects the wider population
-stratified sampling is the best sampling method to use
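Proportionate stratified sampling can be sketched in a few lines of Python. The population, strata and sizes below are invented for illustration; the point is only that each stratum contributes to the sample in proportion to its share of the population.

```python
import random

# Hypothetical sampling frame: a population of 100 grouped into
# strata by age band.
population = {
    "18-29": [f"Y{i}" for i in range(50)],  # 50% of the population
    "30-49": [f"M{i}" for i in range(30)],  # 30%
    "50+":   [f"O{i}" for i in range(20)],  # 20%
}
sample_size = 10

total = sum(len(members) for members in population.values())
sample = []
for stratum, members in population.items():
    # Each stratum contributes in proportion to its population share,
    # with members drawn at random from within the stratum.
    n = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n))

print(sample)  # 5 from 18-29, 3 from 30-49, 2 from 50+
```

Because the strata proportions are mirrored in the sample, the sample reflects the wider population on the stratifying variable by construction.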
36
Q

how can temporal validity be improved in research?

A

repeat research across different time periods to see if findings remain consistent

37
Q

what factors affect internal validity?

A

1. investigator effects
2. participant effects
3. confounding variables

38
Q

examples of investigator effects

A

-leading questions
-biased allocation of ppts into groups
-measurement criteria when assessing concepts
-lack of control over the procedure
-interviewer dynamics
-bias in interpreting behaviour

39
Q

how can the effects of leading questions be reduced?

A

ensure questions are not suggestive, and use standardised wording for all participants

40
Q

how can the effects of biased allocation of ppts to groups be reduced?

A

use random allocation methods, e.g. a random number generator or pulling names out of a hat
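Random allocation is easy to sketch with the standard library's pseudo-random shuffle; the participant codes below are invented.

```python
import random

# Hypothetical participant pool; the codes are illustrative.
ppts = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

# Shuffle the pool, then split it in half, so the researcher has no
# say in who ends up in which condition.
random.shuffle(ppts)
experimental_group = ppts[: len(ppts) // 2]
control_group = ppts[len(ppts) // 2 :]
print(experimental_group, control_group)
```

Every participant lands in exactly one group, and which group is determined by the shuffle rather than by the researcher, removing this source of investigator bias.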
41
Q

how can the effects of measurement criteria when assessing concepts be reduced?

A

create clear, operationalised definitions of the behaviours or concepts you are measuring

42
Q

how can the effects of lack of control over the procedure (e.g. timings) be reduced?

A

use standardised procedures and protocols for every group and participant
-use the same setting, timing and materials for all groups

43
Q

how can the effects of interviewer dynamics be reduced?

A

-ensure all interviewers undergo the same training
-use structured interviews with fixed questions

44
Q

how can the effects of bias in interpreting behaviour be reduced?

A

use a double-blind procedure- neither the person conducting the study nor the participant knows the aim of the research

45
Q

what are participant effects?

A

effects that reduce internal validity, where a participant picks up cues from the study and changes their behaviour as a result

46
Q

what are the three types of participant effects?

A

-Hawthorne effect
-social desirability
-demand characteristics

47
Q

what is the Hawthorne effect?

A

when ppts change their behaviour because they know they are being watched

48
Q

what is social desirability?

A

ppts change their behaviour to act in a way they believe is desirable and good

49
Q

what are demand characteristics?

A

behaving in the way you think the researcher wants- either to help them or to hinder them

50
Q

what type of observation would reduce the chance of the Hawthorne effect occurring?

A

covert observation- ppts are not aware they are being observed
-this may also be a participant observation, where the researcher is involved in the group being observed

51
Q

how would anonymity reduce the likelihood of social desirability occurring?

A

participants may be more open and honest- they don't need to behave in a socially desirable way

52
Q

what are participant variables?

A

the differing individual characteristics of participants in an experiment, sometimes referred to as individual differences
-a type of extraneous variable that can impact the internal validity of research

53
Q

what are some common kinds of participant variables?

A

-IQ
-aggression levels
-age

54
Q

what experimental design would reduce participant variables?

A

repeated measures- the same ppts complete each condition of the study

55
Q

what are confounding variables?

A

what an EV becomes when it does have an impact on the DV

56
Q

examples of confounding variables

A

-situational variables
-participant variables
-investigator effects

57
Q

which research method best reduces the chances of situational variables?

A

laboratory experiment