Research Methods Flashcards

(43 cards)

1
Q

What is reliability
•Internal
•External

A

A measure of consistency: how much we can depend on a given measurement/study to produce the same results if we repeat it.

•Internal - the extent to which different parts of a measure are consistent with each other. Usually associated with attitude scales such as personality tests (e.g. a participant getting similar results on the first half of an IQ or personality test as on the second).

•External - the extent to which a measure is consistent when repeated - the ability to produce the same results every time the test is carried out.

2
Q

What are the 4 ways we can assess reliability

A

Assessing (internal) reliability of a self-report method:
Split-Half Method
Split the test into 2 parts and have participants complete both. Each participant's score on one half of the test is then correlated with their score on the other half. A strong positive correlation between the 2 parts indicates (internal) reliability.

Assessing (external) reliability of an experiment:
Test-retest
A study is repeated at least twice using the same procedure and participants, usually with a short interval of at least a week. The correlation of participants' scores from the first and later occasions is then tested; a strong positive correlation indicates (external) reliability.

Assessing (external) reliability of an Observation:
Inter-Observer/rater reliability
Two or more psychologists independently record and tally behaviour during the same observation, using the same behavioural categories. At the end they correlate their tally totals with one another. A strong positive correlation between the sets of scores indicates reliability (a correlation coefficient of 0.8 or more).

Assessing (external) reliability of a self-report method:
Inter-interviewer reliability -
In the case of interviews, a researcher could assess an interviewer by comparing the answers they get from a participant on one occasion with the answers the same interviewer and participant get a week later.
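All four methods above come down to the same calculation: correlating two sets of scores. A minimal sketch of the split-half case in Python, with invented scores (not from any real study):

```python
# Split-half method sketch: correlate participants' scores on one half
# of a test with their scores on the other half. All scores invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Odd-item vs even-item totals for six hypothetical participants
first_half  = [12, 18, 9, 15, 20, 11]
second_half = [13, 17, 10, 14, 19, 12]

r = pearson_r(first_half, second_half)
print(round(r, 2))  # a value near +1 suggests internal reliability
```

The same `pearson_r` helper would serve for test-retest or inter-observer tallies: only the two lists being correlated change.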

3
Q

What are the three ways that you can improve reliability in an observation

A
  1. Standardisation - when more than one investigator is used in a study, the way they collect and record data should be standardised - done through training.
  2. Operationalising the behavioural categories - lowers ambiguity around interpreting an action/behaviour. E.g. one observer could record an action as hitting that another records as touching.
  3. Pilot studies - a trial study to identify if any behaviour categories have been poorly defined.
4
Q

How can we improve the reliability of an experiment

A
  1. Standardisation - a standardised procedure for each participant, e.g. the same thing is said to each using a script.
  2. Pilot studies - a trial, test study to discover any problems with the research design, such as timings or participants' understanding of questions.
  3. Taking more than one measure in order to reduce the impact of an anomalous score, e.g. measure someone's time to react to a visual stimulus several times and take an average score.
5
Q

What is Validity

A

Validity refers to the extent to which an observed effect is genuine and a measure is measuring what it claims to be/is supposed to measure. It also includes whether findings can be generalised beyond the research setting they were found in.

6
Q

What is internal validity
•what is considered when deciding if a study is internally valid

A

Internal validity is how accurately a study measures what it intends to, based on the cause-and-effect relationship between the change the researcher made to the IV and the observed change in the DV.
To decide if a study is internally valid the following should be considered:

•Are there operationalised variables/behaviour categories?

Are there any confounding variables?:
•Demand characteristics - social cues that cause participants to think they have discovered the aim and change their behaviour.
Social desirability

•Investigator effects - Researchers behaviour or characteristics, consciously or unconsciously influences results & P answers.

•Researcher bias - the behaviour being studied is recorded in a biased way that reflects the researcher's interpretation rather than the real behaviour & results.

•Participant variables - e.g. participants' age, gender, personality, intelligence.
•Situational variables - e.g. noise, time of day, room temperature.

•Experimental design used e.g. Order effects in a repeated measures design (which can be reduced by counter-balancing).

7
Q

What is external validity
•what types of validity are classed as external

A

External validity refers to how well the results of a study can be generalised beyond the study (from the sample used to the target population, and from the experimental setup to real-world settings and activities).

Ecological validity: the extent to which research findings can be generalised to situations other than the research setting.

(Mundane realism): the extent to which the task/materials/activities used in the experimental set-up are similar to stimuli in the real world.

Population validity: whether we can generalise the results from a study sample to the target population, dependent on the extent to which the study sample is representative of the target population.

Temporal validity: the extent to which findings can be generalised to other historical periods.

8
Q

What are the ways in which we can assess validity

A

Face validity: does the test appear/seem to be assessing what it claims to be assessing?

Concurrent validity (a type of criterion validity): the extent to which data from the new test has similar findings to an established test or existing measure (a correlation of 0.8 or more with the established measure indicates concurrent validity).

Content validity: Assessed by asking experts in the field to check the methodology of the study to see how accurately it measures the desired behaviour.

9
Q

How do we improve internal validity, including in an experiment

A

Single blind trial: participants don't know what group or condition they are in; only the researcher does - reduces demand characteristics.

Double blind: Neither the participant nor the experimenter know what each group or condition represents. - Reduces demand characteristics and researcher bias.

Counterbalancing - controls order effects.

Revise/remove questions if they are judged to have poor face validity.

Operationalise variables,
standardise procedures,
control extraneous variables.

10
Q

How do we improve external validity

A

Ecological validity -
•give participants a task that has high mundane realism; If possible conduct a field/natural experiment.

Population validity: use a large sample and a sampling technique that is likely to produce a representative sample, e.g. stratified sampling.

11
Q

What is content analysis

  • how is content analysis performed
A

A type of observational study/method in which behaviour is observed indirectly in visual, written or verbal material. This technique turns qualitative data into quantitative data.

The researcher decides on a research question and then selects a sample to analyse. The researcher then uses coding units, such as created themes or behavioural categories, and counts the number of examples that fall into each category to form quantitative data. Analysis can then be performed on the quantitative data to look for patterns and see if it supports the hypothesis.
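The counting stage can be sketched in Python; the behavioural categories and coded items below are invented examples, not from any real study:

```python
# Content analysis counting stage: each sampled unit has already been
# coded against agreed behavioural categories (all codes invented).
from collections import Counter

coded_units = ["aggression", "helping", "aggression", "aggression",
               "helping", "neutral"]

# Frequency count per category -> the quantitative data to analyse
tallies = Counter(coded_units)
print(tallies["aggression"])  # prints 3
print(tallies.most_common(1))
```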

12
Q

What is thematic analysis

-how is thematic analysis carried out

A

An observational study where qualitative data (in the form of a transcript) is summarised by gaining a deeper meaning of it and identifying themes in the material to be analysed.

The researcher collects material in the form of text and reads it to become familiar with the transcript.
They re-read the transcript and code it, e.g. annotating any patterns spotted.
They then identify emergent/recurring themes from the annotations.
They make a conclusion based on the themes and their relation to previous research.

13
Q

What are 3 strengths of content analysis and thematic analysis

A

The data used isn't created for research but taken from the real world & based on observations of what people really do; this means the analysis has high ecological validity and should generalise to the real-world situation.

Quick and easy to gather a sample as the artefacts come from the real world -> this also means the sources can be examined by others, which makes the observation replicable.

There should be no ethical issues because much of the material that an analyst might want to study, e.g. adverts or newspapers, already exists within the public domain.

14
Q

What are 3 limitations of Content analysis and thematic analysis

A

Observer bias - the researcher needs to interpret subjective text, which often leads to researchers interpreting the text in a way that supports their pre-existing views (less so for thematic analysis on its own, as theories are made after the themes are found).

The data wouldn't have been created under controlled conditions and may therefore lack validity. E.g. if they analyse a magazine/newspaper feature on women's preferences in a man, there may have been social desirability within the women's answers that isn't being taken into account.
Another example: a historical record such as a diary may not contain an accurate record of the past but have inaccuracies.

15
Q

What is a case study

A

A case study is an in-depth, detailed study of a single individual, group of people, institution or event. They are usually: carried out in the real world & idiographic (individually focused).

16
Q

What are 4 common things for a case study to be conducted on

A

Psychologically unusual individuals + unusual behaviour

Unusual events, e.g. a football riot

Organisational behaviour, e.g. teaching at an outstanding school

Typical individuals within a demographic

17
Q

What are 3 strengths of a case study

A

Case studies are in-depth, rich-data (normally longitudinal and qualitative) investigations, meaning they provide valid insights that are a true reflection of experience.

Case studies are often the only way to investigate unusual, naturally occurring events or human behaviour that can't be generated in a lab for ethical reasons.

Case studies normally study the participant/group in their natural environment, giving high ecological validity.

18
Q

What are 3 limitations of a case study

A

Information gathered is often based on self-report data. Participants may be asked to recall information from up to months/years ago and not be able to - retrospective memory/bias. They may also lie to make themselves look good to the researcher, showing 'social desirability'.

Case studies often follow a single individual; this makes it hard to generalise the findings beyond the study - it lacks population validity.

As case studies are usually longitudinal, they can suffer from researcher bias, as a researcher may get to know the individual too closely and lose objectivity in the way they record the data.

19
Q

What are the 4 features of science

A

Objectivity and Empiricism
Replicability and Falsifiability
Theory construction & Hypothesis testing
Paradigm and Paradigm shifts

20
Q

Explain the empirical method.

A

The empirical method is the process of collecting information/data from the world through direct observation or experiment rather than subjective beliefs.

21
Q

What is objectivity

A

Data that is free from bias/preconceptions & interpretation, based on phenomena/fact rather than opinion or emotion.

22
Q

What is replicability

A

Replicability refers to the extent to which a study can be repeated, using the same measures and procedures to test external reliability.

23
Q

What is falsifiability

A

For a theory to be scientific, the hypothesis needs to be constructed in a way that it can be empirically tested and proven wrong.

24
Q

What is a paradigm and a paradigm shift

A

A paradigm is a set of established, shared assumptions and methods within a scientific field.

A paradigm shift happens when researchers begin to question the accepted paradigm and build a critique until the contradictory evidence is too much to ignore and becomes the new consensus.

25
Q

Describe theory construction

A

A theory is a collection of general principles that explain observations and facts. Such theories can help us understand and predict the natural phenomena around us. Sometimes a theory comes before a hypothesis (deductive method) and sometimes after (inductive).
26
Q

What is hypothesis testing

A

Theories are modified through the process of hypothesis testing. A theory must be able to generate testable expectations, which are stated in the form of a hypothesis. If a scientist fails to find support for a hypothesis then the theory requires modification.
27
Q

What is required in order to test theories and make scientific progress

A

For science to progress, research and theories must be scientific. In order to test theories and make scientific progress, 4 points are required:
•The theory behind the study must be rigorous
•Hypotheses must be operationalised (and falsifiable)
•The methods must be replicable (and therefore standardised)
•It must be possible to test a theory empirically and objectively
28
Q

What is significance

A

Significance is a statistical term that tells us how sure we are that a real difference or correlation exists. A 'significant' result means that the researcher can reject the null hypothesis. The generally accepted level of significance is 5% (0.05).
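As a sketch, the decision rule at the 5% level looks like this (the p values passed in are invented examples):

```python
# Decision rule at the conventional 5% significance level:
# a p value below alpha lets the researcher reject the null hypothesis.
ALPHA = 0.05

def decision(p_value, alpha=ALPHA):
    """Return the conclusion for a given p value."""
    return "reject null" if p_value < alpha else "retain null"

print(decision(0.03))  # prints "reject null" - significant at 5%
print(decision(0.20))  # prints "retain null" - not significant
```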
29
Q

What is a type 1 error

A

A type 1 error is when a researcher wrongly accepts the alternative hypothesis and mistakes a difference in the data as being significant. The researcher also rejects the null hypothesis when it is true. We are more likely to make a type 1 error if the significance level is too lenient, e.g. 0.1/10% rather than 0.05.
30
Q

What is a type 2 error

A

A type 2 error is a false negative, where a researcher wrongly rejects the alternative hypothesis and overlooks a real difference in the data as not significant (normally a result of being too stringent), and therefore wrongly accepts the null hypothesis in error.
31
Q

Why do psychologists normally use the 5% level of significance in their research

A

Because perfection is not necessary for investigations: when using the 0.05 level of significance there is a 5% probability or less that the results are down to chance and 95% that they are due to the IV. The 0.05 level also creates a balance for researchers that isn't too stringent or lenient, and therefore reduces the chance of both a type 1 & type 2 error.
32
Q

What is nominal data

A

Nominal data is often referred to as categorical data, where a frequency count of a particular variable is recorded. The variables are discrete and have no natural order, e.g. career choices, favourite animal.
33
Q

What is ordinal data

A

Data that is (or can be) ranked in order, but does not have precise values/proper units to measure the gaps between scores, e.g. choices on a Likert scale (how happy are you from 1-10). Each participant has a score, but we don't know that everyone who voted 3 feels the same.
34
Q

What is interval data

A

Interval data refers to data where each participant's scores have been measured using a scale with recognised equal units, e.g. degrees, seconds, IQ test scores (due to the standardised scale).
35
Q

What is a strength and limitation of each level of measurement

A

Nominal: (+) easy to generate from closed questions and can generate a lot of data reasonably quickly. (-) Participants are unable to express different degrees of a response. We can't talk about the differences between each category because they're distinct.

Ordinal: (+) participants are able to express different degrees of a response, therefore the data is more sensitive in comparison to nominal data. (-) The data lacks precision because it's based on subjective opinion rather than objective measures.

Interval: (+) more precise than nominal or ordinal data because it is based on numerical scales with recognised equal units.
36
Q

What are the parametric tests of difference + why do researchers prefer to use them where possible. What criteria must data/a sample meet in order for a parametric test of difference to be used.

A

The parametric tests of difference are the unrelated and related t tests. Researchers prefer to use these over the Mann-Whitney and Wilcoxon because they are more powerful when looking at differences between 2 samples of data, as they use real data values (means and SD). For a parametric test to be used, the samples must meet the parametric criteria:
•The measure is interval
•The data is drawn from a population that has a normal distribution (most people score around the average/mean)
•Both samples have equal variances (the set of scores in each condition having a similar dispersion or spread)
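As an illustration, the related (paired) t statistic can be computed directly from the difference scores; the before/after scores below are invented, not from any study:

```python
# Related (paired) t test statistic for a repeated measures design:
# t = mean of the difference scores / standard error of the differences.
# All scores below are invented for illustration.
import math

def related_t(cond_a, cond_b):
    """t statistic for paired scores from two conditions."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

before = [10, 12, 9, 14, 11]
after_ = [13, 15, 10, 16, 14]
print(round(related_t(before, after_), 2))  # prints -6.0
```

The statistic uses the real score values (means and SD), which is why it is more powerful than the rank-based Wilcoxon on the same design.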
37
Q

What distinguishes a test of correlation from a test of difference - what are the tests of correlation + the tests of difference

A

A test of correlation is used to see whether the association (rather than a difference) between two co-variables is significant or not. Chi² can be a test of difference or of correlation.
Correlation: Spearman's rho, Pearson's r
Difference: Mann-Whitney, Wilcoxon, related t test, unrelated t test
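Spearman's rho, one of the tests of correlation named above, can be sketched from its rank-difference formula, rho = 1 - 6*sum(d²)/(n(n²-1)). This sketch assumes no tied ranks, and the co-variable scores are invented:

```python
# Spearman's rho for ranked (ordinal) co-variables, assuming no ties.
# All scores invented for illustration.

def ranks(scores):
    """Rank each score from 1 (lowest) upward; assumes no ties."""
    ordered = sorted(scores)
    return [ordered.index(s) + 1 for s in scores]

def spearman_rho(xs, ys):
    """rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), d = rank difference per pair."""
    d_sq = sum((rx - ry) ** 2 for rx, ry in zip(ranks(xs), ranks(ys)))
    n = len(xs)
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

hours_revised = [2, 5, 1, 4, 3]
exam_score    = [40, 70, 35, 65, 50]
print(spearman_rho(hours_revised, exam_score))  # prints 1.0 - perfect monotonic association
```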
38
Q

What are the 3 things we do when asked to justify a statistical test

A

•State whether we are testing association (between 2 variables) or difference (between IV and DV), providing evidence.
•Name the design (IG, MP, RM) and explain why it is that design, providing evidence from the scenario.
•Name the level of measurement being used and justify what it is by providing evidence from the scenario.
39
Q

What is the order of a report on investigations and their functions

A

Title - mirrors the aim of the study and begins with 'An investigation into...'
Abstract - 150-200 word summary of the report
Introduction
Method
Results
Discussion
References
40
Q

What is the function of a title and abstract in a report on an investigation

A

Title - to tell the reader what the report is about. The title should mirror the study and begin with 'An investigation into...'
Abstract - 150-200 words to provide the reader with a brief summary of the whole report. An abstract should consist of approx. 1 sentence outlining each of the: aim, hypothesis, sample, procedure, results, conclusion.
41
Q

What is the function of an introduction in a report on an investigation

A

To introduce the background and rationale of the study. It provides background info on theories & studies relevant to the investigation and why the current research is being conducted. The introduction explains the aim, and the hypothesis is presented.
42
Q

How is the strength of a correlation determined

A

The strength of a correlation is determined by how close the correlation coefficient is to 1 or -1. A coefficient near 1/-1 indicates a strong relationship, while a coefficient closer to 0 indicates a weak relationship.
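As a hypothetical illustration, a small helper that labels a coefficient by its distance from +/-1; the 0.8 cut-off echoes the reliability threshold used earlier in these cards, while the 0.4 boundary for "moderate" is an invented example value:

```python
# Hypothetical strength labels based on |r|; the 0.4 boundary is an
# invented example, the 0.8 cut-off echoes the reliability threshold.
def strength(r):
    """Label a correlation coefficient by its absolute size."""
    if abs(r) >= 0.8:
        return "strong"
    if abs(r) >= 0.4:
        return "moderate"
    return "weak"

print(strength(-0.9))  # prints "strong" - strong negative relationship
print(strength(0.1))   # prints "weak"
```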
43
Q

Why may researchers check the significance at both the 0.05 and 0.01 levels

A

After obtaining significant results at the 0.05 level, checking at the 0.01 level can help ensure that a Type I error is unlikely, particularly in high-stakes research, research with contradicting evidence, or controversial research.