What is reliability
•Internal
•External
A measure of consistency: how much we can depend on a given measurement/study to produce the same results if we repeat it.
•Internal - the extent to which different parts of a measure are consistent with each other. Usually associated with attitude scales such as personality tests (e.g. a participant getting similar results in the 1st half of an IQ or personality test as in the 2nd).
•External - the extent to which a measure is consistent when repeated - the ability to produce the same results every time the test is carried out.
What are the 4 ways we can assess reliability
Assessing (internal) reliability of a self-report method:
Split-Half Method
Split the test into 2 parts and have participants complete both. Participants' scores on one half of the test should be correlated with their scores on the other half. A strong positive correlation between the 2 parts indicates (internal) reliability.
Assessing (external) reliability of an experiment:
Test-Retest
A study is repeated at least twice using the same procedure and participants, usually with a short interval of at least a week. The correlation of participants' scores from the 1st and later occasions is then tested; a strong positive correlation indicates (external) reliability.
Assessing (external) reliability of an Observation:
Inter-Observer/rater reliability
Two or more psychologists independently record and tally behaviour during the same observation, using the same behavioural categories. At the end they correlate their tally totals with one another. A strong positive correlation between the sets of scores indicates reliability (a correlation coefficient of 0.8 or more).
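The check above can be sketched in code: correlate the two observers' tallies per behavioural category and compare the coefficient against the 0.8 threshold. The tallies below are invented for illustration; this is a minimal sketch, assuming Pearson's r as the correlation measure.

```python
# Sketch: inter-observer reliability as a Pearson correlation between
# two observers' tallies per behavioural category (illustrative data).
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical tallies from two independent observers, one number
# per behavioural category
observer_a = [12, 7, 3, 9, 15]
observer_b = [11, 8, 2, 10, 14]

r = pearson_r(observer_a, observer_b)
print(f"r = {r:.2f}, reliable: {r >= 0.8}")  # prints: r = 0.97, reliable: True
```

The same correlation check applies to the split-half and test-retest methods above, just with different pairs of score lists.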
Assessing (external) reliability of a self-report method:
Inter-interview reliability -
In the case of interviews, a researcher could assess an interviewer by comparing the answers they get from a participant on one occasion with the answers the same interviewer and participant get a week later.
What are the three ways you can improve reliability in an observation
How can we improve the reliability of an experiment
What is Validity
Validity refers to the extent to which an observed effect is genuine and measures what it claims/is supposed to measure. It also includes whether the effect can be generalised beyond the research setting it was found in.
What is internal validity
•what is considered when deciding if a study is internally valid.
Internal validity is how accurately a test measures what it claims to, based on the cause-and-effect relationship between the change the researcher made to the IV and the observed change in the DV.
To decide if a study is valid the following should be considered:
•Are there operationalised variables/behaviour categories?
Are there any confounding variables?:
•Demand characteristics - Social cues that cause participants to think they have discovered the aim and change their behaviour.
Social desirability
•Investigator effects - Researchers behaviour or characteristics, consciously or unconsciously influences results & P answers.
•Researcher bias - Recording the behaviour being studied in a way that reflects the researcher's interpretation rather than the real behaviour & results.
•Participant Variables - e.g. P age, gender, personality, intelligence.
○Situational Variables - e.g. Noise, time of day, room temp.
•Experimental design used e.g. Order effects in a repeated measures design (which can be reduced by counter-balancing).
What is external validity
•what types of validity are classed as external
External validity refers to how well the results of a study can be generalised beyond the study. (From the sample used -> target population, and from the experimental setup -> real-world settings and activities.)
Ecological validity: The extent to which research findings can be generalised to situations other than the research setting.
(Mundane realism): Extent to which the task/materials/activities used in experimental set-up are similar to the stimuli in the real world.
Population validity: Whether we can generalise the results from a study sample to the target population, dependent on the extent to which the study sample is representative of the target population.
Temporal validity: The extent to which findings can be generalised to other historical periods.
What are the ways in which we can assess validity
Face validity: Does the test appear/seem to be assessing what it claims to be assessing?
Concurrent validity (a type of criterion validity): The extent to which data from the new test produces similar findings to an established test or existing measure (a correlation of 0.8 or more).
Content validity: Assessed by asking experts in the field to check the methodology of the study to see how accurately it measures the desired behaviour.
How do we improve internal validity, including in an experiment
Single blind trial: Participants don't know which group or condition they are in, but the researcher does - reduces demand characteristics.
Double blind: Neither the participant nor the experimenter know what each group or condition represents. - Reduces demand characteristics and researcher bias.
Counterbalancing - controls order effects.
Revise/ remove questions if they are judged to have poor face validity
Operationalise variables,
standardise procedures
Control extraneous variables
How do we improve external validity
Ecological validity -
•Give participants a task that has high mundane realism; if possible, conduct a field/natural experiment.
Population Validity: Use a large sample and a sampling technique that is likely to produce a representative sample, e.g. stratified sampling.
What is content analysis
A type of observational study/method in which behaviour is observed indirectly in visual, written or verbal material. This technique turns qualitative data into quantitative data.
The researcher decides on a research question and then selects a sample of material to analyse. The researcher then uses coding units, such as created themes or behavioural categories, and counts the number of examples that fall into each category to form quantitative data. Analysis can then be performed on the quantitative data to look for patterns and see if it supports the hypothesis.
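The counting step above can be sketched in code. The coding units and sample text below are invented for illustration, and the sketch assumes the simplest case where each coding unit is a single keyword:

```python
# Sketch: turning qualitative material into quantitative data by tallying
# how often each coding unit (here, a keyword) appears in the sample.
import re
from collections import Counter

# Hypothetical coding units for a study of emotion words in diary entries
coding_units = {"happy", "sad", "angry", "afraid"}

sample = "I was happy on Monday, sad on Tuesday, then happy again and angry by Friday."

words = re.findall(r"[a-z]+", sample.lower())
tally = Counter(w for w in words if w in coding_units)

print(tally)  # prints: Counter({'happy': 2, 'sad': 1, 'angry': 1})
```

The resulting tallies per category are the quantitative data on which further analysis (e.g. looking for patterns or testing the hypothesis) is then performed.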
What is thematic analysis
-how is thematic analysis carried out
Observational study where qualitative data (in the form of a transcript) is summarised by gaining a deeper understanding of it and identifying themes in the material to be analysed.
The researcher collects material in the form of text and reads to become familiar with the transcript.
They re-read the transcript and code it, e.g. annotating any patterns spotted.
They then identify emergent/recurring themes from the annotations.
They make a conclusion based on the transcript themes and their relation to previous research.
What are 3 strengths of content analysis and thematic analysis
The data used isn't created for research but taken from the real world & based on observations of what people really do; this means the analysis has high ecological validity and should be able to be generalised to the real-world situation.
Quick and easy to gather a sample as the artefacts come from the real world -> also means the sources can be accessed by others, which makes the observation replicable.
There should be no ethical issues because much of the material that an analyst might want to study, e.g. adverts and newspapers, already exists within the public domain.
What are 3 limitations of Content analysis and thematic analysis
Observer bias: the researcher needs to interpret subjective text, which often leads to researchers interpreting the text in a way that supports their pre-existing views (less of a problem for thematic analysis on its own, as the theories are made after the themes are found).
The data wouldn't have been created under controlled conditions and may therefore lack validity. E.g. if a magazine/newspaper is analysed on women's preferences in a man, there may have been social desirability within the women's answers that isn't being taken into account.
Similarly, a historical record such as a diary may not contain an accurate record of the past and may have inaccuracies.
What is a case study
A case study is an in-depth, detailed study of a single individual, group of people, institution or event. They are usually carried out in the real world & idiographic (individually focused).
What are 4 common things for a case study to be conducted on
Psychologically unusual individuals + unusual behaviour
Unusual events e.g. a football riot
Organisational behaviour e.g teaching at an outstanding school
Typical individuals within a demographic
What are 3 strengths of a case study
Case studies are in-depth, rich-data (normally longitudinal and qualitative) investigations, meaning that they provide valid insights that are a true reflection of experience.
Case studies are often the only way to investigate unusual, naturally occurring events or human behaviour that can't be generated in a lab for ethical reasons.
Case studies normally study the participant/group in their natural environment, giving high ecological validity.
What are 3 limitations of a case study
Information gathered is often based on self-report data. Participants may be asked to recall information from months/years ago and not be able to - retrospective memory/bias. They may also lie to make themselves look good to the researcher, showing 'social desirability'.
Case studies often follow a single individual, which makes it hard to generalise the findings beyond the study - they lack population validity.
As case studies are usually longitudinal, they can suffer from researcher bias, as a researcher may get to know the individual too closely and lose objectivity in the way they record the data.
What are the 4 features of science
Objectivity and Empiricism
Replicability and Falsifiability
Theory construction & Hypothesis testing
Paradigm and Paradigm shifts
Explain the empirical method.
The empirical method is the process of collecting information/data from the world through direct observation or experiment rather than subjective beliefs.
What is objectivity
Data that is free from bias/preconceptions & interpretation, based on observable phenomena/fact rather than opinion or emotion.
What is replicability
Replicability refers to the extent to which a study can be repeated, using the same measures and procedures to test external reliability.
What is falsifiability
For a theory to be scientific, its hypotheses need to be constructed in a way that they can be empirically tested and potentially proven wrong.
What is a Paradigm and a Paradigm shift
A paradigm is a set of established, shared assumptions and methods within a scientific field.
A paradigm shift happens when researchers begin to question the accepted paradigm and build a critique, until the contradictory evidence is too much to ignore and a new paradigm becomes the consensus.