The Aim
Hypotheses
Experimental Design - Repeated Measures
Experimental Design - Independent Group Design
Experimental design - Matched Pairs Design
Pilot studies
Objectivity and the Scientific method
Replicability
Replication is a key feature of the scientific process: by repeating an investigation under the same carefully controlled conditions, and with different participants and situations, a researcher can determine whether the basic findings of the original study can be generalised to other participants and circumstances.
- Popper - Repeating research to check the validity of results. Methodology must be clear and detailed for repetition under the same conditions.
Falsifiability
Researchers must be able to evaluate evidence in a way that allows for the possibility that a particular theory could be shown to be false as well as correct. Falsifiability does not mean that a claim is false; it means that, if the claim were false, it must be possible to demonstrate that it is false.
- Popper - A theory must be empirically testable to check if it is true for all cases. However, this is almost impossible to do, and so it is generally agreed that nothing can be conclusively proven.
Theory construction
A scientific theory is constructed by bringing together ideas and definitions in a logical way to explain and describe a specific event or a relationship between events. A researcher can then use the theory to make a specific prediction about the outcome of their investigation; this prediction is known as the hypothesis.
Hypothesis testing
It is important for the scientific process that a hypothesis is clear and testable, and that an appropriate experimental method is used to test it.
Paradigms and paradigm shifts
Within the scientific process, a collective set of assumptions, concepts, values and practices is known as a paradigm. Over time the paradigm may be brought into question by further research; when new ways of looking at the same information are adopted, a paradigm shift has occurred.
Validity
The term validity is one of the most important concepts within scientific research, as it asks whether any effect or conclusion found is genuine. Validity is broken down into two types: internal and external.
Internal validity
Whether or not the test or experiment is measuring what it is meant to be measuring - researchers need to be sure that any effect on, or change to, the dependent variable (DV) occurred as a direct result of the independent variable (IV).
Assessed in the following ways -
- Face validity – are we measuring what we think we are measuring? In its simplest form, does the research make sense?
- Concurrent validity – how well does a particular test compare with a previously validated measure? For example, testing a group of students' intelligence with an established IQ test, then giving them the new intelligence test a couple of days later and achieving the same results, would be an indication that the new test is valid.
External validity
Can the observed effect or conclusion be applied accurately to the real world? Research findings should be valid outside the research situation and could be used to explain other situations, especially “everyday” situations.
Assessed by -
- Ecological validity – can the findings be generalised to situations outside the environment created by the researcher? If the research task is similar to a real-life situation, it is likely to have high ecological validity.
- Temporal validity – how does the time period in which the research was carried out affect the findings? For example, research into attitudes carried out in the 1960s may not have the same relevance today.
Improving validity
Internal validity can be improved by carefully controlling all other variables that are not being manipulated within the experiment.
This is done by:
- using standardised instructions and procedures to make sure that conditions are the same for all the participants in the research
- eliminating demand characteristics and investigator effects, both of which affect the way a participant behaves within the experiment, known as participant reactivity
The external validity of psychological research can be improved by setting experiments in more natural settings involving real-life situations, and by using random sampling to select participants.
Key terms
Demand characteristics:
Cues in the research situation that might reveal the research hypothesis
Investigator Effects:
The way in which the researcher behaves may give participants clues about the research hypothesis making them behave in a certain way
Participant Reactivity:
The way participants respond to the demands of the research situation
Random Sampling:
A method of choosing participants that gives every member of the population an equal chance of being selected
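The random-sampling definition above can be sketched in a few lines of Python; the population list, sample size and seed below are purely illustrative.

```python
import random

def select_random_sample(population, n, seed=None):
    """Choose n participants so that every member of the
    population has an equal chance of being selected."""
    rng = random.Random(seed)
    return rng.sample(population, n)

# Illustrative population of 20 numbered participants.
population = [f"P{i:02d}" for i in range(1, 21)]
chosen = select_random_sample(population, 5, seed=42)
print(chosen)
```

Sampling without replacement guarantees that no participant is picked twice; fixing the seed only makes the sketch reproducible and would not be done in real recruitment.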
Reliability
Internal reliability
How consistent is the measure within the research situation itself?
- Researchers need to be sure that all parts of a research study are contributing equally to what is being measured.
This is assessed by -
- Split-half method – the results of one half of the test are compared with the other half. If the same or similar results are displayed in both halves, the test has internal reliability.
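The split-half check can be sketched as follows, assuming each participant's individual item scores are available; the odd/even split, the six-item data and the use of Pearson's r as the consistency measure are all illustrative choices.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(score_rows):
    """Total the odd- and even-numbered items for each participant
    and correlate the two half-scores."""
    odd_half = [sum(row[0::2]) for row in score_rows]
    even_half = [sum(row[1::2]) for row in score_rows]
    return pearson_r(odd_half, even_half)

# Illustrative item scores for five participants on a 6-item test.
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 1, 1],
    [3, 3, 4, 3, 3, 4],
]
r = split_half_reliability(scores)
print(round(r, 2))
```

A correlation close to 1 indicates that both halves of the test are measuring the construct consistently; a low correlation would suggest poor internal reliability.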
External reliability
How consistent is the measure when it is repeated?
- Researchers need to be sure that if the study was repeated on the same participants over a period of time or if it was used to test others in the same situation it would prove reliable.
It is tested in the following ways -
1. Test-retest method – the participants are given the same test on two separate occasions. If the same or similar results are found then the test has external reliability.
2. Inter-observer method – the researcher compares their rating of behaviour with the independent rating given by another observer. If the ratings are the same or similar, the test is said to have external reliability.
Improving reliability
Reporting psychological investigations
Features of a psychological report
Contents Page
- Every page must be numbered and every section must be recorded along with the page number on the contents page.
Abstract - a self-contained and brief summary of the main points of the report. It enables the reader to quickly determine whether the contents are likely to be of interest to them. Abstracts should be approximately 100 words long and should contain a brief summary of each section of the report.
You need to include:
- A one-sentence summary, giving the topic to be studied (aim and main background study)
- Description of participants and sampling technique
- Description of procedure
- Description of results (quote statistics and significance level)
- What does it mean? Conclusion, implications and suggestions for future research.
Introduction - an important part of the report that reviews the background information, covering existing findings and methodological issues relevant to your study and showing how the current aims and hypothesis have been derived. It should end with a precise statement of the aims of your investigation and your operationalised experimental or alternative hypothesis.
Procedure or Method - this section describes in detail how the experimenter carried out the investigation. It should include the following four sub-sections, each sub-titled and each containing enough information for full replication to be possible:
- Design – type of experimental design, the IV and DV and the experimental hypothesis.
- Participants – how many, how they were chosen and any other important factors, such as age, gender, nationality, occupation, etc.
- Apparatus and materials - describes the apparatus and materials used in testing the participants, such as a simulator, stopwatch or CD player.
- Procedure – This needs to explain exactly what you did from start to finish, including any pilot studies, design of stimulus material etc. Remember to refer the reader to the appendices for standardised instructions, debrief, copies of stimulus materials etc.