Midterm #1 Flashcards

(162 cards)

1
Q

Aristotle’s Contribution (350 BCE)

A

In On Memory and Reminiscences, Aristotle emphasized that memory relies on associations formed through experience. He believed recall occurs when ideas are linked by similarity, contrast, or contiguity.

2
Q

British Associationism (Locke, Hume, Mill)

A

Claimed knowledge originates from linking co-occurring experiences (e.g., doctor–nurse–hospital). Emphasized experience over innate ideas but lacked scientific methods.

3
Q

German Physicists’ Influence

A

Helmholtz (trichromatic color theory) and Fechner (psychophysics) introduced experimental methods to study perception. Their work linked physical stimuli with sensory experience—an early step toward cognitive psychology.

4
Q

The Core Problem

A

Psychologists couldn’t directly observe the mind—only stimuli, responses, or physiology. This gap led to debates about how to study mental processes scientifically.

5
Q

Four Historical Solutions

A

Introspection, Behaviorism, Cognitivism, and Neuroscience—each offering a different way to address the challenge of studying the unseen mind.

6
Q

Introspection (Wundt & Titchener)

A

Involved “looking inward” to describe conscious experiences. Wundt catalogued sensations (e.g., ~38,850 for the eye), but the method was subjective and lacked reliability.

7
Q

Problems with Introspection

A

Hard to verify, private rather than public, and focused on end results rather than on the mental processes themselves.

8
Q

Behaviorist Reaction

A

Rejected unobservable mental events. Focused only on stimuli and responses (“black box” model). Figure:
🟦 Stimulus → [Mind ignored] → Response 🟦

9
Q

John B. Watson

A

Founded behaviorism (1913 Behaviorist Manifesto). Believed psychology should study only observable behavior; known for the Little Albert experiment and innovations in advertising.

10
Q

Thorndike’s Law of Effect (1911)

A

Behaviors followed by a satisfying state become strengthened, while those followed by punishment are weakened. Basis for instrumental conditioning.

11
Q

B.F. Skinner

A

Developed operant conditioning using reinforcement and punishment. Studied rats and pigeons in Skinner boxes—behavior controlled by environmental contingencies.

12
Q

Edward Tolman (1948) and internal representation

A

Discovered that rats formed cognitive maps of mazes. They could navigate even when conditions changed (e.g., swimming when the maze was flooded). Suggested internal representations guide behavior—opposing strict behaviorism.

13
Q

Hippocampus & Place Cells

A

Later research (1970s) found neurons in the hippocampus that fire in specific locations—place cells—supporting Tolman’s idea of internal spatial maps.
📊 Figure: Eight-arm radial maze (bars showing inward vs. outward movements).

14
Q

Problems with Behaviorism

A

Couldn’t explain complex human behavior (language, reasoning). The “only observables” rule excluded scientific study of thought. Applied problems during WWII (e.g., sustaining operators’ attention) exposed these limits and led to new approaches.

15
Q

Cognitive Revolution (1950s–1960s)

A

Shifted focus back to mental processes. Key events:

1956 MIT Conference (Miller, Chomsky, Newell, Simon)

Broadbent’s Information Processing model (1958)

Sternberg’s memory scanning (1966)

Neisser’s Cognitive Psychology (1967)

16
Q

Cognitive Psychology Definition

A

The scientific study of mental processes such as perception, attention, memory, language, reasoning, problem-solving, and decision-making.

17
Q

Computational View of Mind

A

The mind processes information like a computer program—input (stimulus) is transformed and output (response) results. Psychologists study the structure of these mental programs.

18
Q

Assumptions of Cognitive Approach

A

Mental processes are analyzable into parts. Understanding each component (e.g., encoding, retrieval) helps explain the full behavior.

19
Q

Donders (1868) and Mental Chronometry

A

Measured the time course of mental events by comparing reaction times.
📈 Figure: Stimulus → Detection → Decision → Response.
Used subtraction method to estimate the duration of each stage.

20
Q

Reaction Time Tasks

A

Simple RT: Respond to one stimulus.

Choice RT: Different responses for different colors.

Go/No-Go RT: Respond to some stimuli but not others.
Used to infer processing time between stimulus and response.

21
Q

Subtraction Method

A

Estimates duration of mental stages by subtracting reaction times of tasks differing by one process.
Example: Choice RT (410 ms) – Go/No-Go RT (340 ms) → 70 ms for response selection.
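The subtraction logic can be sketched in a few lines of Python, using the example RTs from this card (the function name is just illustrative):

```python
# Donders' subtraction method: estimate a stage's duration by
# subtracting RTs of two tasks that differ by exactly one stage.

def stage_duration(rt_with_stage, rt_without_stage):
    """Duration of the extra processing stage, in ms."""
    return rt_with_stage - rt_without_stage

choice_rt = 410    # detection + discrimination + response selection
go_no_go_rt = 340  # detection + discrimination only

response_selection = stage_duration(choice_rt, go_no_go_rt)
print(response_selection)  # 70 ms attributed to response selection
```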

22
Q

Hick’s Law

A

Reaction time increases with the number of choices—more possible responses mean longer decision-making time.
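The card states the law qualitatively; the standard quantitative form (the Hick–Hyman law) is logarithmic, RT = a + b·log2(n + 1). A minimal sketch with illustrative coefficients (a and b are not from the card):

```python
import math

def hick_rt(n_choices, a=200.0, b=150.0):
    """Hick-Hyman law: RT = a + b * log2(n + 1).
    a (baseline ms) and b (ms per bit of choice uncertainty)
    are illustrative values, not from the card."""
    return a + b * math.log2(n_choices + 1)

# RT grows with the number of alternatives, but sub-linearly:
for n in (1, 3, 7):
    print(n, hick_rt(n))
```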

23
Q

Information Processing Stages

A

Each stage receives, transforms, and sends information.
📊 Figure: Stimulus → Processing → More Processing → Response.

24
Q

Stages of Memory

A

Encoding, Storage, and Retrieval—the three sequential phases in memory processing.

25
Contributions of Donders and Chronometry
Introduced the idea that mental processes can be measured and occur in stages. Modern neuroimaging still uses similar subtraction-based logic.
26
Overall Summary
Cognitive psychology emerged as the study of how the mind receives, processes, and responds to information. It built on failures of introspection and behaviorism, merging scientific method with internal mental models.
27
Sensation vs. Perception
Sensation = stimulation of sense organs. Perception = selection, organization, and interpretation of sensory input. Perception turns raw sensory data into meaningful experience.
28
Psychophysics
The study of how physical stimuli are translated into psychological experience (e.g., Fechner’s methods relating stimulus intensity to perception).
29
Three Principles of Sensation and Perception
No one-to-one correspondence between physical and psychological reality. Both are active processes. Both are adaptive—they help organisms survive and act on their environment.
30
Distal vs. Proximal Stimulus
Distal stimulus: the actual object in the environment. Proximal stimulus: sensory information reaching receptors (e.g., retinal image). We only have direct access to proximal data and must infer the distal source. 📊 Figure: Distal Stimulus → Proximal Stimulus → Representation → Response.
31
Illusory Contours
Perceptual system fills in missing edges to create shapes that aren’t physically present. Shows perception constructs rather than merely records sensory input.
32
Perception as Induction
Perception infers and fills in gaps when sensory information is incomplete. It uses context and prior knowledge to create a coherent model of the world.
33
Bottom-Up vs. Top-Down Processing
Bottom-up (data-driven): starts from raw sensory input. Top-down (concept-driven): influenced by expectations, memory, and context. 📊 Figure: Sensation → Perception → Cognitive Processes (context, knowledge).
34
Top-Down Example
Ambiguous figure interpreted differently depending on reading direction (e.g., seeing the middle symbol as “B” in A B C vs. “13” in 12 13 14).
35
Parsing the World
Perception divides the visual field into figure and ground—the object of focus (figure) and its background (ground). This segmentation is key for recognition.
36
Gestalt Principles of Organization
Proximity: nearby elements grouped. Similarity: similar items grouped. Good Continuation: lines follow smooth paths. Closure: incomplete shapes are perceived as complete. 🖼️ Figure: Classic Gestalt demonstrations (e.g., circles forming squares).
37
Pattern Recognition
The process of identifying “What is this?” — translating sensory patterns into meaningful objects by matching input to stored representations in memory.
38
Template Matching Theory
Objects recognized by comparing input with stored templates. Works in limited domains (e.g., letter recognition) but unrealistic for variable real-world stimuli.
39
Feature Analytic Approach
Stimuli are broken into features, and objects are recognized by unique combinations of these. Example: “A” = /, \, –, while “V” = /, \. Reduces infinite variation to finite categories.
40
Physiological Evidence for feature detectors: Hubel & Wiesel (1960s)
Recorded neurons in primary visual cortex; discovered feature detectors that respond to lines, edges, and orientations—biological support for feature analysis.
41
Simple Cells
Respond to lines of a specific orientation in a precise location of the visual field.
42
Complex Cells
Respond to oriented lines in motion—less position-specific, sensitive to direction.
43
Hypercomplex (End-Stopped) Cells
Respond to specific orientation, motion, and length; stop firing if a line extends beyond their receptive field. Even more selective than complex cells.
44
Barlow’s Frog Feature Detectors (1953)
Found optic nerve cells selective for: Small fast-moving dots → triggers prey-catching. Large looming dark shapes → triggers escape. Shows early visual neurons are functionally specialized for adaptive behavior.
45
Hierarchical Processing in Vision
Visual cortex builds complexity: simple → complex → hypercomplex → object recognition (in monkeys and humans).
46
Feature Analysis in Audition
Speech perception relies on detecting distinctive features of sounds, like: Place of articulation (e.g., /k/ back, /p/ lips, /t/ teeth). Voice Onset Time (VOT) — delay between articulation and vocal cord vibration.
47
Voice Onset Time (VOT)
Unvoiced sounds (/p/, /t/, /k/) have late VOT (~25 ms); voiced sounds (/b/, /d/, /g/) have early VOT (~0 ms). Example: /b/ vs. /p/ differ only in voicing onset.
48
Phoneme Perception
Listeners perceive an abrupt category shift between /ba/ and /pa/ based on VOT, suggesting feature detectors for voicing.
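The abrupt category shift can be caricatured as a simple threshold rule; the ~25 ms boundary follows the VOT values above, though real boundaries vary by language and consonant:

```python
def classify_voicing(vot_ms, boundary_ms=25.0):
    """Toy categorical-perception rule: stops with VOT below the
    boundary are heard as voiced (/b/, /d/, /g/), at or above it
    as unvoiced (/p/, /t/, /k/). The boundary is illustrative."""
    return "voiced" if vot_ms < boundary_ms else "unvoiced"

print(classify_voicing(0))   # /ba/-like percept: voiced
print(classify_voicing(40))  # /pa/-like percept: unvoiced
```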
49
Bottom-Up vs. Top-Down in Perception
Bottom-up = analyze features; Top-down = apply expectations and prior knowledge. Both interact in perception.
50
Word Superiority Effect
Letters are recognized more accurately in words than alone or in random strings. 📊 Figure: % correct highest for “Word” condition (Cattell, Reicher, Wheeler). Shows interaction of bottom-up and top-down processes.
51
Context Effects in Vision
Example: “T E C T” can be read as TEXT depending on context. Context guides recognition when input is ambiguous.
52
Cambridge Jumbled Word Demonstration
People can read jumbled words (e.g., “Aoccdrnig to a rscheearch…”) because perception processes whole words, not isolated letters—context-driven reading.
53
Context in Speech (Auditory Perception)
Pollack & Pickett (1964): isolated words from conversation recognized only 54%; full context raised recognition to near 100%. Shows speech perception relies on context.
54
Phonemic Restoration Effect (Warren, 1970)
Listeners “hear” missing phonemes replaced by noise (e.g., “legi*latures,” where the * marks a cough replacing the /s/) and can’t detect the deletion—the brain fills in missing sounds using context.
55
Samuels (1970) Context in Speech
When phonemes were replaced with noise, subjects couldn’t tell in real words but could in nonwords—context integrates missing info only when meaningful.
56
William James’ Definition of Attention (1890)
Attention is the focalization and concentration of consciousness—taking possession of one object or thought among many possible ones. It involves withdrawing from some things to focus on others.
57
Definition of Attention
The mental process of concentrating effort on an external or internal event. Attention allocates limited cognitive resources toward important information for effective processing.
58
Where Attention Fits in Cognitive Processing
Occurs between sensation, perception, decision/response selection, and response execution, influencing working and long-term memory. 📊 Diagram: Sensation → Perception → Attention → Decision → Response.
59
Metaphors of Attention
Filter – selective attention. Resource pool – divided attention. Spotlight/zoom lens – spatial attention. Each metaphor emphasizes a different way attention operates.
60
Cherry’s Dichotic Listening Task (1953)
Participants shadow one ear’s message while ignoring the other. Findings: noticed changes in gender, tone, or loudness, but not language—suggesting limited processing of unattended input.
61
Cocktail Party Phenomenon
In a noisy room, we can focus on one conversation while ignoring others—illustrating selective attention.
62
Moray (1959) Findings - evidence for an early filter
Even repeated words in the unattended ear (up to 35 times) were not remembered—evidence for an early filter that blocks unattended information before semantic processing.
63
Broadbent’s Early Selection Model (1958)
An early selection model: information passes through a filter that selects one channel for further processing before meaning is analyzed. 📊 Figure: Input → Filter → Detection → Recognition → Response.
64
Name Detection in Unattended Channel (Moray, 1959)
People noticed their own name in the unattended ear 33% of the time—suggesting some semantic leakage through the filter.
65
Treisman’s Attenuation Model (1960)
The filter is leaky, not all-or-none. Unattended information is attenuated (weakened), but meaningful stimuli (like one’s name) can still trigger awareness. 📊 Figure: Input → Imperfect Filter → Detection → Recognition.
66
Treisman’s Switching Evidence (1964)
During shadowing, participants sometimes switched ears mid-sentence (e.g., “I SAW THE GIRL…” continuing in the other ear), showing semantic processing of unattended material.
67
Deutsch & Deutsch Late Selection Model (1963)
All inputs are processed for meaning, but only the most relevant are selected for response. Filtering occurs after semantic analysis. 📊 Figure: Input → Detection → Recognition → Filter for Action → Response.
68
Corteen & Wood (1972) city study
Conditioned subjects to associate city names with mild shocks, then presented those names in the unattended ear. Result: significant GSR responses (38% to conditioned cities), indicating semantic processing of unattended info.
69
Attention as Resource Pools (1970s)
Attention = limited resource pool that can be divided among tasks. Performance trade-offs occur as task difficulty increases—basis for divided attention research.
69
Flexible Selection (Load Theory)
Whether attention acts early or late depends on processing load: High load: early selection (limited capacity). Low load: late selection. Attention is flexible, not fixed at one stage.
70
Dual-Task Studies
Participants perform two tasks simultaneously (primary + secondary). As one task becomes harder, performance on the other declines, showing limited shared attentional capacity.
71
Cell Phones and Driving (Strayer & Johnston, 2001)
Drivers using cell phones (vs. radio) were 4× more likely to crash, missed more red lights, and reacted slower—especially under complex driving conditions.
72
Single vs. Multiple Resource Pools
Single-pool model: one general attention resource. Multiple-pool model (Wickens, 1984): separate resources for modalities (e.g., visual, auditory). Explains why some tasks interfere and others don’t.
73
Spotlight Metaphor of Visual Attention
Attention acts as a movable spotlight focusing on parts of the visual field. Enhances processing of information within its beam.
74
Overt vs. Covert Orienting
Overt: eye movements direct gaze to attended location. Covert: attention shifts without moving eyes (Posner). Both show where attention is directed, but they can be separated.
75
Yarbus (1967) Eye Movement Studies
Recorded eye movements while viewing scenes. Gaze patterns depended on task goals, showing attention and vision are goal-directed.
76
Posner’s Cueing Paradigm (1980)
Participants fixate centrally while a cue indicates target location. Valid cue: faster response (attention directed correctly). Neutral cue: baseline. 📊 Figure: Valid cues reduce reaction time (~40–60 ms).
77
Cueing Effects and Eye Movements
Posner’s paradigm shows attention shifts without eye movements. Eye movements and attention are correlated but separable processes.
78
Rubber Band Model of Attention & Eye Movements
Attention moves first to a new location; eyes follow ~100–200 ms later. Attention then shifts again, leading eye movement in a linked but flexible way.
79
Can Eyes Move Without Attention?
No — attention must precede eye movement. You can move attention without moving eyes, but not the reverse.
80
Inhibition of Return (IOR) – Posner & Cohen (1984)
After attention leaves a location, returning to it is temporarily inhibited—slower responses occur for previously attended spots. 📊 Figure: Valid location → Return → Slower RT.
81
Purpose of Inhibition of Return
Prevents attention from revisiting old locations; biases the visual system toward exploring novel stimuli—adaptive for efficient scanning.
82
Object vs. Location in IOR (Tipper, Driver, & Weaver, 1991)
IOR occurs for both objects and locations. When objects move, inhibition can travel with the object, showing attention can be object-based as well as spatial.
83
Three Major Roles of Attention
Filter: selects important info, blocks distractions. Resource: allocates limited mental effort. Spotlight: shifts across space, enhancing selected areas.
84
What are the main models of attention covered in the previous class?
Filter (Selective Attention): Proposes that only certain information passes through an attentional “filter.” Resource Pools (Divided Attention): Suggests that attentional capacity is limited and shared among tasks. Spotlight/Zoom Lens (Spatial Attention): Attention can be directed spatially like a beam of light, focusing on specific locations.
85
What are the main topics addressed in “Attention II”?
Constraining spotlight/zoom lens Controlled vs. automatic processing Visual search Sustained attention
86
What does the “spotlight” metaphor of visual attention describe?
It describes how attention acts like a beam of light, selectively illuminating parts of the visual field. Information within the spotlight is processed more efficiently, while information outside receives less processing.
87
What is meant by the “constraining spotlight”?
It refers to attention’s ability to narrow or widen its focus — zooming in on specific items or zooming out to encompass broader areas.
88
How is the constraining spotlight tested experimentally?
Using compatibility paradigms, such as the flanker task, where participants must respond to a central target while ignoring surrounding distractors (e.g., “HHSHH” vs. “SSSSS”).
89
What is the Flanker Effect?
Reaction times are slower for incompatible trials (e.g., HHSHH) compared to compatible trials (e.g., HHHHH), showing that nearby irrelevant information interferes with processing. Figure interpretation: Reaction time (RT) increases from compatible (~520 ms) to incompatible (~660 ms) trials. Indicates that attention must be constrained to the central target to avoid interference.
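The interference cost in the figure is simply the RT difference between conditions, using the card’s illustrative values:

```python
# Flanker interference = incompatible RT minus compatible RT (ms).

def flanker_interference(rt_incompatible, rt_compatible):
    return rt_incompatible - rt_compatible

# Approximate values read off the card's figure:
print(flanker_interference(660, 520))  # 140 ms interference cost
```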
90
What did Gratton et al. (1988) show about constraining the attentional spotlight?
Fast trials: Attention was already constrained — little flanker interference. Slow trials: It took time to narrow attention to the center letter — more interference. This supports the idea that attention starts diffuse and becomes focused over time.
91
What is the “flexible spotlight” model of attention?
Attention can take different shapes, not just a circular spotlight — for example, a “doughnut shape” (attending to outer areas while ignoring the center). Based on Müller & Hübner (2002), this shows attention can be flexibly allocated across space.
92
What are the main differences between automatic and controlled processing?
Automatic: Involuntary, fast, effortless, developed through practice, requires little attention. Controlled: Voluntary, slow, effortful, requires attentional resources, open to introspection.
93
What did Posner & Snyder (1975) conclude about automatic vs. controlled processes?
They proposed a continuum rather than a dichotomy. Automatic processing occurs without intention and doesn’t consume resources, while controlled processing requires effort, attention, and can both facilitate and inhibit responses.
94
How does the Stroop task demonstrate automatic vs. controlled processing?
Task: Name the color of ink while ignoring the written word (e.g., the word “RED” printed in blue ink). Findings: Reaction times are slower for incongruent trials than for neutral trials, showing automatic word reading interferes with controlled color naming. Interpretation: Word reading is automatic, color naming is controlled. Figure interpretation: Neutral trials ≈ 750 ms Incongruent trials ≈ 900 ms
95
What are the two stages of visual processing according to attention research?
Preattentive processing: Fast, automatic, parallel, unlimited capacity. Operates across the whole visual field. Detects simple features like color, orientation, or shape. Attentive processing: Slow, serial, limited capacity. Used to integrate multiple features or analyze complex patterns.
96
What is the difference between feature search and conjunction search?
Feature search: The target differs by a single feature (e.g., red “L” among blue “L”s). Fast, automatic, parallel, unaffected by distractor number. Conjunction search: The target differs by a combination of features (e.g., red “L” among blue “L”s and red “T”s). Slow, controlled, serial, affected by distractor number. Figure interpretation: Feature search RT: Flat slope → independent of set size. Conjunction search RT: Increases with number of distractors.
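The flat-versus-rising slopes can be sketched as a toy linear RT model (base RT and slope values are illustrative, not from the card):

```python
def search_rt(set_size, base_ms=450.0, slope_ms_per_item=0.0):
    """Toy linear model of visual-search RT.
    Feature search: slope ~0 ms/item (parallel, flat).
    Conjunction search: positive slope (serial scanning)."""
    return base_ms + slope_ms_per_item * set_size

feature = [search_rt(n, slope_ms_per_item=0) for n in (4, 8, 16)]
conjunction = [search_rt(n, slope_ms_per_item=25) for n in (4, 8, 16)]
print(feature)       # flat: RT unaffected by set size
print(conjunction)   # rises with each added distractor
```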
97
What does Treisman’s Feature Integration Theory propose?
Early in perception, features (color, orientation, size, etc.) are processed in parallel and independently across the visual field. Attention is required to bind these features together into coherent object representations. Binding occurs only within the focus of attention and operates slowly and serially.
98
How does Treisman’s theory explain feature and conjunction searches?
Feature search: Detecting a single feature (like color) uses preattentive processing — no binding required. Conjunction search: Requires linking multiple feature maps (e.g., color + form) through focused attention, which takes more time.
99
What does Feature Integration Theory predict about targets defined by the absence of a feature?
Searching for a target defined by absence (e.g., “no color”) is more difficult than searching for one with a distinct feature, because the absence doesn’t produce a unique activation in a feature map.
100
What is attentional capture?
When irrelevant but noticeable stimuli automatically attract attention, even when they are not task-relevant. This leads to slower reaction times to the actual target. Figure interpretation: RT increases significantly when a distractor is present (≈ 1000 ms vs. 800 ms when absent).
101
What is sustained attention (vigilance)?
The ability to maintain focus on a task over extended periods. Performance typically declines over time due to fatigue or mind-wandering. Figure interpretation: Reaction time (RT) increases across blocks (e.g., from 320 ms to 400 ms). Mind wandering rate increases steadily across blocks (~0.5 → 4.0).
102
What are the orienting reflex and habituation (Sokolov, 1963)?
Orienting reflex: The automatic redirection of attention toward novel stimuli. Habituation: Decrease in orienting response after repeated exposure to the same stimulus. When a new stimulus appears, the orienting reflex reactivates.
103
What is sensory memory?
A short-lived “sensory trace” that remains after a stimulus is gone, allowing brief retention of sensory information for integration into perception and memory.
104
Why is sensory memory necessary?
It allows for the temporary retention of sensory information so that successive stimuli can be integrated into a coherent perceptual experience.
105
Which sensory modalities are most studied in sensory memory research?
Visual (iconic memory) and auditory (echoic memory).
106
How long does sensory memory typically last?
On the order of milliseconds to a few seconds, depending on the modality.
107
What is iconic memory?
The brief visual sensory memory that stores a snapshot of visual input after a stimulus is gone.
108
What did Sperling want to know about iconic memory?
How much information can be retained in visual sensory memory and how quickly it is lost.
109
Describe the setup of Sperling’s iconic memory experiment.
Participants fixated on a cross, then briefly (50 ms) viewed an array of 12 letters (3x4 grid). Afterward, a tone (high, medium, low) cued which row to report.
110
What was the “whole report” condition?
Participants tried to recall all letters in the array.
111
What were the results of the whole report condition?
Participants reported only about 4.5 items, though they felt they had briefly seen more.
112
What was the “partial report” condition?
Participants were cued to recall only one row of letters after the stimulus disappeared.
113
What did the partial report results show?
Performance was much better than in the whole report, implying that most of the array was briefly available in sensory memory before it faded.
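The partial-report inference works by extrapolation: whatever fraction of a cued row is reported must have been available for every row. A sketch with an assumed per-row score (the 3.3 figure is illustrative, not from the card):

```python
# Sperling's partial-report logic: since the cue arrives only after
# the display is gone, performance on any cued row estimates what
# was available across the whole array.

rows, letters_per_row = 3, 4     # 3x4 array from the card (12 letters)
reported_per_cued_row = 3.3      # assumed partial-report average

estimated_available = reported_per_cued_row * rows
print(round(estimated_available, 1))  # ~9.9 of 12 letters available
# versus only ~4.5 items reported in the whole-report condition
```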
114
What did Sperling conclude about visual sensory memory?
The visual system retains nearly all the visual information for a very brief time (~800 ms to 1 s), but it decays rapidly.
115
Why is the partial report superior to the whole report?
Because saying letters aloud takes time, and during that time, the sensory trace decays — partial report accesses the trace before it fades.
116
How long does iconic memory last?
Roughly 800 milliseconds to 1 second.
117
What type of memory store is iconic memory considered?
A rapidly decaying visual store.
118
What problem was identified in Sperling’s interpretation?
Some errors were phonological (e.g., reporting “E” instead of “P”), suggesting that sensory memory may interact with phonological coding, not purely visual storage.
119
What alternative explanation exists for the iconic trace?
It could be due to residual neural activity rather than an active memory store.
120
What does Sperling’s figure illustrate?
A visual array (e.g., X L K G / H Q P Y / E L D F) followed by cued recall from a specific row, demonstrating how visual afterimages support brief retention.
121
What is echoic memory?
The auditory counterpart to iconic memory; a short-lived store that briefly retains auditory information.
122
What kind of tasks are used to study echoic memory?
Serial recall tasks, where participants recall a sequence of auditory stimuli in order.
123
What is “serial position” in serial recall tasks?
The position of an item in the sequence, used to plot recall accuracy across the list.
124
Define “primacy effect.”
Superior recall for the first items in a list.
125
Define “recency effect.”
Superior recall for the last items in a list.
126
What is the modality effect?
The finding that the recency effect is larger when items are heard rather than seen.
127
What does the modality effect graph show?
Accuracy is higher at the end of the list (recency region) for auditory presentation compared to visual presentation.
128
What does the “Modality Effect” figure illustrate?
A serial position curve showing higher recall (proportion correct) for the last few items in the auditory condition than in the visual condition.
129
What is the suffix effect?
When an extra sound (the “suffix”) is added at the end of an auditory list, it reduces or eliminates the recency effect.
130
Give an example of a suffix effect experiment.
Participants hear a list like “absence, hollow, pupil...helmet, zero.” If told to recall after “zero,” the final item’s recall is reduced.
131
What determines whether a suffix interferes with memory?
The degree to which the suffix resembles speech or is interpreted as linguistic.
132
What does the suffix effect graph show?
With no suffix, recall accuracy stays high for the last list item; with a speech-like suffix, recency drops sharply.
133
What did Crowder and Morton propose about echoic memory?
That the modality and suffix effects arise because recall depends on information temporarily held in echoic memory (~2 seconds).
134
What happens to recency when echoic memory is disrupted?
The recency effect diminishes because the auditory trace is replaced or interfered with.
135
What does the “Monkey” figure represent?
Diagrams showing that auditory words like “monkey” are retained longer than visually presented words, and that adding a suffix like “zero” disrupts the final auditory trace.
136
What did Ayres (1979) find about the suffix effect?
The suffix effect only appears when the suffix is interpreted as speech — not for non-speech sounds.
137
What evidence supports that the suffix effect is linguistic rather than purely acoustic?
Signers and lip-readers show suffix effects when suffixes are perceived as linguistic gestures.
138
What are the two major types of sensory memory?
Iconic (visual) and echoic (auditory).
139
What did Sperling’s and Crowder’s studies demonstrate?
Sperling: rapid decay of visual information; Crowder: brief retention and interference of auditory information.
140
What question remains about sensory memory?
Whether other sensory modalities (like touch or smell) have analogous short-term stores.
141
What is mental imagery?
Experiencing a sensory impression in the absence of actual sensory input.
142
What are examples of mental imagery?
Visualizing how many windows are in your house or picturing an elephant’s ears.
143
What four cognitive problems does imagery pose?
Generation: How are images created? Inspection: How can information be accessed from an image? Maintenance: How are images kept active despite fading? Transformation: How can we mentally manipulate images?
144
What was the task in Shepard’s mental rotation study?
Decide whether two rotated 3D figures are the same or mirror images.
145
What did Shepard’s mental rotation study show?
Reaction time increased linearly with the degree of rotation — as if participants mentally rotated one object to match the other.
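The linear relationship can be written RT = intercept + slope × angle; the coefficients below are illustrative (Shepard and Metzler’s data imply mental rotation at roughly 50–60° per second):

```python
def rotation_rt(angle_deg, intercept_ms=1000.0, ms_per_degree=17.0):
    """Linear RT model implied by mental-rotation data: RT grows
    proportionally with angular disparity between the two figures.
    Intercept and slope are illustrative values."""
    return intercept_ms + ms_per_degree * angle_deg

# Equal angular steps add equal time, i.e., the relation is linear:
print(rotation_rt(60) - rotation_rt(0))    # time added by first 60 deg
print(rotation_rt(120) - rotation_rt(60))  # same increment again
```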
146
What did the mental rotation figure show?
A set of rotated 3D objects and a line graph with reaction time rising proportionally to angular difference.
147
Describe Kosslyn’s image scanning experiment.
Participants memorized an object, then mentally “looked” from one part to another and judged whether a named part was present.
148
What did Kosslyn’s results show?
Reaction time increased with the distance between the parts — suggesting mental “scanning” occurs like real perceptual travel.
149
What do Kosslyn’s findings support?
That mental images have spatial properties similar to perception.
150
What is a depictive representation?
A representation that preserves visual/spatial relations — like an internal picture.
151
What are the limitations of depictive representations?
They cannot easily represent abstract or non-picturable concepts, and they depict specific instances, not general categories.
152
What is propositional representation?
Information represented as abstract relationships (a string of true statements) (e.g., “ON(Ball, Box)”), not tied to imagery.
153
Why do some researchers prefer propositional theories?
They fit better with computational models of cognition and can represent abstract relationships efficiently.
154
What was Pylyshyn’s (1981) main argument?
People follow “tacit knowledge” — they behave as if they’re scanning images, but the underlying representation is propositional.
155
What is the “tacit knowledge explanation”?
Participants know how perception works and simulate it, so imagery effects reflect expectations, not true spatial representation.
156
How do depictive theorists respond?
That imagery preserves spatial relations and activates visual brain areas like the visual cortex, indicating “re-perception.”
157
What brain evidence supports depictive imagery?
LeBihan et al. (1993): fMRI showed similar activation in visual cortex during both visual perception and imagination. Kreiman et al. (2000): Single neurons responded to both seeing and imagining the same objects.
158
What did LeBihan’s figure show for viewed vs imagined contexts?
Overlapping activation in occipital cortex when participants viewed vs. imagined a complex display.
159
What did Laeng & Sulutvedt find about pupils and imagery?
Pupils constrict when imagining bright scenes and dilate when imagining dark ones — even without actual light changes.
160
What does this finding suggest?
Imagery triggers physiological responses similar to real perception.
161
What does the “Pupil and Imagery” figure show?
Participants’ pupils changing size in response to imagined brightness levels while viewing a blank screen.