Operant Conditioning Flashcards

(230 cards)

1
Q

In classical conditioning, how is behavior controlled?

A

By the stimuli that precede it.

2
Q

In Pavlov’s experiment, what stimulus originally elicited salivation?

A

Food.

3
Q

After conditioning in Pavlov’s experiment, what stimulus elicited salivation?

A

The tone.

4
Q

Why is classical conditioning considered reflexive?

A

Because presenting the stimulus automatically elicits the response.

5
Q

Do all behaviors occur reflexively, like classical conditioning?

A

No. Many behaviors are goal-directed and influenced by consequences.

6
Q

Give examples of goal-oriented behaviors.

A

Going to a restaurant for food, turning on the radio for music, asking someone on a date hoping they accept.

7
Q

What happens when a behavior does not produce the desired outcome?

A

The behavior is unlikely to be repeated.

8
Q

What are operant behaviors?

A

Behaviors influenced by their consequences.

9
Q

Why is it called operant conditioning?

A

Because the response operates on the environment to produce a consequence.

10
Q

What is another name for operant conditioning?

A

Instrumental conditioning.

11
Q

Why is operant conditioning called instrumental conditioning?

A

Because the response is instrumental in producing the consequence.

12
Q

What did Thorndike believe about anecdotes of animal intelligence?

A

He argued anecdotes are unreliable and not scientific evidence of animal intelligence.

13
Q

Why did Thorndike criticize stories of lost pets finding their way home?

A

Because people notice and report rare successes, but ignore the many times animals fail.

14
Q

What was Thorndike’s criticism of some contemporary researchers like George John Romanes?

A

They relied on anecdotes and assumed animals used human-like reasoning without systematic evidence.

15
Q

How did Thorndike believe animal intelligence should be assessed?

A

Through systematic investigation and experimentation, not anecdotes.

16
Q

What apparatus did Thorndike use to study animal learning?

A

Puzzle boxes in which cats had to escape to reach food.

17
Q

How did cats first behave when placed in Thorndike’s puzzle box?

A

They engaged in random behaviors until they accidentally stepped on the treadle to escape.

18
Q

What happened to cats’ performance in Thorndike’s puzzle box over repeated trials?

A

They gradually escaped more quickly, showing slow improvement rather than sudden insight.

19
Q

What did the gradual improvement in cats’ escape times suggest?

A

Learning was due to trial and error, not flashes of insight.

20
Q

What is Thorndike’s Law of Effect?

A

Behaviors producing satisfying outcomes are strengthened (“stamped in”), while those producing annoying outcomes are weakened (“stamped out”).

21
Q

According to the Law of Effect, what determines if a behavior will be repeated?

A

Whether its consequences are satisfying or annoying.

22
Q

Why is Thorndike’s Law of Effect significant in psychology?

A

It was a hallmark principle in learning theory and a foundation for operant conditioning.

23
Q

Who later expanded the implications of Thorndike’s Law of Effect for behavior change?

A

B. F. Skinner.

24
Q

What apparatus did Skinner invent for studying operant conditioning?

A

The operant conditioning chamber, commonly called the Skinner box.

25
In a standard Skinner box for rats, how does the rat earn food?
By pressing a response lever or bar to receive food pellets.
26
What is the “free operant procedure” in Skinner’s research?
A method where the animal freely emits a behavior (e.g., lever pressing) at any rate without being forced to respond at a specific time.
27
How does the free operant procedure differ from maze learning?
In maze learning, the experimenter initiates each trial, whereas in the Skinner box, the animal responds freely under controlled contingencies.
28
What device did Skinner design for pigeons?
An operant chamber where pigeons peck an illuminated plastic disc (response key) to earn food access.
29
Why did Skinner prefer the Skinner box over other learning procedures?
Because it produced regular, measurable patterns of behavior under controlled conditions.
30
What two categories of behavior did Skinner distinguish?
Respondent behavior and operant behavior.
31
What is respondent behavior, according to Skinner?
Involuntary, reflex-type behavior controlled by stimuli that precede it; often classically conditioned.
32
What is operant behavior, according to Skinner?
Voluntary-like behavior controlled by its consequences rather than preceding stimuli.
33
Whose work on puzzle boxes influenced Skinner’s focus on operant behavior?
Thorndike’s research and his Law of Effect.
34
What was Skinner’s lifelong research focus?
Investigating principles of operant conditioning and applying them to human behavior.
35
How did Skinner redefine Thorndike’s Law of Effect?
He avoided mentalistic terms like “satisfying” or “annoying” and focused on how consequences affect the future probability of behavior.
36
Why did Skinner avoid terms like “satisfying” or “annoying”?
Because they describe internal states inferred from behavior, which he considered unscientific speculation.
37
How does operant conditioning resemble natural selection?
Just as adaptive traits become more frequent across generations, adaptive behaviors that lead to favorable outcomes become more frequent within an organism’s lifetime.
38
What are the three components of operant conditioning?
(1) A response that produces a consequence, (2) the consequence that increases or decreases the response’s future probability, and (3) a discriminative stimulus signaling that the consequence is available.
39
Give an example of a response-consequence relationship in operant conditioning.
A rat presses a lever (response) and receives a food pellet (consequence), increasing lever-pressing in the future.
40
What is a discriminative stimulus in operant conditioning?
A signal that a particular response will produce a certain consequence, e.g., a tone indicating that lever pressing will yield food.
41
What is an operant behavior?
A class of emitted responses that produce specific consequences affecting their future frequency.
42
What does “emitted” mean in operant conditioning?
Emitted behaviors are produced by the organism rather than elicited by a stimulus; they appear more voluntary and flexible than reflexive behaviors.
43
How is the sense of voluntariness in operant behavior explained?
It may be an illusion, since behavior is controlled by its consequences rather than true free choice.
44
Why are operants defined as classes of responses?
Because predicting a class of responses (e.g., lever pressing) is easier than predicting the exact form of a single response.
45
Give an example of an operant response class.
Lever pressing—whether done hard or soft, fast or slow, with left or right paw—all produce the same consequence of food.
46
What human example illustrates an operant class of responses?
Telling jokes—many different jokes or delivery styles can produce laughter, so all belong to the operant class of joke-telling.
47
In behaviorism, what term is typically used to refer to a specific instance of a behavior?
Response.
48
In operant conditioning, what is the role of a consequence?
A consequence either increases/strengthens or decreases/weakens the frequency of a behavior.
49
What is a reinforcer?
A consequence that follows a behavior and increases its future frequency.
50
What is a punisher?
A consequence that follows a behavior and decreases its future frequency.
51
What two conditions must be met for an event to be considered a reinforcer?
(1) It follows a behavior, and (2) the future frequency of that behavior increases.
52
What two conditions must be met for an event to be considered a punisher?
(1) It follows a behavior, and (2) the future frequency of that behavior decreases.
53
What does the symbol 'SR' represent in operant conditioning diagrams?
Reinforcing stimulus (a reinforcer).
54
What does the symbol 'SP' represent in operant conditioning diagrams?
Punishing stimulus (a punisher).
55
What does the symbol 'R' represent in operant conditioning diagrams?
The operant response.
56
Give an example of an operant response reinforced by an SR.
A rat presses a lever (R) → food pellet (SR), which increases lever pressing.
57
Give an example of an operant response punished by an SP.
Jonathan tells a joke (R) → person frowns (SP), which decreases joke-telling.
58
From a behavior analysis perspective, is it correct to say an animal has been reinforced or punished?
No, only the behavior is reinforced or punished, not the animal itself.
59
Why is it better to say 'your behavior displeases me' rather than 'you displease me'?
Because it emphasizes that consequences affect behavior, which can change, rather than the person.
60
What do the terms 'reinforcer' and 'punisher' refer to?
The specific consequence used to strengthen or weaken a behavior.
61
What do the terms 'reinforcement' and 'punishment' refer to?
The process or procedure by which a consequence changes the strength of a behavior.
62
Example: Food pellet as a consequence — is it a reinforcer or reinforcement?
The food pellet itself is a reinforcer; the process of using it to increase lever pressing is reinforcement.
63
Example: Frown as a consequence — is it a punisher or punishment?
The frown itself is a punisher; the process of using it to decrease joke-telling is punishment.
64
How are reinforcers and punishers formally defined?
Entirely by their effect on behavior, not by how pleasant or unpleasant they seem.
65
When is a laugh considered a reinforcer for joke-telling?
Only if joke-telling increases after the laugh.
66
If a laugh leads to a decrease in joke-telling, what is it formally defined as?
A punisher.
67
Why might a teacher’s yelling at disruptive students act as a reinforcer rather than a punisher?
Because it increases disruption if students enjoy the attention or admiration it brings.
68
Why do behaviorists prefer the term 'reinforcer' over 'reward'?
'Reward' implies something pleasant, but reinforcers are defined by their effect, not by pleasantness.
69
Can 'reward' and 'reinforcer' sometimes be used interchangeably?
Yes, some behaviorists do this informally, but strict definitions avoid 'reward.'
70
Why is it important to check with your professor about terminology like 'reward'?
Some courses may accept informal usage, while others require strict behavioral definitions.
71
Besides punishment, what is another way a behavior can be weakened?
Through extinction — the withdrawal of reinforcement for a previously reinforced behavior.
72
Define extinction in operant conditioning.
The weakening of behavior by no longer reinforcing it.
73
Give an example of extinction with a child in a supermarket.
A child whining for candy stops when whining no longer results in candy.
74
Give an example of extinction with offensive jokes.
Someone stops telling offensive jokes if roommates no longer react with outrage.
75
How does extinction compare to punishment?
It is usually gentler but slower.
76
What type of reinforcement schedule is illustrated by a bakery clerk waiting for customers to arrive?
Variable Interval (VI). The operant response (waiting) is reinforced when a customer enters, but arrivals are unpredictable and based on time passing, not the clerk’s behavior. Because customers may come in clusters or with long gaps, reinforcement occurs after variable time intervals.
77
What are the three essential components of the operant conditioning procedure?
Operant response (R), consequence (reinforcer or punisher, SR), and sometimes an antecedent stimulus (SD).
78
What is a discriminative stimulus (SD) in operant conditioning?
An SD is a stimulus in the presence of which responses will lead to consequences and in the absence of which they will not. It signals that a response will be followed by a reinforcer or punisher.
79
Example — If a rat only receives food when pressing a lever while a tone is sounding, what is the tone?
The tone is the discriminative stimulus (SD).
80
Example — Jonathan is more likely to tell jokes when Susan is around because she laughs at them. What is Susan in this situation?
Susan is the discriminative stimulus (SD) for Jonathan’s joke-telling behavior.
81
Do discriminative stimuli elicit behavior in the same way as classical conditioning stimuli?
No. In operant conditioning, an SD does not elicit behavior but increases the probability that the operant response will occur. The organism emits the behavior.
82
What is the difference between “elicit” and “emit” in conditioning?
Classical conditioning → stimulus elicits behavior. Operant conditioning → organism emits behavior in the presence of an SD.
83
What is a three-term contingency?
The relationship between antecedent (SD), behavior (R), and consequence (SR). It can be remembered as ABC: Antecedent, Behavior, Consequence.
84
How can the three-term contingency (ABC) be simplified?
Notice something (A), do something (B), get something (C).
85
Example — “It’s 7 p.m., you turn on the TV, and you enjoy your favorite sitcom.” Label the ABC.
Antecedent = 7 p.m. (SD), Behavior = turn on TV (R), Consequence = enjoy sitcom (SR).
86
Example — “Dog sees you have popcorn, begs, and is given some.” Label the ABC.
Antecedent = presence of popcorn (SD), Behavior = begging (R), Consequence = receiving popcorn (SR).
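The ABC cards above can be made concrete as a tiny data structure. This is an illustrative sketch only; the class and field names are invented here, not taken from the source:

```python
# Minimal sketch of the three-term contingency as a record:
# Antecedent (SD) -> Behavior (R) -> Consequence (SR).
from dataclasses import dataclass


@dataclass
class ThreeTermContingency:
    antecedent: str   # A: notice something (discriminative stimulus, SD)
    behavior: str     # B: do something (operant response, R)
    consequence: str  # C: get something (reinforcer, SR)

    def as_abc(self) -> str:
        """Render the contingency in the A -> B -> C shorthand."""
        return f"A: {self.antecedent} -> B: {self.behavior} -> C: {self.consequence}"


# The two examples from the surrounding cards:
tv = ThreeTermContingency("7 p.m.", "turn on TV", "enjoy sitcom")
dog = ThreeTermContingency("sees popcorn", "begs", "receives popcorn")
print(tv.as_abc())
print(dog.as_abc())
```

The record simply mirrors the "notice something, do something, get something" mnemonic from the cards.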
87
What is a discriminative stimulus for punishment (SDP)?
A stimulus that signals that a response will be punished. Example: A cat sees a water bottle (SDP) and learns that meowing will lead to being sprayed.
88
Example — A driver slows down when they see a police car because speeding leads to a fine. What is the police car?
The police car is a discriminative stimulus for punishment (SDP).
89
What is a discriminative stimulus for extinction (SΔ)?
A stimulus that signals the absence of reinforcement. Responses in the presence of SΔ will not be reinforced.
90
Example — A rat gets food for lever pressing when a tone is on, but no food when a buzzer sounds. What is the buzzer?
The buzzer is a discriminative stimulus for extinction (SΔ).
91
Can a stimulus act as both a discriminative stimulus (SD) and a conditioned stimulus (CS)?
Yes. Example: A tone is an SD for lever pressing → food (operant). At the same time, the tone becomes a CS for salivation (classical).
92
Key difference: Does a discriminative stimulus elicit or signal a behavior?
It signals reinforcement availability. In operant conditioning, the organism emits behavior; in classical conditioning, the stimulus elicits behavior.
93
Classical conditioning: How is behavior defined?
Involuntary, inflexible, elicited by the stimulus. Behavior is a function of what precedes it. Sequence = S → S → R.
94
Operant conditioning: How is behavior defined?
Voluntary, flexible, emitted by the organism. Behavior is a function of its consequences. Sequence = S → R → S.
95
What is the key question to ask to determine if conditioning is classical or operant?
Is the behavior mainly a function of what comes before it (classical), or what follows it (operant)?
96
What does 'Phone rings' represent in the three-term contingency?
Phone rings = SD (discriminative stimulus)
97
What does 'Answer phone' represent in the three-term contingency?
Answer phone = R (operant response)
98
What does 'Conversation with friend' represent in the three-term contingency?
Conversation with friend = SR (reinforcer)
99
What are the four basic types of operant conditioning contingencies?
Positive reinforcement, Negative reinforcement, Positive punishment, Negative punishment.
100
In operant conditioning, what does “positive” mean?
Something is presented or added after the behavior (does not mean “good”).
101
In operant conditioning, what does “negative” mean?
Something is removed or subtracted after the behavior (does not mean “bad”).
102
What does “reinforcement” mean in operant conditioning?
A consequence that strengthens behavior (increases likelihood of future occurrences).
103
What does “punishment” mean in operant conditioning?
A consequence that weakens behavior (decreases likelihood of future occurrences).
104
How do you determine which type of contingency is present? (2-step test)
1) Is something added (positive) or removed (negative)? 2) Does the consequence strengthen (reinforcement) or weaken (punishment) the behavior?
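The two-step test lends itself to a small lookup. A minimal sketch (the function name is invented for illustration) that maps the answers to the two questions onto the four contingencies:

```python
def classify_contingency(stimulus_change: str, behavior_change: str) -> str:
    """Apply the two-step test from the card.

    stimulus_change: "added" (positive) or "removed" (negative)
    behavior_change: "increase" (reinforcement) or "decrease" (punishment)
    """
    # Step 1: was something added (positive) or removed (negative)?
    sign = {"added": "Positive", "removed": "Negative"}[stimulus_change]
    # Step 2: did the behavior strengthen (reinforcement) or weaken (punishment)?
    process = {"increase": "reinforcement", "decrease": "punishment"}[behavior_change]
    return f"{sign} {process}"


# The four worked examples from the neighboring cards:
print(classify_contingency("added", "increase"))    # praise after studying
print(classify_contingency("removed", "increase"))  # aspirin removes headache
print(classify_contingency("added", "decrease"))    # speeding fine
print(classify_contingency("removed", "decrease"))  # losing TV privileges
```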
105
Example — You study and receive praise from your teacher. What type of contingency is this?
Positive reinforcement (praise is added, behavior strengthened).
106
Example — You take aspirin and your headache goes away. What type of contingency is this?
Negative reinforcement (headache removed, behavior strengthened).
107
Example — You speed and receive a fine. What type of contingency is this?
Positive punishment (fine added, behavior weakened).
108
Example — You fight with your sibling and lose TV privileges. What type of contingency is this?
Negative punishment (TV time removed, behavior weakened).
109
What symbols can be used for the four contingencies?
SR+ (positive reinforcement), SR– (negative reinforcement), SP+ (positive punishment), SP– (negative punishment).
110
What is positive reinforcement?
The presentation of a stimulus (usually pleasant or rewarding) following a response, which increases the future strength of that response.
111
Example of positive reinforcement in an animal lab?
Lever pressing → food pellet. Behavior increases because food is presented.
112
Why is it called “positive” reinforcement?
Because something is added or presented after the behavior (not because it is good).
113
What is negative reinforcement?
The removal of a stimulus (usually unpleasant or aversive) following a response, which increases the future strength of that response.
114
Example of negative reinforcement in an animal lab?
Lever pressing → shock ends. Behavior increases because shock is removed.
115
Why is it called “negative” reinforcement?
Because something is taken away after the behavior (not because it is bad).
116
Give real-life examples of negative reinforcement.
Taking aspirin → removes headache; Opening umbrella → blocks the rain; Turning on heater → escapes the cold; Pretending to be sick → avoids an exam.
117
How can turning on a heater be both positive and negative reinforcement?
It can be interpreted as escaping the cold (negative reinforcement) or obtaining warmth (positive reinforcement).
118
What are the two types of negative reinforcement behaviors?
Escape and Avoidance.
119
What is escape behavior in negative reinforcement?
Behavior that terminates an aversive stimulus after it has started (e.g., taking aspirin to stop a headache, opening umbrella in rain).
120
What is avoidance behavior in negative reinforcement?
Behavior that prevents an aversive stimulus before it occurs (e.g., opening umbrella before rain, faking illness to avoid exam).
133
In operant conditioning, what contingency is illustrated by this example: A dog nips at a person’s hand when they reach toward it. The person quickly pulls their hand back, and the dog continues to nip in the future?
Negative reinforcement (the nipping removes the aversive stimulus of the hand reaching, and the behavior increases).
134
In operant conditioning, what contingency is illustrated by this example: A dog whines at someone’s feet during breakfast. The person feeds the dog, and the dog continues to whine at breakfast in the future?
Positive reinforcement (whining results in food being presented, and the behavior increases).
135
In operant conditioning, what contingency is illustrated by this example: Kyla cries, “I don’t feel like you love me.” Joe hugs her and reassures her. If Kyla cries more often when she feels insecure in the future, what contingency is this?
Positive reinforcement (crying results in a hug and reassurance being added, increasing the likelihood of crying when insecure).
136
In operant conditioning, what type of response terminates an aversive stimulus that has already begun?
Escape response.
137
In operant conditioning, what type of response prevents an aversive stimulus before it occurs?
Avoidance response.
138
What type of reinforcement maintains both escape and avoidance responses?
Negative reinforcement.
139
In operant conditioning, is turning down the heat because you are already too hot an example of escape or avoidance?
Escape response.
140
In operant conditioning, is turning down the heat before you become too hot an example of escape or avoidance?
Avoidance response.
141
What is positive punishment in operant conditioning?
The presentation of an aversive stimulus following a behavior, which decreases the future likelihood of that behavior.
142
Example of positive punishment in an animal lab?
Lever press → shock is delivered. Behavior decreases.
143
Real-life examples of positive punishment?
Talking back to boss → reprimand; Swatting at wasp → sting; Cat meows → sprayed with water.
144
Why is positive punishment sometimes confused with negative reinforcement?
Because people mistakenly call aversive stimuli 'negative reinforcers.' But presenting an aversive stimulus is actually positive punishment, not negative reinforcement.
145
What is negative punishment in operant conditioning?
The removal of a pleasant or rewarding stimulus following a behavior, which decreases the future likelihood of that behavior.
146
Examples of negative punishment?
Stay out past curfew → lose car privileges; Argue with boss → lose job; Tease sister → sent to room (time out).
147
Why is 'time out' an example of negative punishment?
Because the child loses access to positive reinforcers (social contact, play, attention) during the time out interval, reducing the behavior that led to it.
148
Example: Jonathan talks to another woman at a party. His girlfriend ignores him and he stops doing it. What type of punishment occurred?
Negative punishment (her withdrawal of attention removed a positive stimulus, reducing Jonathan’s behavior).
149
In the above scenario, what contingency operates on the girlfriend’s ignoring behavior?
Negative reinforcement. Her ignoring removed Jonathan’s disliked behavior (an aversive event for her), making her more likely to ignore him again in the future.
150
Why is punishment often tempting to use?
Because it is often immediately effective in stopping unwanted behavior, which negatively reinforces the person’s use of punishment.
151
Do behaviorists primarily emphasize punishment in behavior change?
No. Behaviorists, including Skinner (1953), emphasize positive reinforcement. Overuse of punishment and negative reinforcement is seen as a major societal problem.
152
Why might schooling be less effective when students study mainly to avoid failure?
Because the behavior is driven by negative reinforcement (avoiding failure) instead of positive reinforcement (gaining knowledge).
153
Why is complaining ineffective in relationships?
Complaining is punishment, which often fails. Expressing appreciation (positive reinforcement) when Samira does call is more likely to strengthen desired behavior.
154
What ratio of positive to negative interactions predicts a successful marriage?
About 5 to 1 (positive, pleasant exchanges outweighing negative ones).
155
Why do confusing operant contingency examples often arise?
Because they are worded in unusual or indirect ways (e.g., focusing on non-occurrence of behavior).
156
What contingency is: 'If you don’t clean your room, you won’t get dessert.'?
Negative punishment (loss of a positive reinforcer—dessert—if the child does not clean).
157
How can the 'clean your room/dessert' example be reframed more positively?
As positive reinforcement: 'If you clean your room, you can have dessert.'
158
Why is positive reinforcement wording usually better than negative punishment wording?
Positive reinforcement sounds more agreeable, is less aversive, and promotes a more positive atmosphere.
159
What two questions should you ask to identify confusing contingencies?
1) Was something added or removed? (positive/negative). 2) Did the behavior increase or decrease? (reinforcement/punishment).
160
Why must reinforcement and punishment always be defined in terms of future behavior, not present behavior?
Because they describe whether a consequence increases or decreases the future likelihood of a response, not whether it stops in the moment.
161
Why is Jonah's mother giving him the iPad NOT positive punishment?
Because although the tantrum stops in the moment, the iPad is desirable. If giving the iPad increases tantrums in the future, it is positive reinforcement, not punishment.
162
What type of contingency is likely occurring in the tantrum/iPad example?
Positive reinforcement of tantrum behavior (presentation of iPad increases likelihood of future tantrums).
163
Why do some students think braking at a red light is classical conditioning?
Because the behavior seems automatic, as if the red light (CS) elicits braking (CR).
164
What is the correct interpretation of braking at a red light?
It is operant conditioning. The red light is an SD signaling consequences. Braking is reinforced by avoiding tickets or accidents (negative reinforcement).
165
In the braking example, what is the discriminative stimulus, the response, and the reinforcer?
SD = red light, R = braking, SR– = avoiding accident or ticket.
166
In operant conditioning, what contingency is shown here: Alex burps in public, Stephanie gets angry, and Alex burps more often when around her?
Positive reinforcement (Stephanie’s anger was presented, and burping increased).
167
In operant conditioning, what contingency is shown here: Alex holds the door for Stephanie, she fusses over him, and he stops holding the door in the future?
Positive punishment (Stephanie’s fussing was presented, and door-holding decreased).
168
In operant conditioning, what contingency is shown here: Tenzing shares his toys, his mother stops criticizing him, and he shares toys more often?
Negative reinforcement (criticism was removed, and toy-sharing increased).
169
In operant conditioning, which type of reinforcement is more effective: immediate or delayed?
Immediate reinforcement is much more effective than delayed reinforcement.
170
Why is immediate reinforcement generally stronger than delayed reinforcement?
Because the reinforcer is directly associated with the target behavior, reducing the chance of reinforcing another behavior.
171
Example: You give a child a treat several minutes after playing quietly, while they are banging a stick. What might happen?
You may inadvertently reinforce the stick-banging instead of the quiet play.
172
Why is it often difficult to maintain a diet or exercise routine?
Because the benefits are delayed (weak reinforcers), while the pleasures of eating or relaxing are immediate (powerful reinforcers).
173
Why might studying plans fail in favor of watching TV or going out?
Immediate reinforcement from recreational activities outweighs the delayed reinforcement of good grades.
174
How short a delay can reduce the effectiveness of reinforcement in experimental settings?
A delay of even 30 seconds can greatly weaken reinforcement.
175
Do delayed reinforcers function the same way as immediate reinforcers?
Likely not. Delayed reinforcers may work via a different mechanism, often through rule-governed behavior in humans.
176
What is rule-governed behavior?
Behavior controlled by instructions or rules that describe delayed consequences (e.g., “If I study tonight, I’ll get a good grade next week”).
177
Why is immediate reinforcement especially important for young children and animals?
They have little or no language ability, so they cannot use rules to bridge the gap between behavior and delayed consequences.
178
What is a primary (unconditioned) reinforcer in operant conditioning?
An event that is innately reinforcing (unlearned), such as food, water, proper temperature, or sexual contact.
179
How is the effectiveness of many primary reinforcers tied to deprivation?
They are most effective when deprived (e.g., food when hungry, water when thirsty).
180
Can primary reinforcers be psychological rather than physiological?
Yes. Example: A boring environment creates psychological deprivation, so access to stimulation becomes reinforcing.
181
What is a secondary (conditioned) reinforcer?
An event that becomes reinforcing because it has been associated with a primary reinforcer or other secondary reinforcer.
182
Examples of secondary reinforcers in humans?
Good grades, nice clothes, a car, praise.
183
How can classical conditioning create a secondary reinforcer?
Example: Metronome paired with food → becomes a CS. Then metronome can reinforce lever pressing as a secondary reinforcer.
184
How can music serve as a secondary reinforcer?
Music associated with a romantic episode or youth can become reinforcing because of its positive associations.
185
How can discriminative stimuli function as secondary reinforcers?
Example: Tone signals food availability for lever pressing (SD). Later, the tone itself can reinforce another behavior (e.g., running in a wheel).
186
What is a generalized reinforcer (generalized secondary reinforcer)?
A secondary reinforcer associated with many other reinforcers (e.g., money, social attention).
187
Why is money considered a generalized reinforcer?
Because it can be exchanged for almost unlimited reinforcers such as food, clothing, entertainment, and social status.
188
Why is social attention a powerful generalized reinforcer?
It is usually associated with food, play, comfort, and security. Children may even misbehave to gain attention, even if it is negative.
189
How can punishment sometimes backfire because of generalized reinforcers?
Attention intended as punishment (e.g., scolding) may act as positive reinforcement, because attention itself is reinforcing.
190
What is a token economy in behavior modification programs?
A system where tokens are given immediately after desirable behaviors in settings like hospitals, prisons, or classrooms. Tokens are later exchanged for backup reinforcers (treats, activities, privileges).
191
Why do token economies work?
Tokens act as generalized reinforcers, motivating behavior by being exchangeable for a variety of backup reinforcers (similar to how money functions).
192
Can an event function as both a primary and a secondary reinforcer?
Yes. Example: A Thanksgiving dinner is a primary reinforcer (food) and also a secondary reinforcer if associated with pleasant memories of family.
193
Can behaviors themselves become secondary reinforcers?
Yes. Example: Helping others may become reinforcing in itself if repeatedly associated with praise, leading to altruistic behavior.
194
What is learned industriousness theory?
The idea that if effortful behavior is reinforced, hard work itself becomes a secondary reinforcer. Example: Rats reinforced for strong lever presses later run faster for food.
195
What happens if low-effort behavior is reinforced?
Animals and humans may perform poorly on later tasks, showing the opposite of learned industriousness.
196
What is intrinsic reinforcement?
Reinforcement provided by the mere act of performing the behavior itself. Example: Ice skating because it’s fun, solving puzzles because it’s satisfying.
197
What is extrinsic reinforcement?
Reinforcement provided by a consequence that is external to the behavior. Example: Studying to pass an exam, working for money, exercising to look good.
198
Is the distinction between intrinsic and extrinsic reinforcement always clear?
No. Example: Candy is intrinsic when eaten for enjoyment, but extrinsic when given to strengthen another behavior (e.g., being quiet in the supermarket).
199
How can you distinguish intrinsic vs. extrinsic motivation in ambiguous cases?
Focus on the behavior being strengthened. If the behavior itself is enjoyable → intrinsic. If it is performed for an external reward → extrinsic.
200
Example: Taking a bath because it feels relaxing vs. taking a bath to impress a date. Which is which?
Relaxing bath = intrinsic reinforcement. Bath to impress date = extrinsic reinforcement.
201
What is the overjustification effect in operant conditioning?
The idea that giving an extrinsic reward for an already intrinsically enjoyable activity can reduce intrinsic motivation for that activity.
202
Classic study: Lepper, Greene, & Nisbett (1973) found what about children drawing with magic markers?
Children promised and given a “Good Player” award later lost interest in drawing, while children who received the award unexpectedly did not lose interest.
203
When are extrinsic rewards most likely to undermine intrinsic motivation?
When rewards are expected, tangible (e.g., money), and given simply for performing the activity (not for quality).
204
When are extrinsic rewards less likely to harm intrinsic motivation?
When rewards are unexpected, verbal (e.g., praise), or given for high-quality performance.
205
What did Cameron & Pierce (1994) conclude from their meta-analysis of 96 studies?
That extrinsic rewards usually have little or no effect on intrinsic motivation; harmful effects are limited and avoidable.
206
What did Bandura (1997) argue about extrinsic rewards and intrinsic motivation?
That the dangers of extrinsic rewards undermining intrinsic motivation are greatly overstated.
207
What did Deci and colleagues argue in contrast to Cameron & Pierce?
They argued that extrinsic rewards generally undermine intrinsic motivation, supported by their own meta-analysis.
208
What did Cerasoli et al. (2014) conclude about intrinsic vs. extrinsic rewards?
That intrinsic and extrinsic reinforcers are not necessarily in conflict; both contribute to performance.
209
How did Skinner view intrinsic reinforcement in learning?
He argued that well-designed learning programs should make the act of learning and progress intrinsically reinforcing.
210
In Skinner’s 1983 story, how did two students increase their roommate’s appreciation of modern art?
They used positive reinforcement — giving attention only when he looked at art, arranging social praise, gallery invitations, and even incidental rewards. Eventually, he suggested visiting a museum and began painting.
211
What is a natural reinforcer in operant conditioning?
A reinforcer that is the typical, expected consequence of a behavior in that setting. Example: Money for selling merchandise, gold medals for athletic performance.
212
What is a contrived (artificial) reinforcer?
A reinforcer deliberately arranged to modify a behavior, not normally a consequence in that setting. Example: Allowing TV time after completing a study session.
213
Why do behaviorists prefer natural reinforcers over contrived reinforcers?
Natural contingencies produce more effective, lasting behavior patterns. Contrived reinforcers are usually meant to fade as natural consequences take over.
214
Example: A coach praises correct throwing actions in a quarterback. What reinforcer is more important long-term?
The natural consequence of where the ball lands, not the praise.
215
How are intrinsic vs. extrinsic reinforcers different from natural vs. contrived reinforcers?
Intrinsic vs. extrinsic = whether the behavior itself is reinforcing (intrinsic) or requires an external consequence (extrinsic). Natural vs. contrived = whether the reinforcer occurs naturally in the setting or is artificially arranged.
216
Why can it be difficult to classify reinforcers as natural or contrived?
Because some reinforcers are ambiguous. Example: Grades may be natural within a school system or contrived if viewed as artificially imposed.
217
What is the ultimate goal when using contrived reinforcers in behavior modification programs?
To have natural reinforcers (e.g., social approval, intrinsic enjoyment) eventually take over, so the contrived reinforcers can be withdrawn.
218
What is a natural reinforcer in operant conditioning?
A reinforcer that typically occurs for that behavior in that setting (e.g., money for selling merchandise).
219
What is a contrived reinforcer in operant conditioning?
A reinforcer that does not typically occur for that behavior in that setting but is deliberately arranged to modify behavior (e.g., using praise to increase studying).
220
Flipping a switch and the light comes on. Is the light coming on a natural or contrived reinforcer for switch-flipping?
Natural reinforcer (it is the typical consequence of flipping a switch).
221
You thank your roommate for helping with housework to encourage more helping. If it works, what type of reinforcer is 'thank you'?
Contrived reinforcer (deliberately arranged). It is also an extrinsic reinforcer (external to the behavior).
222
In applied behavior analysis, although contrived consequences may be used at first, the hope is that the behavior will eventually become ___ by ___.
Trapped by the natural consequences associated with that behavior.
223
In developing a powerful hockey slap shot, what is the most important consequence: coach’s praise or where the puck goes?
The natural consequence of where the puck goes and how fast it travels.
224
In operant conditioning, what is shaping?
Shaping is the gradual creation of new behavior through reinforcement of successively closer approximations to the desired behavior.
225
How can shaping be used to teach a rat to press a lever?
Reinforce the rat for standing near the lever → facing the lever → touching the lever → placing a paw on the lever → pressing the lever.
226
What procedure did animal trainer Karen Pryor recommend for teaching a dog to catch a frisbee?
First reinforce taking the frisbee from the hand → then jumping to take it → then catching it from a short toss → then from longer and longer throws, with praise each time.
227
Why do trainers often use a clicker or sound in shaping?
The sound has been paired with food to become a secondary reinforcer. It allows immediate reinforcement at a distance and reduces satiation by pairing with food intermittently.
228
Why is shaping considered a fundamental training procedure?
Because it allows complex or novel behaviors to be built step by step when the full behavior does not yet occur naturally.
229
How is shaping used in everyday human learning?
Parents praise early, poor attempts at eating with utensils, dressing, or writing, then gradually expect better performance. Adults also shape skills like driving, cooking, or sports based on feedback and natural consequences.
230
What type of reinforcement often maintains behavior during shaping outside formal training?
Natural consequences of performance (success or failure) provide reinforcement for gradual improvement.