Learning and Memory Flashcards

(51 cards)

1
Q

Functional Behavioral Assessment

A

An FBA is conducted to understand why a behavior occurs so that interventions can be designed to replace it with a more adaptive, functionally equivalent behavior.

Core Components
1. Clarify the Target Behavior
* Define the behavior in observable, measurable terms
* Identify its frequency, duration, intensity, and context
2. Identify Antecedents and Consequences
* Determine what happens before the behavior (antecedents)
* Determine what happens after the behavior (consequences)
* Analyze patterns to infer the function of the behavior (e.g., attention, escape, access to tangibles, sensory stimulation)
3. Identify the Behavior’s Function
* Understand why the behavior persists
* Functions typically fall into two broad categories:
* To obtain something (attention, items, stimulation)
* To avoid/escape something (tasks, demands, discomfort)
4. Select an Alternative Behavior
* Choose a functionally equivalent behavior that meets the same need
* Must be easier, more efficient, or more effective than the target behavior
5. Develop Function‑Based Interventions
* Modify antecedents
* Adjust consequences
* Teach and reinforce the alternative behavior
* Reduce reinforcement for the target behavior

EPPP Cue
* FBA = define behavior → analyze A‑B‑C → identify function → teach a replacement behavior that serves the same function.

2
Q

Functional Analysis (FA)

A

A Functional Analysis (FA) is a systematic, experimental procedure used to test hypotheses about the function of a behavior by manipulating antecedents and consequences.

Core Features
1. Experimental Manipulation
* Conditions are systematically varied (e.g., attention, escape, alone, play/control).
* The psychologist directly tests which contingencies evoke or maintain the behavior.
2. Establishes Functional Relations
* Identifies cause‑and‑effect relationships between environmental variables and the target behavior.
* Considered the gold standard for determining behavioral function.
3. High Internal Validity
* Because variables are manipulated, FA provides strong evidence for the function of behavior.
* Requires training, ethical safeguards, and controlled conditions.

When FA Is Used
* When FBA results are inconclusive
* When high‑stakes decisions require precise functional identification
* When behavior is severe and interventions must be highly targeted

EPPP Cue
* FBA = observe and infer
* FA = test and confirm

3
Q

Stimulus Control

(Operant Conditioning)

A

Stimulus control occurs when a behavior is more or less likely to occur depending on the presence of specific environmental cues (discriminative stimuli).
The behavior “comes under the control” of those cues because of a history of reinforcement.

Types of Discriminative Stimuli
1. Positive Discriminative Stimulus (SD)
* Signals that reinforcement is available
* Increases the likelihood that the behavior will occur
* Example: An “Open” sign signals that entering the store will result in service.
2. Negative Discriminative Stimulus (S‑delta, SΔ)
* Signals that reinforcement will NOT be available
* Decreases the likelihood of the behavior
* Example: A “Closed” sign signals that entering the store will not result in service.

How Stimulus Control Develops
* Through repeated experiences where a behavior is reinforced in the presence of the SD
* And not reinforced in the presence of the SΔ
* Over time, the organism learns to respond selectively to the SD.

EPPP Cue
* SD = do the behavior; reinforcement available
* SΔ = don’t bother; reinforcement unavailable

4
Q

Levels of Processing Model
(Craik & Lockhart)

A

Memory performance depends on the depth of processing, not on separate memory stores.
Deeper processing → stronger, more durable memory traces.

Three Levels of Processing
1. Structural (Shallowest)
* Focus on physical features of the stimulus
* Example: What the word looks like
* Produces weak memory retention
2. Phonemic (Intermediate)
* Focus on sound of the stimulus
* Example: What the word sounds like
* Produces moderate retention
3. Semantic (Deepest)
* Focus on meaning of the stimulus
* Example: Thinking about the word’s definition, associations, or use in a sentence
* Produces the strongest and most durable memory

Why Semantic Processing Works Best
* Involves elaboration, integration, and meaning‑based encoding
* Creates richer retrieval cues
* Leads to superior long‑term recall

EPPP Cue
* Depth, not storage → semantic = deepest = best retention.

5
Q

Self-Control Therapy (Rehm)

A

Self‑Control Therapy is a brief, cognitive‑behavioral treatment for depression based on the premise that deficits in three self‑regulatory processes increase vulnerability to depressive symptoms.

Three Components of Self‑Control
1. Self‑Monitoring
* Depressed individuals tend to:
* Attend selectively to negative events
* Focus on short‑term rather than long‑term consequences
* Treatment goal: Increase awareness of positive events and adaptive behaviors.
2. Self‑Evaluation
* Depressed individuals often:
* Use unrealistically high standards
* Make negative attributions about their performance
* Treatment goal: Develop more balanced standards and fair self‑appraisal.
3. Self‑Reinforcement
* Depressed individuals frequently:
* Fail to reward themselves for accomplishments
* Engage in self‑punishment
* Treatment goal: Increase self‑reinforcement and reduce self‑punitive behaviors.

Therapeutic Focus
* Shift attention toward positive behaviors and outcomes
* Modify self‑evaluation criteria
* Strengthen self‑reward patterns
* Reduce depressive symptoms by improving self‑regulation

EPPP Cue
* Depression = deficits in self‑monitoring, self‑evaluation, and self‑reinforcement.

6
Q

Insight Learning (Köhler)

A

Insight learning is the sudden realization of how elements in a problem fit together — the classic “aha!” experience.
It reflects a shift in understanding rather than gradual trial‑and‑error learning.

Key Features
* Suddenness: The solution appears abruptly after a period of cognitive reorganization.
* Understanding of Relationships: The learner grasps how components of the problem connect.
* Not Trial‑and‑Error: Unlike operant conditioning, insight does not emerge from incremental reinforcement.
* Transferable: Once insight occurs, the solution can often be applied to similar problems.

Köhler’s Research
* Conducted with chimpanzees on Tenerife.
* Chimps solved problems (e.g., stacking boxes, using sticks) by restructuring the situation mentally.
* Demonstrated that animals can engage in cognitive problem‑solving, not just conditioning.

EPPP Cue
* Insight = sudden cognitive restructuring → “aha!” moment (Köhler, chimps).

7
Q

Procedural And Declarative Memory

What are the major components of long‑term memory, and how do procedural, semantic, and episodic memory differ?

A
  • Long‑term memory consists of procedural and declarative components.
  • Procedural memory stores information about how to do things (“learning how”), such as motor skills and habits.
  • Declarative memory mediates the acquisition of facts and other information (“learning that or what”).
  • Declarative memory is subdivided into:
  • Semantic memory: general knowledge that is independent of context, responsible for storing facts, rules, and concepts.
  • Episodic memory: memory for personally experienced events, including the time, place, and context in which they occurred.

examples:
* Procedural: Knowing how to ride a bike or type on a keyboard.
* Semantic: Knowing that Paris is the capital of France or that 2+2=4.
* Episodic: Remembering your last birthday party or your first day of graduate school.

8
Q

Law Of Effect
(Thorndike)

A

Behaviors followed by satisfying consequences are more likely to be strengthened and occur again, while behaviors followed by discomfort are less likely to be repeated.
This principle laid the groundwork for operant conditioning.

Thorndike’s Puzzle Box Experiments
* Hungry cats were placed in puzzle boxes.
* To escape and access food, each cat had to perform a specific behavior (e.g., pulling a loop, pressing a lever).
* Over repeated trials, cats gradually learned the correct response.
* Learning occurred through trial and error, not insight.
Key takeaway:
Behaviors that successfully led to escape and food were stamped in; ineffective behaviors were stamped out.

Why It Matters
* Introduced the idea that consequences shape behavior.
* Preceded and influenced Skinner’s development of operant conditioning.
* Demonstrated that learning can be incremental, not sudden.

EPPP Cue
* Law of Effect = satisfying consequence → behavior increases (trial‑and‑error learning).

9
Q

Overcorrection

(Operant Technique)

A

Overcorrection is a punishment‑based behavior reduction procedure in which the individual must correct the effects of their misbehavior and/or practice appropriate alternative behaviors.
It reduces undesirable behavior by requiring effortful, corrective action.

Two Forms of Overcorrection
1. Restitutional Overcorrection
* The individual must restore the environment to its original state and often improve it beyond that.
* Example: A child who scribbles on a desk must clean all desks in the area, not just their own.
Purpose:
Increase the response cost so the misbehavior becomes less likely.

2. Positive Practice Overcorrection
* The individual must repeatedly practice the correct or alternative behavior.
* Example: A student who runs in the hallway must walk the hallway correctly several times.
Purpose:
Strengthen the appropriate behavior through repetition.

Additional Features
* May require close supervision to ensure correct performance
* May involve physical guidance to prompt the corrective behavior
* Works best when applied immediately, consistently, and calmly

EPPP Cue
* Overcorrection = restitution (fix it) + positive practice (do it right repeatedly).

10
Q

Multi-Component Model (Baddeley And Hitch)

A

Working memory is a limited‑capacity system responsible for the temporary storage and manipulation of information needed for complex cognitive tasks such as reasoning, comprehension, and learning.
It consists of a central executive and three subsystems.

  1. Central Executive (Attentional Control System)
    * The primary and most important component
    * Functions as an attentional controller
    * Responsible for:
    * Directing attention to relevant information
    * Suppressing irrelevant information
    * Switching between tasks
    * Coordinating the three subsystems
    * Associated with frontal lobe functioning
    EPPP cue:
    Central Executive = attention + coordination.
  2. Phonological Loop
    * Handles verbal and auditory information
    * Two parts:
    * Phonological store (“inner ear”)
    * Articulatory rehearsal system (“inner voice”)
    * Supports language acquisition and rehearsal of verbal material
    EPPP cue:
    Phonological = sound + words.
  3. Visuo‑Spatial Sketchpad
    * Stores and manipulates visual and spatial information
    * Used for mental imagery, navigation, and visual pattern retention
    EPPP cue:
    Sketchpad = visual + spatial.
  4. Episodic Buffer
    * Integrates information from the phonological loop, visuo‑spatial sketchpad, and long‑term memory
    * Creates coherent, multi‑modal episodes
    * Limited capacity
    * Added later to address integration and binding of information
    EPPP cue:
    Episodic Buffer = integrator + binder.
11
Q

EMDR (Eye Movement Desensitization And Reprocessing)

A

EMDR is a structured treatment originally developed for PTSD, combining bilateral stimulation (often rapid lateral eye movements) with exposure‑based processing of traumatic memories.

Key Components
* Bilateral stimulation: typically rapid eye movements, but may include tapping or tones
* Exposure: client focuses on traumatic images, thoughts, and sensations
* Cognitive elements: restructuring maladaptive beliefs
* Behavioral elements: desensitization through repeated exposure
* Psychodynamic elements: accessing and processing underlying affect and meaning

Mechanism of Action
* Although EMDR emphasizes eye movements, research suggests:
* The active ingredient may be exposure to the feared memory,
* Leading to extinction of the conditioned fear response
* Eye movements may contribute to dual attention or working memory taxation, but are not required for therapeutic benefit in many studies.

Clinical Applications
* PTSD (primary evidence base)
* Also used for:
* Anxiety disorders
* Phobias
* Depression
* Grief
* Other trauma‑related conditions

EPPP Cue
* EMDR = exposure + bilateral stimulation; effectiveness likely due to exposure/extinction rather than eye movements.

12
Q

Systematic Desensitization
(Wolpe)

A

Systematic desensitization is a behavioral treatment for anxiety and phobias based on counterconditioning (reciprocal inhibition)—pairing anxiety‑evoking stimuli with a physiological state incompatible with anxiety, typically relaxation.

Key Components
1. Anxiety Hierarchy
* Client and therapist create a graded list of anxiety‑provoking situations
* Arranged from least to most distressing
2. Relaxation Training
* Client learns a deep relaxation technique (e.g., progressive muscle relaxation)
* Relaxation serves as the inhibitory response to anxiety
3. Gradual Pairing
* Client imagines or encounters each item on the hierarchy
* While maintaining relaxation, preventing the anxiety response
* Progresses stepwise until the most anxiety‑evoking item can be faced calmly

Mechanism of Action: What Research Shows
Although Wolpe conceptualized the technique as counterconditioning, dismantling studies indicate:
* Extinction—repeated exposure to the feared stimulus without the feared outcome—
is the primary mechanism behind its effectiveness
* Relaxation may help with tolerability, but exposure drives the change

EPPP Cue
* Systematic desensitization = hierarchy + relaxation + exposure; effectiveness due to extinction, not relaxation.

13
Q

Stimulus Discrimination And Experimental Neurosis

A

Stimulus Discrimination Training
Core Idea
A procedure in classical conditioning used to reduce stimulus generalization by teaching the organism to produce the CR only to the original CS and not to similar stimuli.
How It Works
* Present the CS followed by the US
* Present similar stimuli without the US
* Over time, the organism learns to differentiate between the CS and non‑CS cues
Outcome
* CR occurs only to the CS
* Generalization decreases
* The organism becomes more precise in its responding
EPPP cue:
Discrimination training = respond only to the true CS.

Experimental Neurosis
Core Idea
When discrimination tasks become too difficult, ambiguous, or conflicting, the organism may show breakdown in learned behavior and exhibit neurotic‑like responses.
Typical Behaviors
* Restlessness
* Aggressiveness
* Avoidance
* Fear or agitation
* Disorganized responding
Why It Happens
* The organism cannot reliably distinguish between stimuli
* The conflict produces frustration, leading to emotional and behavioral disruption
* First demonstrated by Pavlov when dogs struggled to discriminate between highly similar shapes
EPPP cue:
Impossible discrimination → emotional/behavioral breakdown.

High‑Yield Link
* Stimulus discrimination training aims to sharpen responding
* Experimental neurosis emerges when the discrimination becomes too fine to maintain

⭐ Example: Stimulus Discrimination Training
A dog is conditioned to salivate to a tone at 500 Hz because it has been repeatedly paired with food (CS → US).
To teach discrimination, the experimenter:
* Continues pairing 500 Hz tone with food
* Presents a 450 Hz tone without food
Over time, the dog salivates only to the 500 Hz tone and not to the 450 Hz tone.
This is stimulus discrimination.

⭐ Example: Experimental Neurosis
The experimenter then makes the discrimination task progressively harder:
* CS+: 500 Hz tone → food
* CS−: 498 Hz tone → no food
As the tones become nearly identical, the dog can no longer reliably tell them apart.
Eventually, the dog begins to show:
* Restlessness
* Whining
* Avoidance
* Agitation or aggression
The dog’s behavior becomes disorganized and emotional because the discrimination is too difficult.
This is experimental neurosis.

14
Q

Biofeedback

A

Biofeedback uses electronic monitoring devices to give individuals immediate, continuous information about a physiological process (e.g., muscle tension, skin temperature, heart rate, blood pressure).
The goal is to help the individual gain voluntary control over that process.

How It Works
* Sensors measure a physiological activity
* Real‑time feedback (visual, auditory, tactile) is provided
* The individual practices strategies (often relaxation‑based) to modify the physiological response
* Over time, voluntary control increases
EPPP cue:
Biofeedback = feedback → control.

Effectiveness
For many conditions, biofeedback and relaxation training are about equally effective, including:
* Hypertension
* Tension headaches
* General stress‑related arousal
This is a classic exam point: biofeedback is not universally superior.

Treatment‑of‑Choice Applications
1. Raynaud’s Disease
* Thermal biofeedback is the treatment of choice
* Goal: increase peripheral temperature by improving blood flow
* Highly supported in behavioral medicine literature
2. Migraine Headaches
* Thermal biofeedback + autogenic training is effective
* Autogenic training = self‑generated relaxation through passive concentration on bodily sensations
* Combination improves vascular regulation and reduces migraine frequency

EPPP Cue
* Biofeedback = voluntary control of physiology.
* Thermal biofeedback → Raynaud’s.
* Thermal + autogenic → migraines.

Raynaud’s Disease
Core Idea
Raynaud’s Disease is a vasospastic disorder involving episodic constriction of small arteries, usually in the fingers and toes, triggered by cold or emotional stress.
Episodes cause pallor → cyanosis → redness, often with pain or numbness.

Key Features
* Vasoconstriction of peripheral blood vessels
* Cold sensitivity
* Color changes in extremities (white → blue → red)
* Numbness, tingling, or pain during attacks
* More common in women
* Can be primary (idiopathic) or secondary to autoimmune or connective‑tissue disorders

Behavioral Treatment
⭐ Thermal Biofeedback = Treatment of Choice
* Goal: increase peripheral temperature by improving blood flow
* Helps patients learn to warm their hands through voluntary control of vasodilation
* Strong evidence base in behavioral medicine

Why Thermal Biofeedback Works
* Provides real‑time feedback on finger temperature
* Allows patients to practice relaxation and vasodilation
* Reduces frequency and severity of attacks

Related High‑Yield Point
* For migraine headaches, the best approach is thermal biofeedback + autogenic training
* For hypertension and tension headaches, biofeedback is about as effective as relaxation

EPPP Cue
* Raynaud’s → cold‑induced vasospasm → thermal biofeedback is the treatment of choice.

15
Q

Interference Theory (Retroactive and Proactive Interference)

According to interference theory, what causes forgetting, and what is the difference between retroactive and proactive interference? Include examples.

A

Forgetting occurs because previously or subsequently learned information disrupts learning or recall.

  • Retroactive interference: new learning interferes with recall of old information.
  • Proactive interference: old learning interferes with learning or recall of new information.

Simple examples:
* Retroactive: You learn a new password at work and now can’t remember your old one.
* Proactive: You keep typing your old phone number when trying to memorize your new one.

16
Q

Behavioral Model (Lewinsohn)

A

Depression develops when a person experiences a low rate of response‑contingent reinforcement — meaning their behaviors are not followed by sufficient rewarding or positive outcomes.
This low reinforcement can come from the environment, the individual, or both.

Sources of Low Reinforcement
1. Inadequate Reinforcing Stimuli in the Environment
* Few opportunities for positive experiences
* Loss of important roles, relationships, or activities
* Environments that are punishing, deprived, or non‑responsive
2. Skill Deficits in Obtaining Reinforcement
* Social skill deficits
* Reduced activity level
* Avoidance behaviors
* Difficulty initiating or maintaining rewarding interactions
EPPP cue:
Low reinforcement = low activity = more low reinforcement → depression spiral.

Behavioral Consequences
* Decreased engagement in potentially reinforcing activities
* Increased withdrawal and inactivity
* Fewer opportunities for positive feedback
* Worsening mood and further reduction in behavior
This creates a self‑perpetuating cycle.

Treatment Implications
* Behavioral activation: increase contact with reinforcing activities
* Skills training: improve ability to obtain reinforcement (e.g., social skills, problem‑solving)
* Environmental restructuring: increase access to positive stimuli

EPPP Cue
* Lewinsohn = depression from low response‑contingent reinforcement.

17
Q

Stress Inoculation (Meichenbaum)

A

Stress Inoculation Training (SIT) is a cognitive‑behavioral intervention designed to help individuals cope with stress and aversive states by strengthening their coping skills.
It prepares clients to handle future stressors much like a medical inoculation prepares the body for a virus.

Three Overlapping Phases
1. Cognitive Preparation (Conceptualization)
* Client learns to understand the nature of stress
* Identifies stress triggers, negative appraisals, and maladaptive coping patterns
* Therapist reframes stress as predictable, manageable, and modifiable
EPPP cue:
Understand the stressor.

2. Skills Acquisition & Rehearsal
* Client learns and practices coping skills, such as:
* Relaxation
* Cognitive restructuring
* Self‑instruction
* Problem‑solving
* Emotion regulation
* Skills are rehearsed through imagery, role‑play, and behavioral practice
EPPP cue:
Learn and practice coping skills.
3. Application & Follow‑Through
* Skills are applied to increasingly stressful situations
* Often uses in vivo exposure, behavioral rehearsal, and real‑world practice
* Emphasizes maintenance, relapse prevention, and generalization
EPPP cue:
Apply skills under stress.

Why It Works
* Builds cognitive flexibility
* Strengthens coping repertoires
* Reduces avoidance
* Enhances self‑efficacy in managing stress

EPPP Cue
* SIT = understand → learn → apply.

18
Q

Classical Extinction And Spontaneous Recovery

A

Extinction occurs when a conditioned stimulus (CS) is repeatedly presented without the unconditioned stimulus (US), leading to a gradual weakening and eventual elimination of the conditioned response (CR).
Mechanism
* CS → no US
* CR decreases over repeated trials
* Learning is inhibited, not erased
EPPP cue:
Extinction = CS alone → CR fades.

Spontaneous Recovery

After extinction has occurred, the extinguished CR may reappear when the CS is presented again after a delay, even without new CS–US pairings.
Key Features
* Recovery is temporary
* CR is usually weaker than before extinction
* Indicates that extinction suppresses but does not erase original learning
EPPP cue:
After extinction, CR can return unexpectedly.

High‑Yield Link
* Extinction = new learning (“CS no longer predicts US”)
* Spontaneous recovery = old learning resurfaces

19
Q

Higher-Order Conditioning

A

Higher‑order conditioning occurs when an established conditioned stimulus (CS) takes on the role of a surrogate unconditioned stimulus (US) to condition a new neutral stimulus (NS).
The new stimulus becomes a CS even though it was never paired with the original US.

How It Works
1. First‑order conditioning:
* CS₁ (e.g., tone) + US (e.g., food) → CR (salivation)
2. Higher‑order conditioning:
* NS₂ (e.g., light) + CS₁ (tone) → CR
* NS₂ becomes CS₂, capable of eliciting the CR on its own
EPPP cue:
CS becomes the “US” for a new CS.

Key Features
* The CR to the higher‑order CS is usually weaker than the CR to the original CS
* Conditioning can extend to second‑order (CS₂) and sometimes third‑order (CS₃), but strength diminishes
* No direct pairing with the original US is required for CS₂

Example
A dog learns that a tone predicts food (CS₁ → CR).
Then a light is paired with the tone.
Eventually, the light alone elicits salivation.
This is higher‑order conditioning.

20
Q

Mnemonic Devices
(Method Of Loci, Keyword Method, Acronym, Acrostic)

A

Mnemonic devices are formal memory strategies that enhance encoding and retrieval by using imagery, organization, or verbal structure to make information more memorable.

Imagery‑Based Mnemonics
1. Method of Loci
* Items to be remembered are mentally placed in familiar, pre‑memorized locations (e.g., rooms in your home).
* Recall involves mentally “walking through” the locations and retrieving each item.
* Best for ordered lists and sequences.
EPPP cue:
Walk the path → retrieve the items.

2. Keyword Method
* Ideal for paired‑associate learning (e.g., foreign language vocabulary).
* Create a keyword that sounds like the unfamiliar word, then form a visual image linking the keyword to the target meaning.
* Strengthens associative links through imagery.
EPPP cue:
Keyword + image = paired association.

Verbal Mnemonics
3. Acronyms
* A single word formed from the first letters of items to be remembered.
* Example: “HOMES” for the Great Lakes.
EPPP cue:
Acronym = one word from initials.

4. Acrostics
* A phrase or sentence created from the first letters of each item.
* Example: “Every Good Boy Does Fine” for musical notes.
EPPP cue:
Acrostic = phrase built from initials.

EPPP Cue
* Imagery mnemonics = loci + keyword.
* Verbal mnemonics = acronyms + acrostics

21
Q

Prospective Memory

A

Prospective memory refers to the ability to remember to carry out intended actions in the future, such as remembering an appointment, taking medication, or calling someone back. It is often conceptualized as a component of long‑term memory, because it involves storing an intention and retrieving it at the appropriate moment.
Prospective memory has been described as “remembering to remember” and involves recalling an intention at a later time or when a specific cue appears.

🧠 Two Types of Prospective Memory
1. Event‑Based Prospective Memory
- Triggered by an external cue
- Example: Seeing your colleague reminds you to give them a message
- Generally easier because the environment prompts the action
2. Time‑Based Prospective Memory
- Triggered by internal monitoring of time
- Example: Remembering to take medication at 2 PM
- Typically harder because it relies on self‑initiated retrieval

🧩 Why It Matters
- Essential for daily functioning (appointments, deadlines, medications)
- Failures lead to missed obligations, highlighting its importance
- Involves a network of brain regions including the prefrontal cortex and hippocampus

📝 EPPP Cue
Prospective memory = long‑term memory for future intentions (“remembering to remember”).

22
Q

Matching Law

What are concurrent schedules of reinforcement, and what does the matching law predict about responding under these conditions?

A

Concurrent Schedules of Reinforcement
Concurrent schedules involve two or more reinforcement schedules operating at the same time, each linked to a different response option.
The organism is free to choose between them.
Examples:
* Pressing Lever A (VI‑30) vs. Lever B (VI‑60)
* Choosing between two behaviors, each with its own reinforcement rate
EPPP cue:
Multiple schedules, simultaneous choices.

The Matching Law (Herrnstein)
The matching law states that when multiple response options are available, an organism will allocate its behavior in proportion to the rate of reinforcement obtained from each option.

Formal Statement
B1 / (B1 + B2) = R1 / (R1 + R2)
Where:
* B1, B2 = rates of responding
* R1, R2 = rates of reinforcement

Interpretation
* If Option A provides twice as much reinforcement as Option B, the organism will respond twice as often to Option A.
* Behavior matches reinforcement.
EPPP cue:
Behavior proportion = reinforcement proportion.

Why It Matters
* Predicts choice behavior
* Applies to humans and animals
* Forms the basis for behavioral economics, choice models, and applied behavior analysis

High‑Yield Example
If Lever A delivers 60% of all reinforcement and Lever B delivers 40%, the organism will respond approximately 60% of the time on A and 40% on B.

example:
A pigeon can peck Button A or Button B.
* Button A delivers food on average twice as often as Button B.
* According to the matching law, the pigeon will peck Button A about twice as often as Button B because the response distribution matches the reinforcement distribution.
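The proportional allocation described above is simple enough to compute directly. As a minimal sketch (the helper function below is hypothetical, written only to illustrate the formula):

```python
# Matching law: B1 / (B1 + B2) = R1 / (R1 + R2)
# Behavior is allocated in proportion to obtained reinforcement.

def matching_proportions(r1, r2):
    """Predicted proportions of responding to two options,
    given the reinforcement rates r1 and r2 for each."""
    total = r1 + r2
    return r1 / total, r2 / total

# Button A delivers food twice as often as Button B:
b1, b2 = matching_proportions(2.0, 1.0)
print(f"A: {b1:.0%}, B: {b2:.0%}")  # A: 67%, B: 33%
```

The same function reproduces the lever example: reinforcement rates of 60 and 40 yield predicted responding of 60% and 40%.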

23
Q

Time-Out

What is time‑out, and how does it function as a form of negative punishment?

A
  • Time‑out is a form of negative punishment in which the individual is removed from all opportunities for reinforcement for a prespecified period following a misbehavior.
  • The goal is to decrease the occurrence of that behavior by withdrawing access to reinforcing people, activities, or environments.

example:
A child hits their sibling during play. The parent calmly removes the child from the play area and has them sit in a quiet, non‑reinforcing spot for two minutes. Because the child temporarily loses access to attention and play (reinforcers), the hitting behavior decreases over time.

24
Q

Rational Emotive Behavior Therapy (REBT)/Ellis

A

REBT proposes that emotions and behaviors are the result of a cognitive chain known as the A‑B‑C model:
* A – Activating Event:
The external situation or trigger.
* B – Belief:
The individual’s interpretation, evaluation, or belief about the event.
* C – Consequence:
The emotional or behavioral response that results from B, not from A directly.
EPPP cue:
It’s not A → C. It’s A → B → C.

Why This Matters
REBT emphasizes that distress is caused by beliefs, not by events themselves.
Changing maladaptive beliefs leads to healthier emotional and behavioral outcomes.

Ellis’s View of Neurosis
According to Ellis (1985), the primary cause of neurosis is the repetition of common irrational beliefs, such as:
* “I must be loved by everyone.”
* “I must be perfect.”
* “Life must be fair.”
* “I can’t stand it when things go wrong.”
These irrational beliefs are rigid, absolute, and illogical, and they become the targets of therapy.

Therapeutic Focus
REBT aims to:
* Identify irrational beliefs
* Dispute them vigorously (D = Disputation)
* Replace them with rational, flexible alternatives
* Promote healthier emotional and behavioral consequences (E = Effective new philosophy)

EPPP Cue
* REBT = change beliefs to change emotions.
* Irrational beliefs → neurosis.
* A‑B‑C (plus D‑E in therapy).

25
Q

Reciprocal Inhibition (Wolpe)

A

Reciprocal inhibition is a counterconditioning technique developed by Joseph Wolpe in which an anxiety‑evoking conditioned stimulus (CS) is paired with a response incompatible with anxiety—most commonly relaxation. The incompatible response inhibits the anxiety, weakening the conditioned fear reaction.

How It Works
* Identify the anxiety‑producing CS
* Train the client in a competing, incompatible response (e.g., deep muscle relaxation)
* Present the CS while the client engages in the incompatible response
* Over repeated pairings, the anxiety response is counterconditioned and diminishes

Why It Matters
Reciprocal inhibition is the theoretical foundation for systematic desensitization, one of the most influential behavioral treatments for phobias and anxiety.

Key Features
* Based on classical conditioning
* Uses incompatible responses (relaxation, assertiveness, or sexual arousal, depending on the target behavior)
* Reduces anxiety by replacing the conditioned fear response
* Works through counterconditioning rather than extinction (though research suggests extinction plays a major role)

EPPP Cue
* Wolpe = reciprocal inhibition → basis of systematic desensitization.
* Pair anxiety cue with relaxation → anxiety inhibited.
26
Q

Information Processing Model (Sensory Memory, STM, LTM)

A

The information‑processing (multi‑store) model proposes that memory consists of three separate but interacting stores:
* Sensory memory (sensory register)
* Short‑term memory (STM)
* Long‑term memory (LTM)

Sensory memory can store a large amount of information, but the information is retained for no more than a few seconds. Information in sensory memory is transferred to STM when it becomes the focus of attention.

STM holds a limited amount of information, and without rehearsal, information begins to fade within about 30 seconds.

Information is likely to be transferred from STM to LTM when it is encoded, especially when encoding involves elaborative rehearsal (relating new information to existing information). The capacity of LTM appears to be unlimited.

example:
You glance at a phone number on a billboard (sensory memory). You repeat it to yourself long enough to dial it (STM). If you connect it to something meaningful—like “this is the vet’s number”—you’re more likely to store it long‑term (LTM).
27
Q

Trace Decay Theory

According to trace decay theory, what causes forgetting?

A

* Trace decay theory states that forgetting occurs because memory traces (engrams) gradually deteriorate over time.
* The decay happens as a result of disuse: when information is not actively used or rehearsed, the physical or neurological trace weakens.
* The theory emphasizes time‑based fading, not interference or retrieval failure.

example:
You once memorized your high‑school locker combination, but years later you can’t recall it because the memory trace faded from lack of use.
28
Q

Operant Extinction and Extinction Bursts

What is operant extinction, and what typically happens during the extinction process?

A

* Operant extinction refers to the elimination of a previously reinforced response through the consistent withholding of reinforcement following that response.
* Extinction is usually accompanied by a temporary increase in the response, known as an extinction burst.
* The organism may respond more frequently, more intensely, or more variably before the behavior begins to decline.

example:
A child throws tantrums because the parent usually gives candy to stop the behavior. When the parent stops giving candy (withholding reinforcement), the child initially throws even bigger or more frequent tantrums (extinction burst) before the tantrums eventually decrease.
29
Q

Blocking

What is blocking in classical conditioning, and how does it occur?

A

* Blocking occurs when an association has already been established between a conditioned stimulus (CS) and an unconditioned stimulus (US).
* Because this association is already learned, the CS blocks the formation of an association between a second neutral stimulus and the US when the CS and the new neutral stimulus are presented together before the US.
* The organism does not learn about the second stimulus because the original CS already predicts the US, leaving no new information to acquire.

example:
A dog learns that a tone (CS) predicts food (US). Later, the tone is presented together with a light before food. Because the dog already knows the tone predicts food, it does not learn that the light also predicts food. The tone blocks conditioning to the light.
30
Q

Escape and Avoidance Conditioning

What is the difference between escape conditioning and avoidance conditioning, and how does each work?

A

* Escape conditioning is an application of negative reinforcement in which the target behavior is an escape behavior—the organism performs the behavior to terminate or escape an already‑present aversive stimulus (negative reinforcer).
* Example: A dog jumps over a barrier after receiving a mild shock; the jump ends the shock. The behavior is reinforced because it removes the aversive stimulus.
* Avoidance conditioning combines classical conditioning with negative reinforcement. A cue (positive discriminative stimulus) signals that the aversive stimulus is about to occur, allowing the organism to perform the target behavior before the aversive stimulus is delivered, thereby avoiding it.
* Example: A tone sounds before a shock is delivered. The dog learns to jump the barrier when the tone occurs, preventing the shock altogether. The behavior is reinforced because it avoids the aversive event.
31
Q

Response Cost

What is response cost, and how does it function as a form of negative punishment?

A

* Response cost is a form of negative punishment in which a reinforcer is removed following a behavior in order to reduce or eliminate that behavior.
* The removed reinforcer is often something tangible or quantifiable, such as tokens, points, privileges, or access to a desired activity.

example:
A student loses 2 tokens from their token‑economy balance each time they call out in class. Because the tokens can be exchanged for prizes, losing them decreases the likelihood of calling out.
32
Q

Learned Helplessness Model/Reformulated Version

What are the original, reformulated, and revised versions of the learned helplessness model, and how do they explain depression?

A

* The learned helplessness model was originally based on the finding that animals exposed to an uncontrollable negative event (e.g., inescapable electric shock) later failed to escape even when escape became possible.
* The reformulated version added the role of attributions, proposing that some forms of depression arise when individuals explain negative events using internal, stable, and global causes.
* A subsequent revision (the hopelessness theory) acknowledged that attributions matter but argued they are important only insofar as they contribute to a sense of hopelessness, which is the immediate precursor to depression.

Example: A student repeatedly fails math tests despite studying.
* Original model: After many uncontrollable failures, the student stops trying because they believe nothing they do will help.
* Reformulated model: The student thinks, “I’m bad at math (internal), I’ll always be bad at math (stable), and this affects everything I do (global).”
* Revised model: These beliefs lead the student to feel hopeless about future success, which increases risk for depression.
33
Q

Schedules of Reinforcement (Continuous and Intermittent)

What are continuous and intermittent schedules of reinforcement, and how do fixed interval, variable interval, fixed ratio, and variable ratio schedules differ?

A

* Continuous reinforcement delivers reinforcement after every target response.
* Produces rapid acquisition, but behavior is highly susceptible to satiation and extinction.
* Intermittent schedules of reinforcement include:
* Fixed interval (FI): Reinforcement is provided at predetermined time intervals, as long as the subject makes at least one response.
* Variable interval (VI): Reinforcement is provided at varying time intervals, with a predetermined average interval.
* Fixed ratio (FR): Reinforcement is delivered after a predetermined number of responses.
* Variable ratio (VR): Reinforcement is delivered after a varying number of responses, with the average number predetermined.
* Variable ratio schedules produce high, stable response rates and the greatest resistance to extinction.

Examples:
* Continuous reinforcement: A vending machine gives a snack every time you insert money.
* Fixed interval: A paycheck arrives every 2 weeks, regardless of how hard you worked during that interval.
* Variable interval: Checking your email—messages arrive unpredictably, but on average every few hours.
* Fixed ratio: A factory worker is paid for every 10 items assembled.
* Variable ratio: Slot machines—payouts occur after an unpredictable number of plays, but with a set average.
34
Q

Yerkes-Dodson Law

What does the Yerkes–Dodson law predict about the relationship between arousal and learning/performance?

A

* The Yerkes–Dodson law predicts that moderate levels of arousal are associated with optimal learning and performance.
* The relationship between arousal and performance takes the form of an inverted‑U curve:
* Low arousal → performance is weak due to low motivation or alertness.
* Moderate arousal → performance is optimal.
* High arousal → performance declines due to stress, anxiety, or overstimulation.
* The optimal level of arousal also depends on task difficulty: it is lower for difficult or complex tasks and higher for simple or well‑learned tasks.

example:
A student taking an exam performs best when they feel alert and focused, but not overly anxious. Too little arousal leads to sluggishness; too much leads to panic and mistakes.
35
Q

Self-Instructional Training

What is self‑instructional training, and what was it originally developed to address?

A

* Self‑instructional training is a cognitive‑behavioral technique in which individuals learn to modify maladaptive thoughts and behaviors through the use of covert self‑statements.
* It was originally developed to help impulsive and hyperactive children slow down their behavior and verbally guide themselves through academic and other tasks.
* The technique teaches children to use internal dialogue to plan, monitor, and evaluate their actions.

example:
A child who rushes through math problems learns to silently tell themselves, “Slow down… read the question… think before writing… check your answer.” These self‑statements help regulate impulsive responding and improve task performance.
36
Q

Serial Position Effect

What is the serial position effect, and how do the primacy and recency effects explain patterns of recall?

A

* Research on the serial position effect shows that when people are asked to recall a list of unrelated items immediately after reading it, items at the beginning and end of the list are recalled much better than items in the middle.
* The primacy effect occurs because items at the beginning of the list have been rehearsed and transferred into long‑term memory.
* The recency effect occurs because items at the end of the list are still in short‑term memory at the time of recall.

example:
If you read a grocery list of 12 items, you’re most likely to remember the first few (e.g., milk, eggs) and the last few (e.g., apples, cereal), but you may forget the items in the middle (e.g., pasta, beans).
37
Q

Punishment/Habituation

What is punishment in operant conditioning, what are its limitations, and how does habituation affect its effectiveness?

A

* Punishment occurs when the application or withdrawal of a stimulus following a behavior decreases the occurrence of that behavior.
* A major disadvantage is that punishment often suppresses a behavior rather than eliminating it.
* Punishment is usually most effective when it is applied at a moderately intense level from the outset.
* If punishment is first administered in a weak form and then gradually intensified, the organism is more likely to develop habituation, meaning the punishment loses its effectiveness over time.

example:
A teenager receives mild scolding for coming home late. Over time, the scolding intensifies, but because it started weak and increased gradually, the teen becomes used to it (habituation), and the punishment no longer reduces the late‑coming behavior.
38
Q

In Vivo Aversion Therapy/Covert Sensitization

What is in vivo aversion therapy, how does it use counterconditioning, and how does it differ from covert sensitization?

A

* In vivo aversion therapy uses counterconditioning to reduce the attractiveness of a stimulus or behavior by pairing it in real life with a stimulus that produces an undesirable or unpleasant response.
* A classic example is pairing alcohol consumption (CS) with an electric shock (US) so that fear or discomfort (UR/CR) becomes associated with alcohol.
* Covert sensitization is similar, but the CS and US are presented in imagination rather than in real‑life situations.

examples:
* In vivo: A person takes a sip of alcohol and immediately receives a mild electric shock, creating a real‑world pairing between drinking and discomfort.
* Covert sensitization: The person imagines taking a sip of alcohol and then vividly imagines becoming nauseated or ill, creating the aversive pairing mentally.
39
Q

Operant Conditioning/Skinner (Reinforcement and Punishment)

According to Skinner, how do consequences shape complex behavior, and what is the difference between positive/negative reinforcement and punishment?

A

* According to Skinner, most complex behaviors are voluntarily emitted or not emitted based on how they operate on the environment—that is, based on the consequences that follow them.
* Skinner distinguished between two types of consequences:
* Reinforcement: increases the likelihood a behavior will recur.
* Punishment: decreases the likelihood a behavior will recur.
* He also distinguished between positive and negative forms of reinforcement and punishment:
* Positive: a stimulus is applied following a behavior.
* Negative: a stimulus is withdrawn or terminated following a behavior.

examples:
* Positive reinforcement: A child cleans their room and receives praise, increasing room‑cleaning behavior.
* Negative reinforcement: A driver fastens their seatbelt to stop the car’s beeping sound; removing the aversive noise increases seatbelt use.
* Positive punishment: A child touches a hot stove and feels pain, decreasing stove‑touching behavior.
* Negative punishment: A teen breaks curfew and loses phone privileges, decreasing curfew violations.
40
Q

State Dependent Learning

What is state‑dependent learning, and how does emotional state influence recall?

A

* Research on state‑dependent learning shows that recall of information is better when the learner is in the same emotional state during learning and recall.
* Matching internal states (e.g., mood, arousal) facilitates retrieval because the emotional state becomes part of the contextual cues encoded with the material.

example:
If a person studies while feeling calm and relaxed, they are more likely to remember the material later if they are also calm and relaxed during the test.
41
Q

Cognitive Therapy (Beck)

According to Beck, what cognitive factors contribute to depression and other psychopathology, and why is CT described as “collaborative empiricism”?

A

Beck’s cognitive therapy (CT) attributes depression and other psychopathology to several cognitive phenomena:
* Dysfunctional cognitive schemas: deep, underlying cognitive structures that shape how individuals interpret experiences.
* Automatic thoughts: rapid, surface‑level cognitions that arise in response to situations.
* Cognitive distortions: systematic errors in information processing (e.g., catastrophizing, overgeneralization).
* CT is described as “collaborative empiricism” because therapist and client work together to examine evidence for and against the client’s beliefs.
* Cognitive therapists frequently use Socratic dialogue—guided questioning—to help clients reach more logical, balanced conclusions about their problems and their consequences.

example:
A client thinks, “I made one mistake at work; I’m a total failure.” Through Socratic questioning, the therapist asks, “What evidence supports that? What evidence contradicts it?” Together, they discover the belief is distorted, helping the client replace it with a more accurate thought such as, “I made a mistake, but I generally perform well.”
42
Q

Prompts/Fading

What are prompts in behavior modification, and what is fading?

A

* Prompts are verbal or physical cues that help facilitate the acquisition of a new behavior.
* The gradual removal of a prompt is called fading, which ensures the behavior eventually occurs independently of the prompt.
* The term fading is also used to describe a procedure for eliminating an inappropriate stimulus–response connection by gradually replacing the inappropriate stimulus with appropriate stimuli so the response becomes associated with the new, correct stimulus.

example:
A teacher physically guides a child’s hand to form letters (physical prompt). Over time, the teacher reduces the guidance to a light touch, then to a verbal cue, and eventually removes all prompts. The child learns to write independently.
43
Q

Observational Learning (Guided Participation, Self-Efficacy)

What does Bandura’s observational learning theory propose, what makes participant modeling especially effective, and how do self‑efficacy beliefs influence motivation?

A

* Bandura’s observational learning theory proposes that behaviors can be acquired simply by observing a model perform them.
* Observational learning is cognitively mediated and involves four processes:
* Attention – noticing the model’s behavior
* Retention – remembering what was observed
* Production – being able to reproduce the behavior
* Motivation – having a reason to perform the behavior
* Research shows that participant modeling, which combines modeling with guided participation, is the most effective observational learning technique, especially for phobic reactions.
* Bandura also emphasized self‑efficacy beliefs—beliefs about one’s ability to perform a behavior or achieve a goal—as a primary source of motivation.

example:
A person with a dog phobia watches a therapist calmly pet a dog (modeling). Then, with the therapist’s support, they gradually approach and pet the dog themselves (participant modeling). As they succeed, their self‑efficacy increases, making further approach behaviors more likely.
44
Q

Shaping vs. Chaining

How do shaping and chaining differ in the acquisition of complex voluntary behaviors?

A

Shaping (successive approximation training) teaches a new behavior by prompting and reinforcing behaviors that increasingly resemble the target behavior.
* Only the final behavior matters; intermediate steps are reinforced only as approximations.

Chaining establishes a sequence of responses (a “behavior chain”), each serving as a cue for the next.
* The entire sequence is important, not just the final behavior.
* Can be taught using forward, backward, or total‑task chaining.

examples:
* Shaping: Teaching a child to speak more clearly by reinforcing closer and closer approximations of a target word.
* Chaining: Teaching a child to tie their shoes by breaking the task into steps (cross laces, make a loop, wrap around, pull through) and reinforcing the sequence.
45
Q

Classical Conditioning

How does classical conditioning work, and what did Pavlov’s original studies demonstrate?

A

* In classical conditioning, a neutral stimulus is repeatedly paired with an unconditioned stimulus (US) so that the neutral stimulus—now a conditioned stimulus (CS)—eventually elicits the response naturally produced by the US.
* In Pavlov’s original studies:
* Meat powder was the unconditioned stimulus (US).
* Salivation to the meat powder was the unconditioned response (UR).
* A tone served as the conditioned stimulus (CS).
* After repeated pairings of tone + meat powder, the tone alone elicited salivation, now a conditioned response (CR).

example:
A dog hears a bell (neutral stimulus) right before receiving food (US). After enough pairings, the bell alone makes the dog salivate (CR), even without food.
46
Q

In Vivo Exposure with Response Prevention

What is in vivo exposure with response prevention, and how does it differ from flooding?

A

* In vivo exposure with response prevention (ERP) is a classical extinction technique in which the individual is exposed in real life to anxiety‑arousing stimuli (the CS) without the original US, while being prevented from making their usual avoidance or escape response.
* By blocking avoidance, the conditioned fear response gradually extinguishes.
* Flooding is a form of exposure that involves presenting the individual with the most anxiety‑arousing stimuli for an extended, uninterrupted period, allowing anxiety to peak and naturally decline.

examples:
* ERP: A person with contamination fears touches a doorknob (CS) and is prevented from washing their hands (avoidance response). Over time, anxiety decreases because no illness (US) occurs.
* Flooding: The same person is asked to touch multiple “contaminated” surfaces and sit with the anxiety for a prolonged period without escape.
47
Q

Latent Learning (Tolman)

What does Tolman’s model of latent learning propose, and what did his research demonstrate?

A

* Tolman’s model of latent learning proposes that learning can occur without reinforcement and may not be immediately reflected in performance.
* His research showed that rats formed “cognitive maps” of mazes even when they were not reinforced for exploring them.
* When reinforcement was later introduced, these rats suddenly performed as if they already knew the maze layout—revealing that learning had occurred earlier but was not expressed.

example:
A child rides in the car to school every day without paying much attention. One day, when they need to walk, they can navigate the route easily—even though they were never “taught” or rewarded for learning it. They had formed a cognitive map through passive exposure.
48
Q

Stimulus Generalization

What is stimulus generalization, and how does it differ in classical and operant conditioning?

A

* In both classical and operant conditioning, stimulus generalization refers to responding to similar stimuli with the same response.
* Classical conditioning: The organism responds to stimuli similar to the conditioned stimulus (CS) with the conditioned response (CR).
* Operant conditioning: The organism emits the target behavior in the presence of stimuli similar to the discriminative stimulus (SD).

examples:
* Classical: A dog conditioned to salivate to a 500‑Hz tone also salivates to a 520‑Hz tone.
* Operant: A child reinforced for saying “please” to their teacher also says “please” to other adults with similar authority cues.
49
Q

Positive Reinforcement (Thinning, Satiation)

How does positive reinforcement work, what schedules best support acquisition vs. maintenance, and what are thinning and satiation?

A

* Positive reinforcement occurs when the application of a stimulus following a behavior increases the likelihood of that behavior recurring.
* Continuous reinforcement produces the fastest acquisition of a new behavior.
* Intermittent reinforcement produces the greatest resistance to extinction, making it ideal for maintenance.
* The optimal procedure is to start with continuous reinforcement and then shift to an intermittent schedule once the behavior is well‑established.
* Thinning refers to gradually reducing the proportion or frequency of reinforcement.
* Up to a point, more reinforcement increases effectiveness, but beyond that, satiation may occur—meaning the reinforcer loses its reinforcing value.

example:
A teacher gives a sticker every time a student completes a math problem (continuous reinforcement). Once the behavior is reliable, the teacher gives stickers only sometimes (intermittent schedule). Over time, the teacher reduces how often stickers are given (thinning). If the student receives too many stickers too quickly, they may stop caring about them (satiation).
50
Q

Differential Reinforcement

What is differential reinforcement, and how do procedures like DRA, DRO, and DRI work?

A

* Differential reinforcement is an operant technique that combines positive reinforcement and extinction.
* During a specified period, the individual is reinforced for engaging in behaviors other than the target (undesired) behavior, while the target behavior is not reinforced (extinction).
* Common forms include:
* DRA (Differential Reinforcement of Alternative Behavior): Reinforce a specific, acceptable alternative behavior.
* DRI (Differential Reinforcement of Incompatible Behavior): Reinforce a behavior that cannot occur at the same time as the target behavior.
* DRO (Differential Reinforcement of Other Behavior): Reinforce the absence of the target behavior during a set interval.

examples:
* DRA: Reinforcing a child for asking for attention appropriately instead of whining.
* DRI: Reinforcing a child for keeping their hands in their pockets to reduce hitting (the two behaviors cannot co‑occur).
* DRO: Reinforcing a student for every 5‑minute interval in which they do not call out in class.
51
Q

Premack Principle

What is the Premack Principle, and how does it function as a form of positive reinforcement?

A

* The Premack Principle is an application of positive reinforcement in which a high‑frequency behavior (something the individual naturally prefers or does often) is used as a reinforcer for a low‑frequency behavior (a less preferred behavior).
* The opportunity to engage in the preferred activity increases the likelihood that the less preferred behavior will occur.

example:
A child who loves playing video games (high‑frequency behavior) is allowed to play for 20 minutes only after finishing their homework (low‑frequency behavior). The preferred activity reinforces the less preferred one.