TEST 3 Flashcards

(79 cards)

1
Q

Extinction is __________; acquisition isn't

A

context specific

2
Q

basic explanation of extinction

A

when we present the CS alone (CS-, no US) after acquisition training, we see a decline in responding
- responding first increases during acquisition
- responding starts high and then declines in extinction

3
Q

does the decline of responding in extinction signal that the original acquisition association is unlearned, or is new learning masking retrieval of that memory/association?

A

it is not unlearning; it is new learning that masks retrieval of the original association

4
Q

can extinction lead to different emotional responses?

A
  • yes: in fear conditioning, extinction leads to relief, because the aversive US no longer occurs
  • and in appetitive (excitatory) conditioning, extinction leads to anger/frustration at not getting the rewarding US

5
Q

Extinction procedure reduces ———— and increases ————

A

conditioned responding, to near baseline; behavioural variability

6
Q

applied significance of extinction

A

in clinical settings: phobias, addiction, PTSD, depression
each can have stimuli that serve as triggers for maladaptive behaviours
clinical uses of extinction attempt to understand how triggering stimuli can lose their triggering effects
the target of therapy is identifying the environmental events that lead to the maladaptive, unwanted behaviour

7
Q

3 signature phenomena of extinction

A

spontaneous recovery
renewal
reinstatement

8
Q

extinction: spontaneous recovery

A

acquisition, then extinction; then a passage of time; after the delay, the conditioned response returns
- shows extinction doesn't completely eradicate the original learning
animals retain both the acquisition memory and the extinction memory

9
Q

extinction; spontaneous recovery (study)

A

by Rescorla
rats trained with CS-US pairings until magazine approach appears
- S1 group presented with the CS- (no US) immediately after acquisition
- S2 group presented with the CS- after 7 days
- the difference between S1 and S2 in responding reflects spontaneous recovery; the group tested after the passage of time showed recovered responding, while the group tested immediately did not

10
Q

extinction renewal effect

A

a return of the conditioned response after a change in context
the basic design is called ABA renewal; there is also ABC renewal

11
Q

extinction; renewal (ABA)

A

context A: acquisition of the CS-US association; pairing context
context B: extinction (CS-)
test: present the CS- in both context A and context B and see whether the CR returns in either
result: animals respond more in context A than in B, because A serves as an occasion setter for acquisition

12
Q

Bouton ABA renewal study

A

context A: acquisition; rats learn to press a lever for food
context B: extinction
both an instrumental association between lever press and food and a Pavlovian association between context A and food are formed
the CR returns if the animal is tested in the acquisition context

13
Q

renewal ABC design and study

A

context A: CS-US pairing/acquisition
context B: CS- extinction
context C: CS- test in a novel context
Westbrook & Harris study: "same" group = ABB, "diff" group = ABC
the difference between the same and diff groups represents the ABC renewal effect; animals show more CR in C than in B

14
Q

reinstatement effect

A

Iordanova study; 4 groups (PR, PN, UR, UN)
- during the acquisition phase, PR and PN get the CS paired with the US (CS+), while UR and UN get the CS and US unpaired
- extinction: all animals are put in context A and get the CS-; this is extinction for the paired animals
- reinstatement: all are put in context A; PR and UR (reinstatement groups) get the US alone, PN and UN get nothing
- at test, the CS- is presented; in PR animals, exposure to the US alone leads to a return of the CR to the CS
unsignalled exposure to the US during reinstatement (US alone) leads to a return of the CR to the CS

15
Q

factors that affect extinction

A
  • number and spacing of extinction trials (more trials, more widely spaced = better extinction)
  • repetition of extinction/test cycles
  • conducting extinction in multiple contexts
  • reminder cues: bringing back the extinction memory
  • compounding extinction stimuli

16
Q

factors affecting extinction; compounding extinction stimuli

A

study by Rescorla
during acquisition, 3 different stimuli L, X, and Y (all reinforced separately)
phase 2: element extinction; each stimulus presented alone, nonreinforced (CS-)
phase 3: compound extinction; Y presented on its own (CS-), L and X presented as a compound (CS-)
test phase: test X and Y and compare their CRs; responding to X is lower than to Y; a stimulus that undergoes extinction in compound gets deeper extinction

17
Q

extinction; paradoxical effects

A

magnitude of reward affects extinction
partial reinforcement extinction effect

18
Q

why does the partial reinforcement extinction effect (PREE) occur (2 explanations)

A
  • Frustration theory (Amsel): the animal learns to keep responding despite the frustration of partial reinforcement during acquisition; responding is therefore maintained in extinction, preventing extinction from occurring
  • Sequential theory (Capaldi): if the prior trial was nonreinforced, its memory serves as a cue for reinforcement on the next trial

19
Q

S-S and S-R associations

A

S-S: between 2 stimuli; first-order conditioning
S-R: between a stimulus and a response

20
Q

2nd order cond

A

present light-shock pairings (the light becomes a CS+)
then pair the tone with the light
both S-S and S-R associations could form
then extinguish the light and test the tone:
if the association is S-S, extinguishing the light will lead to little fear to the tone
if the tone became directly associated with the CR (S-R), then extinguishing the light won't change anything
so if responding to the tone stays the same, it's S-R, not S-S

21
Q

pavlov 1st order cond; US devaluation

A

in the experimental group, after tone-food pairings, rats get the food and then induced sickness (US devaluation)
the control group responds more to the tone than the experimental group, because first-order conditioning is an S-S association, not S-R

22
Q

pavlov 2nd order cond; US devaluation

A

first the light is paired with food, then the tone is paired with the light; then the experimental group is given food plus induced sickness, and the control group is given just the sickness
tested with just the tone: responding to the tone survives devaluation in 2nd-order conditioning, because the association is S-R

23
Q

multiple associations; Corbit and Balleine study (Pavlovian-to-instrumental transfer)

A

S1, S2, and S3 each paired with a different outcome: S1 with outcome 1, etc.
then a transfer test: 2 levers available, and S1, S2, or S3 comes on; animals are more likely to press the lever associated with the outcome paired with that stimulus (i.e., press the lever earning O2 when S2 is on)
more responding to the same/congruent option (i.e., more responding on R2 when S2 is presented)

24
Q

testing Pavlovian-to-instrumental transfer; general/affective vs sensory/specific properties

A

then they tested this with lesions to the BLA and CN
- when the BLA is disrupted, general transfer remains fine, but sensory/specific transfer is impaired; so the BLA is integral for sensory/specific transfer
- when the CN is disrupted, sensory/specific transfer is fine, but general transfer is impaired

25
Q

binary vs hierarchical associations

A

binary: between 2 events or stimuli
hierarchical: one stimulus makes you retrieve another CS-US association

26
Q

Holland study on hierarchical associations

A

simultaneous group: L and T presented together and reinforced (LT+), then L-; when T and L are paired, a head-jerk response appears; this is a binary association
serial group: T → L+ (tone followed by light, reinforced); tone followed by light produces head rearing; this is a hierarchical association, where T is an occasion setter for retrieving the L+ association

27
Q

to distinguish between occasion setting (hierarchical) and binary associations, 2 kinds of experiments

A

feature-positive occasion setting and feature-negative occasion setting

28
Q

feature-positive occasion setting

A

to prove that the feature T is an occasion setter, we have to prove it is not just forming a binary association with the US
- trials of T → L → food, then L alone → no food
- test whether this is binary or hierarchical using an extinction test: if T were a binary excitor, then the combined CR to T and L would decline after extinguishing T; but in occasion setting, extinguishing T doesn't reduce its effectiveness in facilitating the response to the target stimulus, because T signals L-food, not T-food

29
Q

feature-negative occasion setting

A

the feature T sets the stage for the target L to be paired with the absence of the US
- L alone is paired with the US
- T is presented before the target, and no US follows (T → L, no US)
- this has to be distinguished from a binary conditioned-inhibitor association
- test by changing T from an inhibitor to an excitor, then present T → L: if T is an occasion setter, nothing changes (responding is still suppressed); but if the association is binary, responding is the same or greater

30
Q

what kind of contingency operates in instrumental conditioning

A

R-R: a response-reinforcer contingency, between the action and the reinforcement we get

31
Q

instrumental behaviour

A

behaviour that is effective in producing a particular consequence or reinforcer; e.g., unless you press the lever, no food is coming

32
Q

2 basic paradigms of instrumental conditioning

A

discrete-trial method
free-operant method

33
Q

instrumental; discrete-trial method

A

primarily by Thorndike; includes the puzzle box, runway maze, T-maze, 8-arm radial maze, and plus maze
hard work for the experimenter, and less repetition of the animal initiating response trials

34
Q

Thorndike's law of effect

A

an association between a stimulus and a response is strengthened if the response is followed by a satisfying event
the outcome is not part of the association: the association forms between the stimulus S present at the time of the response R; the reinforcer delivered after R strengthens, or "stamps in", this S-R association

35
Q

instrumental; free-operant method

A

by Skinner
put the animal into a Skinner box; automatic data collection and trial restarting; repetition of the response without constraint

36
Q

how do we shape instrumental responses

A

behaviour is shaped, or successively approximated, by reinforcing a progressive series of response requirements

37
Q

free-operant; magazine training and shaping

A

e.g., training a rat to approach the lever, then to sniff it, then to touch it, then to press it, to eventually get the reward

38
Q

free operant; positive reinforcement

A

positive contingency between behaviour and outcome: you press something and you get a reward
leads to an increase in responding; more elicited behaviour/instrumental action

39
Q

free-operant; punishment

A

the instrumental response produces an aversive stimulus
it is only a punisher if it influences the instrumental action
positive contingency between behaviour and an aversive outcome, so you are going to avoid it and reduce the behaviour

40
Q

free operant; negative reinforcement

A

the instrumental response prevents an aversive outcome
negative contingency between behaviour and outcome; the more you perform the action, the more you avoid the bad outcome

41
Q

negative reinforcement vs punishment

A

punishment produces an aversive outcome, leading to a decrease in behaviour
negative reinforcement leads to omission of an aversive outcome, and an increase in behaviour

42
Q

negative punishment/omission training

A

the instrumental response produces omission of an appetitive/good stimulus
negative contingency between behaviour and outcome
decrease in behaviour
if you do a particular action, we're going to take away an appetitive (good) outcome

43
Q

instrumental response factors

A

nature of the response
nature of the reinforcer
nature of the response-reinforcer relation/contiguity
response-reinforcer contingency

44
Q

instrumental; nature of the response

A

belongingness, similar to Garcia's findings (e.g., a taste CS is more easily associated with a sickness US)

45
Q

instrumental response: Breland & Breland raccoon study

A

Breland and Breland
raccoons were trained to deposit coins in a piggy bank, but had a hard time; they held the coin and treated it like food
when the coin is associated with something appetitive, being asked to let go of it is hard
instinctive drift: reflects the intrusion of Pavlovian CRs that can compete with learning the instrumental task

46
Q

instrumental nature of the reinforcer; reward magnitude

A

behavioural systems theory is still important
the larger the reward, the better its reinforcing effect
with different response requirements, animals will work harder to earn a bigger reward

47
Q

instrumental nature of the reinforcer; perceived level of reward

A

a comparison between what you expect and what you actually get
behavioural contrast effects: a small reward is seen as less valuable when the animal has prior experience with a large reward
if you expected something bigger than what you got, it is disappointing and does not support instrumental action

48
Q

instrumental response-reinforcer relationship; contiguity

A

temporal contiguity (Dickinson): rats press a lever for food; they press more when the food is given sooner after the press/response
shorter contiguity means quicker/better instrumental responding

49
Q

instrumental response-reinforcer relationship; reward surprise (marking)

A

Williams: marking
performing the action leads to a light that marks that action; the action also leads to a sensory stimulus and a subsequent reward
marking is like "I performed an action and this happened"; it tags the action
this strengthens the trace of the instrumental response that then becomes reinforced

50
Q

instrumental response-reinforcer contingency

A

is contingency or temporal contiguity most important?
group 1: press a lever for food pellets
group 2: pull a chain for sucrose
group 3: food pellets also freely delivered in the ITI
- if temporal contiguity were the only thing that mattered, both responses should be equal
- but chain pulls occur more than lever presses for pellets, because the lever-press contingency is weaker: pellets are also provided outside of it

51
Q

instrumental schedules of reinforcement

A

ratio schedules (depend on how many responses are made)
- fixed
- variable
interval schedules (depend on how long you must wait before a response can earn the reinforcer)
- fixed
- variable

52
Q

how are the effects of reinforcement on behaviour measured

A

a cumulative record tracks how the animal is being reinforced; the slope is the rate of responding

53
Q

what does a ratio schedule of reinforcement look like on a chart

A

a run of increasing responses on the cumulative record, then reinforcement delivered (the red point), then a pause, as the animal paces itself knowing how many responses it has to make

54
Q

instrumental ratio reinforcement; continuous reinforcement

A

the reinforcer is delivered after every response, i.e., FR1 (fixed ratio of 1)

55
Q

instrumental ratio reinforcement; fixed-ratio schedule

A

have to respond a set number of times to get the reinforcer
always fixed, predetermined
this is partial reinforcement (not reinforced after every response) but fixed, because after every 10 responses, say, you get reinforced
e.g., deliver 10 newspapers and get paid

56
Q

instrumental ratio reinforcement; variable-ratio schedule

A

the reinforcer is delivered after a variable number of responses; the response number isn't fixed
on a fixed schedule, reinforcement comes exactly after 10; on a variable one, it is sometimes after 8, sometimes 9, sometimes 10
on average it is after 10 responses; this leads to a more steady level of responding

57
Q

VR (variable ratio) vs FR (fixed ratio)

A

the overall rate of responding is similar, but with a different distribution pattern
pause-run pattern for FR, but a steady pattern for VR, because the animal cannot predict as reliably when it will be reinforced

58
Q

instrumental interval reinforcement; fixed-interval schedule

A

for a response to be reinforced, a specific amount of time must have passed since the previous reinforcement
they have to respond after the time has elapsed (e.g., on FI 5 min, responses before the 5-min point earn nothing; only a response after it is reinforced)
because it is about timing, they know there is no point in responding right after a reinforcer; they ramp up responding once they feel they are approaching the time of reinforcement

59
Q

instrumental interval reinforcement; fixed-interval scallop pattern

A

as the end of the interval approaches, the subject increases its rate of responding, hoping to be reinforced
the subject responds more as it feels the time approaching, because it cannot guess the exact timing

60
Q

instrumental interval reinforcement; variable-interval schedule

A

a response is reinforced after a variable interval of time has passed since the prior reinforcement
variable schedules have an average; on each trial the interval varies
when reinforcement will be delivered is less predictable, leading to more steady responding

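The schedule rules in the cards above can be sketched as small decision procedures. This is a minimal illustration with my own class names and parameters, not anything from the course; a VariableInterval class would look like FixedInterval with a freshly sampled wait each time.

```python
import random

class FixedRatio:
    """FR-n: every n-th response is reinforced."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True  # reinforcer delivered
        return False

class VariableRatio:
    """VR-n: reinforced after a variable number of responses averaging n."""
    def __init__(self, n, rng=None):
        self.n = n
        self.rng = rng or random.Random(0)
        self.required = self.rng.randint(1, 2 * n - 1)
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """FI-t: first response at least t seconds after the last reinforcer wins."""
    def __init__(self, t):
        self.t = t
        self.last = 0.0

    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True
        return False

fr5 = FixedRatio(5)
print([fr5.respond() for _ in range(10)])
# [False, False, False, False, True, False, False, False, False, True]
```

Note how FR produces perfectly predictable reinforcement (hence the pause-run pattern), while VR's resampled requirement makes each next reinforcer unpredictable (hence steady responding).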
61
Q

Reynolds study on ratio vs interval schedules

A

VR and VI schedules of key pecking for the reinforcer
every time the VR bird earns a reinforcer, the yoked VI bird also gets one (so long as the VI bird completes 1 response in that time)
holding the amount of reinforcement the same, he compared responding in both groups; this proves that the amount of reward isn't the only factor affecting response rate

62
Q

simple schedules of reinforcement; what do the feedback functions look like

A

in ratio schedules, reinforcement keeps growing with responding, while in interval schedules it rises and then plateaus
this is because in ratio schedules, the greater the responding, the greater the reinforcement

63
Q

role of feedback/reinforcement functions (interval vs ratio)

A

ratio: the greater the responding, the greater the number of reinforcers earned; responding sets the limit on rewards/reinforcers
interval: responding does not determine the amount of reward; the interval does

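The contrast described in the two cards above — response-determined reward on ratio schedules versus time-capped reward on interval schedules — can be written as simple feedback functions. A sketch with my own function names, assuming responses are spread evenly through the session:

```python
def ratio_feedback(responses, n):
    """FR-n: reinforcers earned grow linearly with responses made."""
    return responses // n

def interval_feedback(responses, session_s, interval_s):
    """FI: at most one reinforcer per elapsed interval, and only if a
    response occurs in it (assumes responses are spread through the session)."""
    max_by_time = session_s // interval_s
    return min(responses, max_by_time)

print(ratio_feedback(100, 10))           # 10
print(ratio_feedback(200, 10))           # 20 -> doubling responses doubles reward
print(interval_feedback(100, 3600, 60))  # 60
print(interval_feedback(200, 3600, 60))  # 60 -> more responding, same reward
```

The last two lines show the plateau from card 62: past the time-set cap, extra responding on an interval schedule earns nothing.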
64
Q

role of short/long IRTs (inter-response times)

A

IRT = the interval between successive responses
Short IRT:
- the response is reinforced soon after the preceding one
- reinforced on ratio schedules
- high rate of responding
Long IRT:
- the response is reinforced a while after the preceding one
- reinforced on interval schedules
- low rate of responding

65
Q

instrumental; concurrent schedules of reinforcement

A

2 schedules of reinforcement are in effect at the same time and the subject is free to switch from one response key to the other
allows for continuous measurement of choice, because the organism is free to switch from one option to the other at any time
look at how actions are distributed across the multiple (here, 2) options

66
Q

instrumental; is behaviour always reinforced

A

No, it isn't; it can be partially or intermittently reinforced: performing the response more than once to get the reinforcement (e.g., studying for a test multiple times to get a good grade).

67
Q

why are contiguity and contingency important factors in the response-reinforcer relationship

A

contingency is important because reinforcer delivery depends on the instrumental response
contiguity is important because it eliminates the likelihood of associating other, intruding responses with the reinforcer

68
Q

instrumental concurrent reinforcement; how do we calculate the rate of reinforcement and of responding on one option/key

A

rate of responding on the left key: BL / (BL + BR)
rate of reinforcement on the left key: RL / (RL + RR)
if BL = BR, then the ratio is .5
if BL > BR, then the ratio is > .5
if BL < BR, then the ratio is < .5

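The formulas above are the same proportion applied to behaviour counts and to reinforcer counts; a minimal sketch (the function name is mine):

```python
def relative_rate(left, right):
    """Proportion allocated to the left key: BL / (BL + BR) for responses,
    or RL / (RL + RR) for reinforcers."""
    return left / (left + right)

print(relative_rate(150, 50))  # 0.75 of responding is on the left key
print(relative_rate(50, 50))   # 0.5 -> equal allocation
```

Under the matching law, these two proportions — computed from responses and from reinforcers — should come out roughly equal.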
69
Q

concurrent reinforcement pigeon study

A

concurrent VI-VI schedule: VI 6 and VI 2
pigeons maximize the rewards they collect by splitting their time across both keys
this confirms the matching law

70
Q

matching law; concurrent reinforcement

A

the relative rate of responding matches the relative rate of reinforcement; plotting one against the other gives a diagonal line

71
Q

3 ways in which deviations from the matching law can occur

A

response bias
undermatching
overmatching

72
Q

deviations from the matching law: response bias

A

if one option is more pleasing than the other, you spend more time on it out of preference; consistently more responding on one side

73
Q

deviations from the matching law: undermatching

A

when the animal responds less than predicted on the advantageous schedule
reduced sensitivity of choice behaviour to the rate of reinforcement
e.g., with more reinforcement on the left than on the right, the animal does respond more to the left, but less than the matching law predicts: a relative reinforcement rate of .7 on the x-axis should align with .7 on the y-axis, but responding falls below it

74
Q

deviations from the matching law: overmatching

A

a higher rate of responding for the better of the 2 schedules than the matching law predicts
responding more than expected on the advantageous, more-reinforced side

75
Q

matching law limits

A

focuses on average rates of responding over the session or time period
does not account for when individual responses are made

76
Q

alternative theories to compensate for matching law inaccuracies (3)

A

molecular/momentary theories
molar theory
melioration theory

77
Q

molecular/momentary theories of rate of responding and reinforcement

A

the driving mechanism is a decision about which alternative has the highest probability of reinforcement at that moment
tracking momentary changes in the local probabilities of reinforcement for each alternative response
the probability of reinforcement for a behaviour immediately following reinforcement of that behaviour is low

78
Q

melioration theory of rate of responding and reinforcement

A

the subject shifts its behaviour toward whichever alternative currently has the higher local rate of reinforcement, adjusting until the local rates are equal; this local adjustment produces matching over the session

79
Q

molar theory of rate of responding and reinforcement

A

looks at aggregates and ignores individual responses
the distribution of responses maximizes reinforcement over the entire session; maximizing rewards over the long run
works well for ratio and concurrent-ratio schedules; poor for comparing interval schedules