Representation
How we represent the world so we can make decisions about it
Sensory representations
Uncertainty estimation
Bayesian integration
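Bayesian integration can be sketched as precision-weighted cue combination: each sensory cue is a Gaussian estimate, and cues are weighted by their reliability (inverse variance). A minimal sketch, with made-up numbers (the function name and values are illustrative, not from the lecture):

```python
# Precision-weighted combination of two Gaussian cues (standard
# Bayesian cue-integration math; the numbers below are made up).

def integrate(mu_a, var_a, mu_b, var_b):
    """Combine two independent Gaussian estimates of the same quantity."""
    w_a = 1.0 / var_a          # precision (inverse variance) of cue A
    w_b = 1.0 / var_b          # precision of cue B
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)    # combined estimate is more precise than either cue
    return mu, var

# A reliable visual cue (variance 1) and a noisy auditory cue (variance 4):
mu, var = integrate(10.0, 1.0, 14.0, 4.0)
print(mu, var)   # the combined estimate sits closer to the reliable cue
```

The combined variance (0.8) is smaller than either cue's alone, which is the signature of optimal integration reported in cue-combination experiments.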
valuation
Value representations in PFC
Availability/desirability
Assign value, encode different aspects of value
action selection
Striatum = biases the selection process
Action selection in striatum
outcome evaluation
Confidence/metacognition
learning
Key part: feedback changes preferences
If incorrect = something is wrong = estimate how much you should update your beliefs (representation, valuation, or action selection) to do better
mice adapt to changes in reward contingencies
Adaptation in learning
Use reward history to assess whether an unrewarded trial is unrewarded by chance or because of a block change
Invert contingencies of task
Lesions = like in the gambling task, impairment in the ability to learn to adapt to the environment
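The reward-history idea above can be sketched with a delta-rule running estimate: a single unrewarded trial barely moves the estimate, but a run of them (a block change) drags it down. The learning rate and trial sequence here are arbitrary illustrations, not from the lecture:

```python
# Toy sketch: use reward history to judge whether unrewarded trials
# reflect chance or a block change (alpha and the trials are made up).

def running_reward(rewards, alpha=0.3):
    """Exponentially weighted reward estimate across a trial sequence."""
    estimate = 0.5            # start unsure
    history = []
    for r in rewards:
        estimate += alpha * (r - estimate)   # delta-rule update
        history.append(estimate)
    return history

trials = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]  # contingency inverts mid-session
est = running_reward(trials)
print(est[2], est[-1])  # one miss barely moves the estimate; a run of misses does
```

After the single unrewarded trial the estimate is still above 0.5, but after the block of misses it falls well below 0.2, which is the kind of evidence an animal could use to decide the contingencies have inverted.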
what are the ingredients to build an intelligent agent
Computational approaches = control everything coming in and out of the system and the learning process
Need all the various components to learn/be flexible in the environment
What part of the learning process is important to learn a certain behaviour?
computational approach to cognition
Agent interacts with the world
Goal = maximize reward
Aspects of environment are represented in diff brain areas
How do observations, rewards, actions change internal model of an agent
What are the different types of learning in situations where the agent is only fed observations vs when it is really interacting with the world and receiving reward
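The agent-world loop above can be sketched as a minimal interaction cycle: observe, act, receive reward, update the internal model. The two-armed bandit environment and all names/parameters below are hypothetical illustrations, not from the lecture:

```python
# Minimal agent-environment loop on a hypothetical two-armed bandit.
import random

random.seed(0)  # fixed seed so the run is reproducible

class Bandit:
    """Environment: arm 1 pays off more often than arm 0."""
    probs = [0.2, 0.8]
    def step(self, action):
        return 1 if random.random() < self.probs[action] else 0

class Agent:
    """Keeps a value estimate per action, updated from reward."""
    def __init__(self):
        self.values = [0.0, 0.0]
    def act(self):
        if random.random() < 0.1:                    # explore occasionally
            return random.randrange(2)
        return self.values.index(max(self.values))   # otherwise exploit
    def learn(self, action, reward):
        self.values[action] += 0.1 * (reward - self.values[action])

env, agent = Bandit(), Agent()
for _ in range(500):
    a = agent.act()
    r = env.step(a)
    agent.learn(a, r)   # reward changes the agent's internal model
print(agent.values)     # the better arm should end up valued higher
```

The loop makes the lecture's point concrete: observations, actions, and rewards are the only channels through which the agent's internal model changes.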
what is evidence that a behaviour is learned
Action, conditioning, habit
Learning of new Jamir
Improving accuracy = signature of learning
Is behaviour innate or learned - motor control
nature vs nurture
Giraffe baby can walk immediately
Human baby can’t - learns over time
Complexity of a behaviour is not an indication of learned vs innate behaviour - lots of structure exists in the networks that can enable complex innate behaviours
implicit vs explicit
Learning can be:
Explicit = taught, like reading a book about physics
Implicit = Jenga; acquire understanding at some point, instinctive - an intuitive understanding of physics (gravity)
motor skill learning and memory
Recall = HM
Acquired a specific motor skill but doesn't remember it = improved over days, based on implicit learning, and doesn't transfer to the other hand
learning to see and to navigate
Across many domains = many brain processes involved, but we can think about some key concepts that are needed for learning
Is learning in a given cortex domain-general or domain-specific (sensory/motor learning)?
learning to locate a place
First trial = rat in pool swimming
If swimming = doesn’t see hidden platform so has to explore space and encounter platform
After 10 trials = built a map of the space and can travel directly to the platform = learned where it is
= learned a cognitive map of space
learning to discriminate
Have to compare - butterflies
If you do the task over and over = will get better
what is an essential component of learning
Memory
Repetition - tests learning
Feedback = internal or external
Adaptation
name diff types of tasks
2 main goals = classification or regression
Do them every day
describe classification
Each data point has label
Goal of algorithm = infer label of new data points
Is it a or b
Will it snow tomorrow?
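The "infer the label of a new point" idea can be sketched with a tiny nearest-neighbour classifier; the data (temperature → snow/no snow) are made up for illustration:

```python
# Tiny nearest-neighbour classifier: each training point has a label,
# and the algorithm infers the label of a new point (toy 1-D data).

def classify(train, x):
    """Return the label of the training point whose feature is closest to x."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# (feature, label) pairs: e.g. temperature in degrees C -> "snow"/"no snow"
train = [(-5, "snow"), (-2, "snow"), (4, "no snow"), (9, "no snow")]
print(classify(train, -3))   # -> snow
print(classify(train, 7))    # -> no snow
```

The output is discrete ("is it A or B"), which is exactly what separates classification from regression below.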
describe regression
Each data point has a value
Goal of algorithm = infer value of new data points
More continuous prediction of value
What will the temperature be at 12pm tomorrow?
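The continuous-prediction idea can be sketched with ordinary least squares on toy data; the hours and temperatures below are invented for illustration (and chosen to lie exactly on a line):

```python
# Tiny least-squares regression: each data point has a continuous value,
# and the algorithm infers the value at a new point (toy data).

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# hour of day -> temperature in degrees C; predict the temperature at 12pm
hours = [6, 8, 10, 14]
temps = [10.0, 12.0, 14.0, 18.0]
a, b = fit_line(hours, temps)
print(a * 12 + b)   # -> 16.0 (these toy data happen to lie on a line)
```

Unlike the classifier, the output here is a number on a continuous scale, which is what "regression" means in the notes above.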
types of learning algorithms
Different types of learning styles depending on feedback
Distinguished by how data is handled and what the signal for learning is - how is the info you have about the world processed
How can they help us understand learning in the brain - what is the type of feedback?
What type of learning happens in different brain areas at different stages?
name the types of learning algorithms
Supervised learning
Unsupervised learning
Reinforcement learning
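The three algorithm families above differ mainly in the feedback signal they learn from. A schematic sketch of that contrast (all numbers and the single-number "model" are made up for illustration):

```python
# Schematic contrast of the three learning signals on one data point.

x, label, reward = 2.0, 1.0, 0.5
prediction, lr = 0.0, 0.1

# Supervised: a teacher provides the correct label; learn from the error.
supervised_update = lr * (label - prediction)

# Unsupervised: no labels; learn from the structure of the data itself,
# e.g. move a cluster centre toward the observation.
centre = 0.0
unsupervised_update = lr * (x - centre)

# Reinforcement: only a scalar reward; learn from the reward prediction error.
value = 0.0
rl_update = lr * (reward - value)

print(supervised_update, unsupervised_update, rl_update)
```

All three are delta-rule-shaped updates; what changes is whether the target is a teacher's label, the data itself, or a scalar reward — which is why asking "what is the type of feedback?" helps map algorithms onto brain areas.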
describe learning in artificial networks
Control data, learning rules
Can observe activity of all the units in the network
Can define task
= build everything, complete control of the learning process and capabilities of the system
- can help us think about brain
describe learning in biological networks
Have some control over data
Don’t control learning rules
Can observe the activity of only a very small fraction of units in the network - only some of the neural activity
Have some control over task
Way harder
architecture of convolutional neural network
Simple feature maps -> object representations
Hierarchical representation
Train on data and learn
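The "simple feature maps → object representations" idea rests on convolution: a small filter slides over the input to build a feature map, and pooling summarises it; stacking such layers gives the hierarchy. A hand-written 1-D toy (real CNNs are 2-D, multi-layer, and learn their filters from data):

```python
# Minimal sketch of the convolutional building block: filter -> feature
# map -> nonlinearity -> pooling. The filter here is hand-picked, not
# learned, purely to show the mechanics.

def conv1d(signal, kernel):
    """Slide kernel over signal, producing a feature map."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Nonlinearity: keep positive responses only."""
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    """Summarise each window by its strongest response."""
    return [max(xs[i:i + size]) for i in range(0, len(xs), size)]

signal = [0, 0, 1, 1, 0, 0, 1, 1]    # input containing two rising "edges"
edge_filter = [-1, 1]                # simple feature: detect a rise
feature_map = max_pool(relu(conv1d(signal, edge_filter)))
print(feature_map)   # high where the input rises, zero elsewhere
```

Training replaces the hand-picked filter with learned ones, and stacking layers turns edge-like features into the object-level representations the notes mention.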