structured models of similarity
Knowledge is coherent and connected: an interconnection of meanings
We don’t just represent “flat feature lists”
– E.g., “blue” “big” “communist”
Representations are structured. Objects are bound by relations among them. We represent the roles objects play in relations.
We find similarity in the relations among things, and the roles things play, not just in their features.
compositional knowledge
attribute: one argument bound to a predicate
e.g. big (sun)
lower-order relation: a predicate binding multiple arguments to roles
e.g. bigger (sun, planets)
higher-order relation: a predicate taking lower-order relations as its arguments
e.g. CAUSE [bigger (sun, planets), revolvesAround (planets, sun)]
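These three levels can be sketched in Python by encoding predicates as nested tuples; the predicate names and the `order` helper are illustrative, not from any particular model:

```python
# Represent predicates as (name, arg1, arg2, ...) tuples; arguments may
# themselves be predicate tuples, which gives higher-order relations for free.

attribute = ("big", "sun")                      # big(sun): one argument
lower_order = ("bigger", "sun", "planets")      # bigger(sun, planets)
higher_order = ("CAUSE",                        # a relation over relations
                ("bigger", "sun", "planets"),
                ("revolvesAround", "planets", "sun"))

def order(pred):
    """0 = attribute, 1 = lower-order relation, 2+ = higher-order relation."""
    name, *args = pred
    if any(isinstance(a, tuple) for a in args):
        return 1 + max(order(a) for a in args if isinstance(a, tuple))
    return 0 if len(args) == 1 else 1

print(order(attribute), order(lower_order), order(higher_order))  # 0 1 2
```

The key design point is compositionality: the same `bigger` predicate appears on its own and as an argument to `CAUSE` without being re-represented.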
Markman & Gentner (2000) study of structured models of similarity
In addition to the local features circle and triangle, there are global configural features like “triangle-is-above-circle”
Items 5 & 6: maintaining multiple relations even with shape changes, e.g. AND [above (shape1, shape2), bigger (shape1, shape2)]
takeaway: it is more rational to represent a set of relations and a set of elements that can be composed
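The takeaway can be sketched as code: when relations and elements are separate representational pieces, the same relational pattern is recognized across scenes with entirely different objects. This is a toy illustration, not Gentner's structure-mapping engine:

```python
# Two scenes with the same relational structure but different objects.
scene1 = {("above", "triangle", "circle"), ("bigger", "triangle", "circle")}
scene2 = {("above", "square", "star"), ("bigger", "square", "star")}

def shared_relations(a, b):
    """Relation predicates shared between scenes (ignoring role fillers)."""
    return {ra[0] for ra in a for rb in b if ra[0] == rb[0]}

# A flat feature-list model sees no overlap at all:
features1 = {"triangle", "circle"}
features2 = {"square", "star"}

print(shared_relations(scene1, scene2))  # {'above', 'bigger'}
print(features1 & features2)             # set()
```

The structured representation finds the two scenes similar; the unstructured one finds nothing in common.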
unstructured models break down in the real world
Alignable differences
Which differences are easier to list: hotel vs. motel, or hotel vs. coconut?
Markman & Gentner (1994) showed that the more similar two things are, the more differences people list
– Because of shared representational elements, distinct aspects stand out
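A toy sketch of why this happens: once two representations are aligned, every shared dimension with a differing value is an alignable difference, so similar pairs (which share more dimensions) yield more differences. The attribute dictionaries below are invented for illustration:

```python
hotel   = {"lodging": True, "rooms": "many", "stay": "long",  "price": "high"}
motel   = {"lodging": True, "rooms": "many", "stay": "short", "price": "low"}
coconut = {"edible": True, "grows_on": "palm"}

def alignable_differences(a, b):
    """Shared dimensions whose values differ after alignment."""
    return {d for d in a.keys() & b.keys() if a[d] != b[d]}

print(len(alignable_differences(hotel, motel)))    # 2
print(len(alignable_differences(hotel, coconut)))  # 0
```

Hotel and coconut share no dimensions, so despite being wildly different there is nothing alignable to list.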
summary: 3 approaches to similarity
Unstructured 1: Spatial: aims to capture the global organization of specific semantic domains, or of word meanings generally
– By maintaining similarity relationships across concepts
– Useful computationally for capturing large amounts of data with relatively simple math
Unstructured 2: Feature-based: Tversky (1977) argues that similarity comparisons violate geometric/spatial axioms, and that common and distinctive feature sets better explain similarity data
Structured/relational model: compositional representations in which relations and objects are independent representational elements that can be recombined. Similarity is based on sharing both relations and objects
– Similarity judgments themselves highlight common relations
– Alignable differences
the theory theory
Murphy & Medin (1985): feature similarity is not the only basis for categorization, and is often not what drives the sense of coherence
– We don’t just process novel stimuli in terms of their features, but how they relate to our prior knowledge
Prior knowledge can make categories coherent and easier to learn; an underlying theory is what makes a category coherent
Important knowledge is often causal: features are causally related to each other
Individual categories are coherent, but theories also help to organize domains of knowledge with broad causal principles
conceptual change/theory change in children
Conceptual knowledge is theoretical from early on
– Carey (1985); though see work of Fisher and Sloutsky for arguments that children just use features for categorization
For example, children think life = motion (animism)
– A river is alive; a tree is not
– Statues are “dead,” treated the same as inanimate
Children must do more than learn new biological categories: they must restructure their overarching theories of what life is in order to learn true biology
They must see the causal similarities between plants and animals: using energy, expelling waste, producing offspring
Johnson & Carey (1998) showed that adults with Williams syndrome knew many biological facts but still had a child-like conception of life/nature
– e.g., they can describe that mouths/lungs breathe, but don’t understand breathing’s role in supporting life
Typical Western children gradually change their biological theories between 6 and 12 years of age.
Accruing new facts is easy. Conceptual change/theory change is harder.
causal theory vs unstructured features: DSM-IV vs clinical psychologists
DSM-IV: diagnose by unstructured features (symptom checklists); explicitly avoids causal theories!
Kim & Ahn (2002) wanted to test whether clinicians still reason with causal theories, given how psychologically pervasive such theories are
step 1: clinicians draw causal diagrams of symptoms
step 2: diagnose hypothetical patient descriptions with varying symptoms
step 3: after a delay, recall of the hypothetical patients’ symptoms; causally central features were remembered better
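The causal-centrality idea behind steps 2 and 3 can be sketched: a symptom is central to the extent that other symptoms depend on it in the clinician's causal diagram. The toy symptom graph and the dependency-counting measure below are invented for illustration:

```python
# causes[x] = symptoms directly caused by x (a clinician's causal diagram)
causes = {"low_self_esteem": ["social_withdrawal", "depressed_mood"],
          "depressed_mood": ["insomnia"]}

def centrality(symptom):
    """Number of symptoms downstream of this one (direct + indirect)."""
    downstream = causes.get(symptom, [])
    return len(downstream) + sum(centrality(s) for s in downstream)

# Causally central symptoms carry more diagnostic weight and are
# recalled better than peripheral ones.
print(centrality("low_self_esteem"))  # 3
print(centrality("insomnia"))         # 0
```

On this sketch, a hypothetical patient missing the root-cause symptom looks like a worse category member than one missing only a leaf symptom, matching the diagnosis pattern in step 2.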
The “theory theory”: Summary
• Theoretical or causal knowledge among features makes concepts cohere
– Causally central features seen as more important
• Theoretical knowledge organizes domains
– Child development shows “conceptual change” or theory revision: a reorganization of conceptual knowledge that goes beyond learning new facts or individual concepts
• Despite the explicit goal and organization of the DSM-IV against causal theories, clinicians diagnose and remember patients in line with their own causal theories
ad hoc and goal-derived categories
• Goals can make categories coherent, and even lead to constructing categories on the fly.
e.g. things to take out of your house in case of fire
• One way goal-derived and ad hoc categories differ from feature-based categories is the importance of ideals
e.g. a 0-calorie & delicious food is the ideal diet food. No real exemplar meets this ideal, but it’s the standard for assessing exemplars.
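The role of ideals can be sketched: a goal-derived category grades its members by closeness to an ideal endpoint that no real exemplar reaches, rather than by closeness to the category's central tendency. The foods and scores below are invented:

```python
# Diet foods scored on (calories, tastiness 0-10); the ideal is 0 calories
# and maximum taste -- a point no real food occupies.
foods = {"celery": (10, 2), "yogurt": (120, 6), "cake": (400, 9)}
IDEAL = (0, 10)

def distance(a, b):
    """Euclidean distance between two score vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Goodness-of-example = closeness to the ideal.
ranked = sorted(foods, key=lambda f: distance(foods[f], IDEAL))
print(ranked)  # ['celery', 'yogurt', 'cake']
```

A prototype model would instead rank foods by distance to the average diet food; the ideal-based ranking can differ sharply from that.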
relational categories
Members are represented by relational structures, or by roles within those structures
– Schema-governed category: names the relational system itself
– Role-governed category: names a role within that system
difference between relational and feature-based categories
relational categories summary
Theories of relational concepts emphasize representational pluralism: there are multiple kinds of concepts that may be represented in different manners for different purposes
Relational categories and feature-based categories are different:
Exemplar similarity
– Relational category exemplars seen as less similar
Extrinsic vs. intrinsic properties
– Relational categories rely more on extrinsic properties; the difference is reflected in online text
Ideals vs. central tendencies/prototypes
– Ideals are more important for relational categories
Schemas and roles are representationally linked
– Novel relational structures license novel roles.
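The schema-role link can be sketched: defining a novel relational schema automatically licenses the role categories that go with it. The "guarding" schema below is an invented example:

```python
# A relational schema names the system; its argument slots name the roles.
def make_schema(name, roles):
    """A schema-governed category plus the role-governed categories it licenses."""
    return {"schema": name, "roles": roles}

guarding = make_schema("guarding", ["guard", "guarded"])

# Instantiating the schema binds objects to its roles:
instance = dict(zip(guarding["roles"], ["dog", "house"]))
print(instance)  # {'guard': 'dog', 'guarded': 'house'}
```

The point is representational: as soon as the relational structure exists, its role slots are available as categories in their own right, with no extra learning about features.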
structured vs unstructured approaches of AI
good old-fashioned AI (GOFAI): based on structured theory; serial, step-by-step logical problem solving; compositional, structured representations; discrete representational elements
Deep learning: based on unstructured, distributed representations; parallel processing; learns from large amounts of data rather than hand-coded rules