Learning Objectives
11-1
Explain the basic structure and purpose of language
11-2
Describe how language can have ambiguity and how these issues are resolved
11-3
Describe the process of “parsing” and its role in comprehension
11-4
Evaluate how prediction and inference are important factors for language
11-5
Describe the phenomenon of “common ground” as it relates to conversation
11-6
Analyze similarities and differences between common ground and syntactic coordination as tools for communication
11-7
Explain the impacts of bilingualism on cognition
11-8
Describe disfluencies and their role in language
11-9
Analyze the connections between language and music
Language
A system of communication using sounds or symbols that enables us to express our feelings, thoughts, ideas, and experiences.
This definition of language captures the idea that the ability to string sounds and words together opens the door to a world of communication.
Communication
is a system by which information is exchanged between individuals. Let’s consider a few examples. Dogs bark and growl, both auditory methods of communication. Similarly, monkeys use a variety of vocal sounds to communicate with others. Verbal communication is quite common in the animal kingdom, ranging from mammals to birds to amphibians to insects (Figure 11.1a). However, communication does not have to be verbal.
Are Communication and Language the Same Thing?
Communication is an umbrella term that includes complex language. However, communication can come in many forms that do not meet the requirements of a formal language. For instance, dogs “bark” when they want attention; monkeys have a repertoire of “calls” that stand for things such as “danger” or “greeting”; and bees perform a “waggle dance” at the hive that indicates the location of flowers.
Behavior is another common form of communication
For instance, bees use complex dances to indicate locations of food to others; dogs use a variety of behaviors (tail wagging, ear position, showing teeth) to communicate; birds use plumage for defense and courtship dances; and some fish can change color and display fins.
Behavior as a communication tool is quite common in the animal kingdom. Primates, including humans, utilize many behavioral methods of communication, including facial expressions, gestures, and posture.
Some human behaviors commonly used for communication are fairly unique in the animal kingdom, such as blushing when embarrassed, pointing to direct attention, and producing tears when sad (crying). “Fairly unique” does not mean we are the only species that communicates in these ways.
Many animals change color for a variety of reasons, and using behavior to guide attention can even be taught to dogs (Carballo et al., 2016).
Some other mammals may produce tears in response to strong emotions; however, most systematic study on this topic has concluded that if other species produce tears related to emotion, it is exceptionally rare.
In addition to communication through sound and behavior, many species also use chemicals to share information.
Insects, like ants, use pheromones to create foraging trails; honeybees use pheromones to regulate social behavior; and moths use pheromones to attract mates over long distances. Chemical communication is not limited to insects, though. Many other species, including many mammals, also use pheromones to communicate. Rodents use pheromones for territory marking, mating, and social bonding; dogs and other carnivores use pheromones to signal territorial boundaries and reproductive status; and many primates use pheromones for social communication. Whether humans are impacted by pheromones is still debated.
Communication is ubiquitous in the animal kingdom.
Many animals make sounds, perform behaviors, and produce smells to share information with others. However, are any non-human animals using language? Communication encompasses a broad range of methods of sharing information between individuals or groups. Language is a specific form of communication characterized by its structured system of specific rules and abstract symbols. For a variety of evolutionary and physiological reasons, humans are the only species with the brain physiology (large prefrontal cortex, Broca’s area, Wernicke’s area) and production physiology (tongue, larynx, etc.) to invent, produce, and comprehend complex, formal language.
As we will explore, human languages have far more in common with one another than they have differences. Language is a human-specific ability. As impressive as some animal communication is, it is much more rigid than human language. Animals use a limited number of sounds or gestures to communicate about a limited number of things that are important for survival. In contrast, humans use a wide variety of signals, which can be combined in countless ways. One of the properties of human language is, therefore, creativity.
alphabet
is a standardized set of “letters” or symbols representing individual sounds (phonemes) used in that language.
Not all languages use an alphabet, however.
Some languages use syllabaries, in which symbols represent whole syllables rather than individual phonemes.
The Native American language Cherokee and the Japanese kana scripts are examples of syllabary writing systems.
Other languages do not use alphabets or syllabaries—and instead use logographic systems where symbols represent whole words and bound morphemes, such as “un” as in “unhappy” or “ness” as in “happiness.”
Chinese Hanzi characters are an example of a logographic system.
Ancient Egyptian hieroglyphs and Mayan glyphs also used logographic systems.
Globally, some languages combine features of alphabets, syllabaries, and logographic systems, and some languages have no written form at all.
In other words, there is a tremendous diversity in format between languages.
The English (or Latin) alphabet can also be represented in other modalities when language needs to be communicated in other formats.
For instance, written language requires seeing letters, and spoken language requires hearing sounds.
Since some people cannot hear or see, sign language and Braille were created.
Sign language uses hand signals, gestures, and other behaviors to represent language for people who cannot hear.
Braille uses tactile patterns that can be felt with the fingers to represent language for people who cannot see. Braille has an additional advantage.
The Creativity of Human Language
Human language provides a way of arranging a sequence of signals—sounds for spoken language, letters and written words for written language, tactile symbols for Braille, and physical signs for sign language—to transmit, from one person to another, things ranging from the simple and commonplace (“My car is over there”) to messages that have perhaps never been previously written or uttered in the entire history of the world (“My trip with Zelda, my cousin from California who lost her job in February, was on Groundhog Day”). The first step to understanding the creativity behind human language is to consider the alphabet.
The Hierarchical and Rule-Based Nature of Language
Language makes it possible to create new and unique sentences because it has a structure that is both hierarchical and governed by rules.
The hierarchical nature of language proposes that language consists of a series of small components that can be combined to form larger units. For example, words can be combined to create phrases, which in turn can create sentences that can become components of a story.
The rule-based nature of language means that these components can be arranged in certain ways (“What is my cat saying?” is permissible in English), but not in other ways (“Cat my saying is what?” is not).
These two properties—a hierarchical structure and rules—endow humans with the ability to communicate whatever we want to express.
The Universal Need to Communicate with Language
People do sometimes “talk” to themselves. In fact, “talking to yourself” is what you are doing when you are using the phonological loop to rehearse a phone number held in working memory long enough to write it down.
However, language is primarily used for communication, whether it be conversing with another person or reading what someone has written. The human need to communicate using language has been called “universal” because it occurs wherever there are people.
For example, consider the following:
People’s need to communicate is so powerful that when deaf children find themselves in an environment where nobody speaks or uses sign language, they invent a sign language themselves. However, it is important to note that deaf children need to master at least one complete language to reach their cognitive potential
All humans with normal capacities develop a language and learn to follow its complex rules, even though they are usually not aware of these rules. Although many people find the study of grammar to be very difficult, they have no trouble using language.
Language is universal across cultures. There are more than 5,000 different languages, and there is not a single culture without language. When European explorers first set foot in New Guinea in the 1500s, the people they encountered, who had previously been isolated from the rest of the world, had developed more than 750 languages, many of them quite different from one another.
Language development is similar across cultures. No matter what the culture or the particular language, children generally begin “cooing” (vowel sounds) around 2 months old, “babbling” (new consonant sounds) at about 7 months, say a few meaningful words by their first birthday, and produce their first multiword utterances around age 2 (Levelt, 2001).
Even though many languages are very different from one another, we can describe them as being “unique but the same.” They are unique in that they use different words and sounds, and they may use different rules for combining these words (although many languages use similar rules). They are the same in that all languages have words that serve the functions of nouns and verbs, and all languages include a system to make things negative, to ask questions, and to refer to the past and present.
Revisiting the Imagery Debate: Vision and Language
Think back to the imagery debate from the previous chapter. We noted that humans typically use a combination of language (propositional) and vision (depictive and spatial) information when thinking and using imagination. In this chapter, we discuss language in detail; however, we must first consider how language weighs into the imagery debate. For most people, our thoughts are a combination of visual and language-based information. Take, for example, a time when you were remembering an important conversation. You probably felt like you were hearing the conversation that was had and like you could see the person with whom you were conversing.
However, we also know there are individual differences in cognitive abilities. In the previous chapter, we discussed how some people cannot intentionally generate mental images; this condition is called aphantasia. These individuals often compensate for this deficit by using other techniques, such as language. Similarly, there is also individual variation in internal dialogue abilities. As with aphantasia, these differences may come with a cost.
For instance, individuals with reduced internal dialogue may have deficits in creative achievement, originality during divergent thinking, and production of diverse responses (Rooij, 2023). Similar to individuals with aphantasia, individuals with reduced internal dialogue likely compensate with other techniques, such as visual imagery. However, research in this area is new and ongoing.
Studying Language
The modern scientific study of language traces its beginnings to the work of Paul Broca (1861) and Carl Wernicke (1874).
In 1957, B. F. Skinner, the main proponent of behaviorism, published a book called Verbal Behavior, in which he proposed that language is learned through reinforcement. According to this idea, just as children learn appropriate behavior by being rewarded for “good” behavior and punished for “bad” behavior, children learn language by being rewarded for using correct language and punished (or at least not rewarded) for using incorrect language.
In the same year, linguist Noam Chomsky (1957) published a book titled Syntactic Structures, in which he proposed that human language is coded in the genes.
According to this idea, just as humans are genetically programmed to walk, they are also programmed to acquire and use language.
Chomsky concluded that despite the wide variations that exist across languages, the underlying basis of all language is similar.
Most important for our purposes, Chomsky saw studying language as a means to studying the properties of the mind and therefore disagreed with the behaviorist idea that the mind is not a valid topic of study for psychology.
Chomsky’s disagreement with behaviorism led him to publish a scathing review of Skinner’s Verbal Behavior in 1959. In his review, he presented arguments against the behaviorist idea that language can be explained in terms of reinforcements and without reference to the mind.
As we discussed in Chapter 1, Chomsky argued that as children learn language, they produce sentences that they have never heard and that have never been reinforced.
Chomsky’s criticism of behaviorism was an important event in the cognitive revolution and began changing the focus of the young discipline of psycholinguistics, the field concerned with the psychological study of language.
psycholinguistics
the field concerned with the psychological study of language.
The goal of psycholinguistics is to discover the psychological processes by which humans acquire and process language.
The four major concerns of psycholinguistics are as follows:
Comprehension. How do people understand spoken and written language? This includes how people process language sounds; how they understand words, sentences, and stories expressed in writing, speech, or sign language; and how people have conversations with one another.
Representation. How is language represented in the mind? This representation includes how people group words into phrases to create meaningful sentences and how they make connections between different parts of a story.
Speech production. How do people produce language? This includes the physical processes of speech production and the mental processes that occur as a person creates speech.
Acquisition. How do people learn language? This includes both how children learn language and how people learn additional languages, either as children or later in life.
Lexicon
A person’s knowledge of what words mean, how they sound, and how they are used in relation to other words.
Also known as a mental dictionary.
Semantics
The meanings of words and sentences. Distinguished from syntax.
Lexical semantics: the meaning of words
Not All Words Are Created Equal: Differences in Frequency
Some words occur more frequently than others in a particular language. For example, in English, home occurs 547 times per million words, while hike occurs only four times per million words. The frequency with which a word appears in a language is called word frequency, and the word frequency effect refers to the fact that we react more rapidly to high-frequency words like home than to low-frequency words like hike. This matters because a word’s frequency influences how we process it.
lexical decision task
One way to illustrate processing differences between high- and low-frequency words is to use a lexical decision task in which the task is to decide as quickly as possible whether strings of letters are words or nonwords. Try this task for the following four words: reverie, cratily, history, garvola. Note that there were two real words, reverie, which is a low-frequency word, and history, which is a high-frequency word. Research using the lexical decision task has demonstrated slower response to low-frequency words.
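The core of the lexical decision task can be sketched in a few lines of code. The mini-lexicon and its frequency values below are illustrative placeholders (real studies draw counts from large corpora), but the sketch shows the participant’s actual judgment: is this letter string a word or not?

```python
# Toy lexical decision task: classify letter strings as words or nonwords.
# Frequencies (per million words) are illustrative values, not corpus data.
LEXICON = {"history": 122.3, "home": 547.0, "reverie": 5.1, "hike": 4.0}

def lexical_decision(letter_string):
    """Return True if the string is a word in our mini-lexicon."""
    return letter_string in LEXICON

for item in ["reverie", "cratily", "history", "garvola"]:
    label = "word" if lexical_decision(item) else "nonword"
    print(f"{item}: {label}")
```

In an actual experiment, the measure of interest is not accuracy but reaction time, which is reliably shorter for high-frequency entries like history than for low-frequency entries like reverie.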
The slower response for low-frequency words has also been demonstrated by measuring people’s eye movements while reading. Keith Rayner and Susan Duffy (1986) measured participants’ eye movements and the durations of the fixations that occur as the eye pauses at a particular place (see Chapter 4) while they read sentences that contained either a high-frequency or a low-frequency target word, where frequency refers to how often a word occurs in normal language usage. The average frequencies were 5.1 times per million for the low-frequency words and 122.3 times per million for the high-frequency words.
For example, the low-frequency target word in the sentence “The slow waltz captured their attention” is waltz, and replacing waltz with the high-frequency word music creates the sentence “The slow music captured their attention.” The duration of the first fixation on the words, shown in Figure 11.4a, was 37 milliseconds longer for low-frequency words compared to high-frequency words. (Sometimes a word might be fixated more than once, as when the person reads a word and then looks back at it in response to what the person has read later in the sentence.) Figure 11.4b shows that the total gaze duration (the sum of all fixations made on a word) was 87 milliseconds longer for low-frequency words than for high-frequency words.
One reason for these longer fixations on low-frequency words could be that the readers needed more time to access the meaning of the low-frequency words.
The word frequency effect, therefore, demonstrates how our previous experiences with words can influence our ability to access their meaning.
The Pronunciation of Words Is Variable
Another problem that makes understanding words challenging is that not everyone using the same language pronounces words in the same way. People talk with different accents and at different speeds, and, most important, people often take a relaxed approach to pronouncing words when they are speaking naturally.
For example, analysis of how people actually speak has determined that there are around 50 different ways to pronounce the word the.
So how do we deal with this complexity?
One way is to use the context within which the word appears.
The fact that context helps is illustrated by what happens when you hear a word taken out of context.
Irwin Pollack and J. M. Pickett (1964) showed that words are more difficult to understand when taken out of context and presented alone. They recorded the conversations of participants who sat in a room waiting for the experiment to begin.
When the participants were then presented with recordings of single words taken out of their own conversations, they could identify only half the words, even though they were listening to their own voices!
The fact that the people in this experiment were able to identify words as they were talking to each other but could not identify the same words in isolation illustrates that their ability to perceive words in conversations is aided by the context provided by the words and sentences that make up the conversation.
There Are No Silences Between Words in Normal Conversation
The fact that the sounds of speech are easier to understand when we hear them spoken in a sentence is particularly amazing when we consider that, unlike the words you are now reading that are separated by spaces and punctuation, words spoken in a sentence are usually not separated by silence. This result is not what we might expect because when we listen to someone speak, we usually hear the individual words, and sometimes it may seem as if there are silences that separate one word from another. However, remember our discussion in Chapter 3 in which we noted that a record of the physical energy produced by conversational speech reveals that there are often no physical breaks between words in the speech signal or that breaks can occur in the middle of words.
speech segmentation
Recall the experiment by Jennifer Saffran and colleagues (2008), which showed that infants are sensitive to statistical regularities in the speech signal (the way that different sounds follow one another in a particular language) and that knowing these regularities helps infants achieve speech segmentation, the perception of individual words even though there are often no pauses between them.
We use the statistical properties of language all the time without realizing it. For example, we have learned that certain sounds are more likely to follow one another within a word, while other sounds are more likely to be separated across two words. Consider the words pretty baby. In English, it is likely that ty will follow pre within the same word (pre-tty), whereas ba is likely to begin a new word after ty (pretty baby).
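The statistical regularity described above can be made concrete as a transitional probability: how likely is syllable B to follow syllable A? High values suggest the pair lies inside a word; low values suggest a word boundary. The toy syllable stream below is an invented illustration, not Saffran’s actual stimuli.

```python
from collections import Counter

# A toy stream of syllables from repeated phrases like "pretty baby".
stream = ["pre", "tty", "ba", "by", "pre", "tty", "do", "ggy",
          "pre", "tty", "ba", "by"]

# Count adjacent syllable pairs and how often each syllable starts a pair.
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(b follows a) = count(a, b) / count(a)."""
    return pair_counts[(a, b)] / syll_counts[a]

# Within-word transition is certain; the cross-word transition is weaker.
print(transitional_probability("pre", "tty"))  # 1.0
print(transitional_probability("tty", "ba"))
```

In this stream, tty always follows pre (probability 1.0), but ba follows tty only some of the time, hinting that a word boundary falls after tty. Infants appear to track exactly this kind of statistic.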
Another thing that aids speech segmentation is our knowledge of the meanings of words. In Chapter 3, we pointed out that when someone listens to a language with which they are unfamiliar, it is often difficult to distinguish one word from the next, but if they know a language, individual words stand out (see page 72). This observation illustrates that knowing the meanings of words helps us perceive them. Perhaps you have had the experience of hearing individual words that you happen to know in an unfamiliar language seemingly “pop out” from what is otherwise a continuous stream of incomprehensible speech.
Another example of how meaning is responsible for organizing sounds into words is provided by these two responses to the question, “How did the soccer game go?”
I kick it enough
I can’t get enough
Both responses can be pronounced in almost exactly the same way, so hearing them differently depends on the overall meaning of the sentence in which these words appear. This example is similar to the familiar “I scream, you scream, we all scream for ice cream” that many people learn as children. The sound stimuli for “I scream” and “ice cream” are identical, so the different organizations must be achieved by the meaning of the sentence in which these words appear.
Our ability to hear and understand spoken words is affected by:
How frequently we have encountered a word in the past
The context in which the words appear
Our knowledge of statistical regularities of our language
Our knowledge of word meanings
There is an important message here: All of these involve knowledge achieved by learning/experience with language. We will learn that prior knowledge is critical regarding language as we consider how we understand sentences, stories, and conversations. However, we are not quite done with words yet. There is one “problem” we have not yet discussed: Many words have multiple meanings.
Lexical ambiguity
When a word can have more than one meaning. For example, bug can mean an insect, a listening device, to annoy, or a problem in a computer program.
When ambiguous words appear in a sentence, we usually use the context of the sentence to determine which definition applies. For example, if Loanna says, “My mother is bugging me,” we can be pretty sure that bugging refers to the fact that Loanna’s mother is annoying her, as opposed to sprinkling insects on her or installing a hidden listening device in her room (although we might need further context to rule out this last possibility).
Michael Tanenhaus and colleagues (1979)
The examples for bug indicate that context often clears up ambiguity so rapidly that we are not aware of its existence. However, research has shown that something interesting happens in the mind right after a word is heard. Michael Tanenhaus and colleagues (1979) showed that people briefly access multiple meanings of ambiguous words before the effect of context takes over. They did this by presenting participants with a tape recording of short sentences such as She held the rose, in which the target word rose is a noun referring to a flower, or They all rose, in which rose is a verb referring to people standing up.
Tanenhaus and colleagues wanted to determine what meanings of rose occurred in a person’s mind for each of these sentences. To do this, they used a procedure called lexical priming.
Tanenhaus and colleagues measured lexical priming using two conditions:
1. The noun-noun condition: a word is presented as a noun, followed by a noun probe stimulus.
2. The verb-noun condition: a word is presented as a verb, followed by a noun probe stimulus.
For example, in Condition 1, participants would hear a sentence like She held a rose, in which rose is a noun (a type of flower), followed immediately by the probe word flower. Their task was to read the probe word as quickly as possible. The time that elapsed between the end of the sentence and when the participant began saying the word is the reaction time.
To determine if presenting the word rose caused a faster response to flower, a control condition was run in which a sentence like She held a post was followed by the same probe word, flower. Because the meaning of post is not related to the meaning of flower, priming would not be expected and did not occur. As shown in the left bar in Figure 11.5a, the word rose, used as a flower, resulted in a 37 millisecond faster response to the word flower than in the control condition. This outcome is what we would expect because rose, the flower, is related to the meaning of the word flower.
Tanenhaus’s results become more important when we consider Condition 2, when the sentence was They all rose, in which rose is a verb (people getting up) and the probe word was still flower. The control for this sentence was They all touched. The result, shown in the right bar in Figure 11.5a, shows that priming occurred in this condition as well. Even though rose was presented as a verb, it still caused a faster response to flower!
What this means is that the “flower” meaning of rose is activated immediately after hearing rose, whether it is used as a noun or a verb. Tanenhaus also showed that the verb meaning of rose is activated whether it is used as a noun or a verb, and concluded from these results that all of an ambiguous word’s meanings are activated immediately after the word is heard.
To make things even more interesting, when Tanenhaus ran the same experiment but added a delay of 200 milliseconds between the end of the sentences and the probe word, the result changed. As shown in Figure 11.5b, priming still occurs for Condition 1—rose the noun primes flower—but no longer occurs for Condition 2—rose the verb does not prime flower. What this means is that by 200 milliseconds, after hearing the word rose as a verb, the flower meaning of rose is gone. Therefore, it seems the context provided by a sentence can help determine the meaning of a word—but only after a slight delay during which other meanings of a word are briefly accessed.