In this lecture, Professor Sapolsky discusses the neurological foundations of language, with particular focus on the aphasias associated with damage to Broca's and Wernicke's areas. He discusses elements of sign language and what it demonstrates about the nature of language itself: that these brain structures operate for linguistic purposes, not only for the motoric purpose of coordinating the muscles of the tongue and jaw. He also discusses language from an evolutionary perspective and talks about some of the most famous primate subjects, including Nim Chimpsky and Koko the gorilla.
He begins the lecture by wrapping up material from the last lecture. He notes that the developing brain faces a version of the Traveling Salesman problem (how to get to all the sales stops in the most efficient way possible): radial glia extend outward and neurons then trace along them, forming their patterns of connections from there. The goal is to form as many connections as possible while keeping the total length of axonal wiring as short as possible.
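To give a concrete sense of what "the most efficient way possible" means computationally, here is a minimal, hypothetical sketch of the greedy nearest-neighbor heuristic commonly used to approximate the Traveling Salesman problem. It is not from the lecture; the stop names and coordinates are invented purely for illustration.

```python
# A minimal sketch (not from the lecture): the classic greedy nearest-neighbor
# heuristic for the Traveling Salesman problem. The exact problem is NP-hard;
# this heuristic is fast but only approximate.
import math

def nearest_neighbor_tour(stops):
    """Visit every stop once, always hopping to the closest unvisited stop.

    `stops` maps a stop name to an (x, y) coordinate. Returns the visiting
    order and the total distance traveled.
    """
    names = list(stops)
    current = names[0]
    tour, total = [current], 0.0
    unvisited = set(names[1:])
    while unvisited:
        # Pick the unvisited stop closest to where we are now.
        nxt = min(unvisited, key=lambda n: math.dist(stops[current], stops[n]))
        total += math.dist(stops[current], stops[nxt])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour, total

if __name__ == "__main__":
    demo_stops = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6)}
    print(nearest_neighbor_tour(demo_stops))
```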
Semanticity is covered, by which he means that while there is an endless array of possible human sounds, all of the known human languages - approximately 6,000 at the time of the lecture - involve rules (buckets) that constrain those sounds into meaning and establish which sounds count as language. All languages have embedded clauses, meaning that clauses can be nested inside other clauses so that different conditions affect the meaning (similar to an if-then structure). They also all have recursion, or generativity: there are a finite number of words but infinite potential for generating new combinations.
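To make "finite words, infinite combinations" concrete, here is a toy, hypothetical sketch of a recursive phrase-structure grammar. The vocabulary and rewrite rules are invented for illustration, not taken from the lecture: a handful of words can generate an unbounded set of sentences because a noun phrase can embed a clause that itself contains another noun phrase.

```python
# A toy illustration of recursion/generativity: a tiny grammar with a finite
# vocabulary that can nevertheless produce an unbounded number of sentences
# by embedding clauses inside clauses.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # embedded clause -> recursion
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"], ["professor"]],
    "V":  [["saw"], ["chased"], ["admired"]],
}

def expand(symbol, depth=0, max_depth=4):
    """Recursively rewrite a symbol into words, capping depth so the demo halts."""
    if symbol not in GRAMMAR:
        return [symbol]                # terminal word
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [options[0]]         # take the non-runaway option past the cap
    words = []
    for part in random.choice(options):
        words.extend(expand(part, depth + 1, max_depth))
    return words

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(expand("S")))
```

Running it a few times shows the same handful of words yielding new, longer sentences as clauses nest inside one another.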
Displacement is another feature of all human languages: people can talk about things in the past or future, or things happening elsewhere. This distinguishes us from other animals, whose communication is driven by emotion and tied to the here and now. Humans also have the ability to use language arbitrarily; for example, we can express emotions in whatever way we choose.
All languages feature "motherese" - speaking in a high-pitched voice, focusing on melody and tone, and emphasizing pronunciation. He notes that when people speak this way to pets they usually don't articulate with the same careful clarity that they do when speaking to children.
Next he raises the important question of whether the brain structures for language are primarily motoric or are about the underlying cognition. He indicates that it's primarily the latter and provides examples from American Sign Language (ASL) to establish this point.
1. Deaf babies that are learning ASL begin "babbling" with sign language at the same developmental time (around nine months of age) that hearing babies begin to babble, with both doing this the most right before going to sleep.
2. Older individuals who use ASL experience communication difficulties after a stroke similar to those of hearing speakers, depending on which area of the brain was hit.
3. Both spoken language and ASL have prosody - the meaning and emotional tone of a message that is separate from the words themselves: tone, irony, humor, and all the other ways in which the ultimate meaning comes from more than just the words. In ASL, facial expressions are important, as is shifting the body one way and then the other when relating dialogue.
4. For those born deaf, when they learn sign language their auditory cortex lights up, even though it's not being physically stimulated via sounds.
5. There are accents in ASL, as well as puns and poetry.
Neurobiology. He begins by discussing the question of how modular language functions are within the cortex. This comes down to whether language is a specialized function or one that is generally similar to other cortical processes.
1. He mentions kids with Williams Syndrome, who are very fluent in their creation and use of words but who generally have IQs in the borderline range of intellectual impairment. He asks: how can language be so functional when the rest of the brain is such a mess?
2. He notes there are genetic disorders in which people with normal IQs (and no stroke damage) have a harder time producing certain elements of language.
Both points are used to argue that language is a separate module, not readily comparable to the other functions of the brain. The Swiss Army knife analogy is probably a bad one because it cuts both ways - you can view each part of the knife as serving a specific function, or each part as being part of the same knife and thus similar (for example, sharing properties of tensile strength and construction). In the end the conclusion reached is that language is its own cognitive function.
However, much of what the kids with Williams Syndrome say doesn't make sense, and the people with the genetic impairments turn out not to have normal IQs. This remains a controversial issue and clearly has major implications for stroke damage: if it's not modular, in the sense that only those areas can handle the function, then what's lost may be gone for good; but if it is modular (i.e., the function can be moved around), then other brain areas may be able to adapt and take it on.
(Here it's worth noting that Chomsky has argued that language development is like the development of any other organ, such as the kidneys. If it is an inherent process of that sort, it is more likely to be a unique, dedicated structure. The deficits seen in the Williams kids may not be deficits of expression so much as of understanding - that is, they may be expressing exactly what they intend, with the general IQ deficits limiting the accuracy of those comments. This is different from someone with an aphasia, who cannot express what they mean to.)
Language is lateralized - that is, it's handled by only one side of the brain, usually (about 90% of the time) the left side. Early evidence of this came from the Wada test, in which one hemisphere is briefly anesthetized. People undergoing this test would lose language function when the left hemisphere was anesthetized. The Wada test was done prior to surgery on patients with intractable epilepsy, because the surgeons wanted to make sure the surgical site wasn't too close to the language centers. These days brain imaging enables doctors to investigate this without using barbiturates.
Broca's area is in the frontal lobe of the cortex. It is responsible for language production - moving the lips, jaw, and larynx - handling the motoric regulation of speech. Damage there produces an aphasia, specifically a production aphasia: the person has trouble physically generating meaningful words and speech, while remaining good at language comprehension.
Wernicke's area, in the temporal lobe, is responsible for language comprehension. People with a Wernicke's aphasia remain able to fluidly generate words and speech, but the speech often resembles a word salad and makes little sense to the listener.
The arcuate fasciculus connects posterior receptive areas with premotor/motor areas. Professor Sapolsky describes it as connecting Broca's area to Wernicke's area, though a recent article cited at Wikipedia (http://en.scientificcommons.org/50118943) suggests this is not the case - it was previously thought to be the connecting bundle between the two. His example still makes sense either way, because the connection relates to motoric activity: one could comprehend but not produce if the arcuate fasciculus is responsible for connecting to motoric areas. He mentions that stroke victims generally suffer deficits in both areas.
Similar deficits are experienced by deaf individuals who suffer strokes. It's not about the physical production of speech; it's about the underlying cognition. Broca's and Wernicke's areas light up when a deaf individual is watching ASL.
There are also function-specific aphasias and alexias, caused by smaller strokes that hit specific segments within the language centers. Examples include a gerund aphasia, an inability to write despite understanding words, and a sailor who lost his ability to comprehend semaphore.
Broca's and Wernicke's areas show slightly different activation patterns in languages written with characters (logographs), such as Chinese.
He notes that although language is lateralized to the left hemisphere most of the time, the prosody comes in from the right hemisphere. Prosody damage is seen in some stroke victims.
The basal ganglia are next up. They are responsible in part for initiating motor output, especially after communication has run through the cortex and limbic system. They are also involved in gesticulation, which is virtually automatic and is seen in blind individuals as well as in sighted people talking on the phone. Gestures are part of emotional expression.
The limbic system is also involved in communication, especially of its emotional elements. Stroke victims will sometimes have success singing thoughts that they struggled to produce because of their aphasias; accuracy is not high, but it's better than their spoken attempts. Tourette's Syndrome is an example of the limbic system gone wrong, as seen in its most widely known feature - uncontrolled emotional outbursts, especially cursing. So the limbic system is intertwined with language production. This is further seen in cases where subcortical areas are stimulated and the subject says something emotionally loaded. The limbic system also communicates with the right hemisphere and affects prosody.
Humans are the only species with Broca's and Wernicke's areas, but you can see the beginnings of these structures in other primates, including chimps, orangutans, and rhesus monkeys (there is cortical thickening in the corresponding regions). He notes evidence of lateralization in monkeys, including a tendency to show more facial movement on the right side when expressing an emotion. Studies of Australopithecus skulls show some asymmetry, though it's hard to infer much from a skull given that the brain is long gone. All of this points toward a long evolutionary timeline for the development of this capacity.
B.F. Skinner and Noam Chomsky are covered next. Skinner argued that language develops through behavioral conditioning, essentially positing that correct language usage is positively reinforced and thus becomes more likely in the future. This argument makes very little sense and is undermined by the pace of language acquisition, the natural developmental timelines that normal kids go through, and the production of entirely new combinations of words and sentences. Chomsky, thankfully, countered this argument and put an end to a lot of the behaviorism nonsense.
Here is Chomsky's famous article reviewing B.F. Skinner's language acquisition theory, http://www.chomsky.info/articles/1967----.htm.
This debate was significant less because of the merits of the case - Chomsky always had the edge there - than because behaviorism had been dominating the field of psychology for an extended period of time. The battle between Skinner and Chomsky highlighted many of the problematic issues with behaviorism, especially the difference between shaping and internal development. Behaviorists had little to say about the internal workings of the mind, a weakness that left them ill suited to provide the leading theories in the field of psychology. This battle over language was a symbolic Waterloo for Skinner and enabled psychology to move forward in more constructive, scientific directions.
Professor Sapolsky points out that Chomsky's argument rested on the ability to create new language constructions that could not have been previously reinforced - evidence of internal workings that are not shaped and molded by rewards. This is referred to as the generativity of language: new sentences, words, creations. He also argues that there is an innate structure to language, in that kids are able to generalize its rules, such as grammar and syntax. The Chomsky model is further supported by the poverty-of-the-stimulus argument, which holds that children produce more language than they could ever have heard or been rewarded for.
Also important in the Chomsky model is the differential development of language. Not only do people differ in their language skills, but language acquisition has distinct stages (Professor Sapolsky notes that young kids pick up about 10 words a day and end up with a vocabulary of around 60,000 words by college age - clearly a change in pace). And, of course, the brain studies showing specific centers for language production and meaning further Chomsky's point that it's an internal process. Kids picking up language from the ambient environment, accents, the difficulty of learning a second language as you get older, and other findings all point to an internal mechanism that is set to develop on its own and has critical learning periods. As kids age they lose the ability to distinguish between phonemes that aren't relevant to their own language; brain imaging studies show that Wernicke's area does not light up when these subtle phonemic differences are tested, whereas a child raised in the other language community would notice them.
If you learn a language past age 12, you're likely to have an accent. If you learn a second language before age 6, both languages are coded for in a similar pattern within Broca's and Wernicke's areas; learn it after 6 and the second language's representation is more peripheral. There are some bizarre cases in which a stroke victim will lose one language but not the other.
Professor Sapolsky states that new languages are invented by kids. As an example, deaf kids in Nicaragua generated a signing system of their own when they were left to work things out with each other at school; this evolved into Nicaraguan Sign Language. In other words, the possibility of language is inherent - if they do not have a language, kids will naturally create communication systems. It took about three generations for the sign language to evolve rules, embedded clauses, etc. So words are primary, but structure soon follows.
At roughly 12-16 weeks of gestation you begin to see differential development in the fetus on the left side, where Broca's and Wernicke's areas will form, compared to the right side, which does not thicken in the same way. The thickening is visible by about 30 weeks. Enhanced metabolic activity in these areas is not seen within the first couple of years of the child's life, but begins to emerge afterward.
Judith Rich Harris argues that peer influence is more important than parental influence in the development of language. A key example is that kids grow up with the accent of the neighborhood around them, not the accent of their immediate family. This is most conspicuous among the children of first-generation immigrants.
Language is also laden with cultural meaning. For example, some languages include a formal and an informal "you." Additionally, kinship terms and even how stories are told depend on the cultural values inherent in that language system. This goes back to the Sapir-Whorf hypothesis, that language constrains and shapes thought. The counterargument is that the nature of a culture's thoughts shapes its language.
He cites two Amazonian tribes with limited counting systems: one has words for 1, 2, 3, and "more than 3"; the other has words for 1, 2, 3, 4, 5, and "more than 5." Their accuracy is fine up to those numbers, but beyond them 8 looks the same as 10. The people are intelligent, with vocabularies that include thousands of names for edible plants, so they function well within the domains their culture is concerned with.
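As a toy illustration of how such a counting system collapses larger quantities, here is a small, hypothetical sketch; the word list is an invented stand-in, not the tribes' actual numerals.

```python
# Hypothetical stand-in for a limited counting vocabulary: exact words up to a
# cutoff, then a single "many" term. Under such a system, 8 and 10 receive the
# same label and so become indistinguishable by name alone.
def count_word(n, cutoff=3):
    exact = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five"}
    if n <= cutoff:
        return exact[n]
    return "many"

print(count_word(2))                    # "two" -- exact below the cutoff
print(count_word(8) == count_word(10))  # True -- both are just "many"
```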
Animals have the beginnings of semanticity (in this case, meaning that a particular sound has an actual, consistent meaning). For example, vervet monkeys have vocalizations that mean "something scary above" and "something scary below," which are used to tell the troop whether to climb up or down a tree to be safe. Clearly getting this wrong would lead to trouble. The underlying emotion is similar in both calls, but the meanings are different.
Rhesus monkeys have been tested with clips of facial expressions paired with matching or mismatched vocalizations; they become very intrigued when the two do not match up.
Both vervets and squirrels give more alarm calls when relatives are around. Squirrels are even less likely to warn another squirrel they've been bickering with than one they haven't.
Humans are alone in their capacity to lie. Other animals can't fully do this and have to resort to tricks, such as a dog tucking its tail to try to keep the scent of its fear from escaping.
The chimp Vickie had the unfortunate task of being taught to speak. This 1950s research required her to make a controlled vocalization for anything she wanted, such as food or water - a difficult, if not impossible, task for her. Even worse, another researcher, Kellogg from Yale, thought it would be a great idea to raise their child Donald alongside a chimp named Gua, maintaining the same environment for both, on the theory that the chimp would learn language from Donald. Instead, Donald began mimicking the chimp.
Finally researchers caught on that chimps lack the physiological structures to speak English. So the next subject was Washoe, a chimp to whom the Gardners taught ASL. Washoe would babble in sign language, coined words like "waterbird," and stole plants and blamed others. She and another chimp, Bowie, communicated with each other, both signing "tickle me" until they got frustrated and walked away.
Next up: Penny Patterson and Koko the gorilla. Patterson got Koko on loan from the San Francisco Zoo, and the research began at Stanford, with Patterson teaching Koko ASL. Koko could report dreams and gossip. In one humorous anecdote, Koko ate a plant and, when questioned by Patterson, responded that "Bill ate it" (Bill was another person working on the project). Patterson told Koko that Bill couldn't have done it, that only gorillas do that - so Koko responded that some other gorilla ate the plant.
Around 1980, Herb Terrace at Columbia set out to prove Chomsky wrong and establish that chimps could, in fact, generate real language. So he got a chimp, named him Nim Chimpsky, and taught him ASL. A few years in, Terrace and colleagues published a landmark paper arguing that Nim was not producing language: he wasn't creating new words or getting word order right. Moreover, a fundamental element of language is that longer utterances carry more meaning, but Nim's longer sign sequences added nothing new - they were basically babbling - and his utterances were not spontaneous (they were responses produced for rewards like Fruit Loops). Terrace ran the other projects through the same tests and concluded that none of the other animals were passing them either, including Washoe and Koko.
Terrace and Patterson then engaged in a battle over the meaning of these findings. (Patterson had long since run off with Koko, leaving the San Francisco Zoo shy one gorilla.) At this point in time, the view argued by Terrace, and by extension Chomsky, remains the central one. But, of course, we keep looking, and a bonobo named Kanzi is now being taught language and may be showing some signs of actual language use. (Bonobos are highly "social" and may be better adapted for communication.)