Inferential Transmission of Language [notes on cognitive science]

11 May 2009

Kenny Smith is one of my computational linguistics professors at Edinburgh University. He is great at guiding you through complex mathematical concepts related to language, with a northern accent and a laid-back attitude. In his series of papers on linguistic models of language acquisition, he covers a very wide range of issues in the evolution and transfer of language through cultural interaction, aided by a handful of learning biases. He has built models of language generation, acquisition, maintenance and creation, and these proved ecologically relevant, since they predicted (or matched) the findings of empirical studies in developmental psychology. In his 2005 paper “Inferential Transmission of Language”, he explores the very interesting issue of lexical acquisition in infants and provides computational models to simulate it. This article is a short, read-me-before-the-exam synopsis and evaluation of the main issues Kenny brings forward.

First of all, it is important to establish what we mean when we utter a word. Smith uses an example from Quine (1960), where an anthropologist sees a native pointing at a rabbit whilst uttering “gavagai”. This could mean anything: RABBIT, ANIMAL, DINNER, YOU-LOOK-LIKE-HIM, UNDETACHED RABBIT PARTS, or any of an infinite number of alternatives. And yet children turn out to learn novel words almost instantaneously (i.e. after a single instance of usage), from very sparse and noisy input. There are three psychological accounts of how children overcome this uncertainty: representational, interpretational and social constraints.

Representational constraints arise from the fact that children usually represent their environment in terms of whole objects, and so they assume that any new word will refer to a whole object. Further investigation suggests that children also have innate biases that drive their attention towards specific features of objects, such as shape or texture, according to how useful each feature is for drawing perceptual and categorial distinctions.

Interpretational constraints reflect the fact that, at some point in development, representational biases have to be overcome. Markman put forward the assumption of mutual exclusivity, under which words do not share referents; this is relied on heavily in early lexical acquisition and fades away fairly quickly. Clark proposes the notion of contrast, whereby children assign novel words to gaps in their lexicon, coining new words if necessary.

Finally, social constraints have been boxed up in a common theory of mind. Children as young as one year old start recognising adults as intentional agents, following their gaze and their pointing. They can choose to attend to specific objects or situations, and they can share joint attention with an adult, which is likely to reduce the ambiguity involved in interpreting a signal. According to Tomasello, this drives cognitive development in an iterative way and makes linguistic symbols necessary.

Kenny goes on to explore the implications of modelling the inference of meaning. First, he explains the concept of a language organ composed of Universal Grammar and the Language Acquisition Device (from Chomsky). This framework claims that the language organ evolved incrementally under the pressure of natural selection. He then considers an alternative view based on cultural transmission: the Expression/Induction models (also known as Iterated Learning Models). In this framework, individuals express external linguistic behaviour based on their internal representations and induce internal representations from the external behaviour of others, which means that individuals are always learning language from others in their community. The key ingredient is a transmission bottleneck, which limits the amount of linguistic experience any individual is exposed to.

But how do we infer meaning? Are meanings merely pre-defined entities encapsulated within signals? If meanings are transferred telepathically, so that the hearer already knows what the speaker intends, then, he argues, the signals themselves cannot be said to convey any meaning. And if signals are devoid of meaning, their existence is redundant and speakers are wasting energy producing them, which is hard to square with natural selection! Such a view also leaves the observed signal redundancy and semantic variability unexplained.

In his paper, Smith introduces an E/I model which addresses most of the issues raised above. It features inference of meaning through experience and comprises an external world, agent-specific internal representations and a set of publicly transmittable signals. The world of the model contains objects described by feature vectors of numbers in the range [0,1]. Agents categorise objects in terms of these features through discrimination games, in which they are prompted to differentiate one object from a “context set” by searching for a distinctive category that singles out the target. Failure triggers meaning creation. Once a meaning representation exists, agents communicate about the objects using the category chosen in the discrimination game. The speaker chooses the signal that would be easiest for the hearer to interpret given the current context and its own semantic interpretation, and transmits the signal along with the context. The hearer interprets the signal and learns its meaning solely from the current context and previous experience: it plays the discrimination game for the received signal, creates a list of semantic hypotheses, assigns each a probability, and chooses the most probable meaning. The model shows that lexical acquisition can take place successfully without explicit feedback, by using cross-situational statistical learning.
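To make the mechanics concrete, here is a minimal Python sketch of a discrimination game. It rests on simplifications of my own: categories are plain intervals over single features and meaning creation just halves a feature's range around the target, whereas the paper specifies its own, richer representation.

```python
import random

class Agent:
    """Toy agent whose categories are (feature, lower, upper) intervals.
    A simplification for illustration, not Smith's exact representation."""

    def __init__(self, n_features):
        self.n_features = n_features
        # Start with one maximally general category per feature.
        self.categories = [(f, 0.0, 1.0) for f in range(n_features)]

    def matches(self, category, obj):
        feature, lower, upper = category
        return lower <= obj[feature] <= upper

    def discriminate(self, target, context):
        """Return a category true of the target and of no context object,
        or None if no distinctive category exists (failure)."""
        for cat in self.categories:
            if self.matches(cat, target) and not any(
                self.matches(cat, other) for other in context
            ):
                return cat
        return None

    def create_meaning(self, target):
        """Meaning creation on failure: refine a random feature by
        halving its range around the target's value."""
        feature = random.randrange(self.n_features)
        lower, upper = (0.0, 0.5) if target[feature] < 0.5 else (0.5, 1.0)
        category = (feature, lower, upper)
        if category not in self.categories:
            self.categories.append(category)
        return category


agent = Agent(n_features=2)
target, context = [0.2, 0.9], [[0.8, 0.85], [0.7, 0.1]]
category = agent.discriminate(target, context)
if category is None:                  # failure triggers meaning creation
    category = agent.create_meaning(target)
print(category)
```

Played repeatedly over different targets and contexts, games like this gradually refine an agent's categories until most targets can be singled out, which is the semantic substrate the signals then get attached to.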

Cross-situational statistical learning is based on the co-occurrence of words and their inferred meanings. Uncertainty of reference is reduced over successive discrimination games by taking into account the previously experienced co-occurrences of words and features. This provides a robust account of lexical acquisition, and studies have indeed shown that children use cross-situational learning to disambiguate word reference.
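The mechanism is easy to illustrate: keep co-occurrence counts between each word and the candidate meanings present whenever it is heard, and let the meaning that co-occurs consistently win out. The sketch below is a generic illustration of the technique, not Smith's specific update rule.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Keeps word-meaning co-occurrence counts and interprets a word
    as its most frequently co-occurring candidate meaning."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, word, candidate_meanings):
        # Every meaning consistent with the current context gets credit;
        # only the true referent keeps co-occurring across situations.
        for meaning in candidate_meanings:
            self.counts[word][meaning] += 1

    def interpret(self, word):
        if not self.counts[word]:
            return None
        return max(self.counts[word], key=self.counts[word].get)


learner = CrossSituationalLearner()
learner.observe("gavagai", {"RABBIT", "ANIMAL", "DINNER"})
learner.observe("gavagai", {"RABBIT", "GRASS", "TREE"})
learner.observe("gavagai", {"RABBIT", "ANIMAL", "SKY"})
print(learner.interpret("gavagai"))  # -> "RABBIT"
```

No explicit feedback is ever given; the ambiguity of each individual exposure is washed out by the statistics of many exposures, which is exactly the point of the model.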

Smith then explores how language change is driven by variation within language communities. He identifies two sources of variation: conceptual and lexical. Conceptual variation can be expressed as the similarity between two conceptual trees (the agents' internal semantic representations of concepts). Lexical variation can be measured by checking whether agents have the same preferred word for each meaning. Children might, for instance, assign a concept to a node closer to the root of the conceptual tree, producing a change called generalisation. Lexical items persist if they are successfully learnt, and persistence is useful for measuring linguistic change across generations.
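For the lexical side, a toy version of the measure might look like the following, assuming (my assumption, for illustration) that each agent is reduced to a dictionary from meanings to its currently preferred word; the conceptual-tree similarity measure is not shown.

```python
def lexical_difference(agent_a, agent_b, meanings):
    """Fraction of meanings for which two agents prefer different words.
    agent_a and agent_b are dicts from meaning to preferred word."""
    differing = sum(1 for m in meanings if agent_a.get(m) != agent_b.get(m))
    return differing / len(meanings)


adult   = {"RABBIT": "gavagai", "TREE": "boko", "SKY": "lum"}
learner = {"RABBIT": "gavagai", "TREE": "miku", "SKY": "lum"}
print(lexical_difference(adult, learner, ["RABBIT", "TREE", "SKY"]))  # ~0.33
```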

In order to measure linguistic change across generations, Smith extends the basic inferential model into an iterated learning model with generational turnover. Each generation has an orientation (learning) phase followed by a communication phase. At the end of each generation, one individual is removed at random and replaced by a new learner, who has to acquire the language by communicating with an adult over five communicative episodes. The experiments showed that while language change occurs very rapidly, communicative success is seldom affected.
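The outer loop of such a model might look like the sketch below; make_agent, teach and communicate are placeholder callables I introduce for illustration, standing in for the full machinery of meaning creation, signal choice and hearer inference described above.

```python
import random

def iterated_learning(population, make_agent, teach, communicate,
                      generations, episodes=5):
    """Outer loop of the iterated-learning extension with generational
    turnover; only the population dynamics are sketched here."""
    for _ in range(generations):
        # Generational turnover: remove one individual at random
        # and replace it with a fresh learner.
        population.remove(random.choice(population))
        learner = make_agent()
        # Orientation (learning) phase: the learner is exposed to an adult
        # over a small fixed number of episodes -- the transmission bottleneck.
        teacher = random.choice(population)
        for _ in range(episodes):
            teach(teacher, learner)
        population.append(learner)
        # Communication phase: random pairs communicate.
        for _ in range(len(population)):
            speaker, hearer = random.sample(population, 2)
            communicate(speaker, hearer)
    return population


# Trivial stand-ins just to show the call; real agents would carry the
# lexicon and conceptual structure described above.
pop = iterated_learning(
    population=[object() for _ in range(10)],
    make_agent=object,
    teach=lambda teacher, learner: None,
    communicate=lambda speaker, hearer: None,
    generations=50,
)
```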

So why isn't communication affected if language changes so rapidly? The answer lies in its inferential nature, which also tells us something about how communication began in the first place. As Burling points out, the first communicative episode was not triggered by a speaker producing an intentional signal, but by a hearer interpreting some behaviour as a signal. Smith holds that the existence of communication is defined by interpretative intent; the signal need not even be intentional. Someone giving a cry of excitement on finding a strawberry provides information to others, and when a hearer associates the cry with the presence of the fruit, communication is founded.

Once inference was in place, we started becoming better hearers: our interpretative capabilities became crucial for survival, and this inferential nature allowed language to evolve. This seems more plausible than a code-based approach. If we see language as a code that both parties must share, it is hard to conceive how evolution could have gradually improved it. An inference-based approach allows context to be taken into account when communication breaks down. More complex meanings for the same signal can arise, and if they benefit the people who understand them, the capacity to infer more complex structures becomes evolutionarily advantageous. This is a very nice way to explain the emergence of language in our species.

So, language is a culturally transmitted system of communication based on inference of meaning. Smith presents very convincing arguments paired with a very plausible model that agrees with the psychological accounts of language development. The inherent uncertainty of meaning inference leads to variation in both conceptual and lexical structure, changing very rapidly across generations while retaining communicative utility. Language changes within a few generations, yet everyone alive at any given time understands everyone they can talk to. This view also lets us explain how language originated: by someone inferring, from a potentially involuntary action, the existence of a beneficial situation.

This guy’s a genius.
