Children learn a variety of verbs for hand actions starting in their second year of life. The semantic distinctions can be subtle, and they vary across languages, yet they are learned quickly. How is this possible? The hypothesis explored in this talk is that explaining the acquisition and use of action verbs requires taking motor control into account.

In particular, I'll present a model--based on the principles of neural computation in general and of the human motor system in particular--that takes a set of (action, verb) pairings and learns both to label novel actions and to obey verbal commands. A key feature of the model is the executing schema, an active controller mechanism that, by actually driving behavior, allows the model to carry out verbal commands. A hard-wired mechanism links the activity of executing schemas to a set of features that have proven linguistically relevant, including hand posture, joint motions, force, aspect, and goals. The feature set is relatively small and fixed, which helps make learning tractable.

Moreover, the use of traditional feature structures facilitates model merging, a Bayesian probabilistic learning algorithm with a number of desirable properties: rapid learning of plausible word meanings, automatic determination of an appropriate number of senses for each verb, and a plausible mapping to a connectionist recruitment-learning architecture. The learning algorithm is demonstrated on a handful (so to speak) of English verbs, and it also proves capable of making some interesting distinctions found crosslinguistically. I hope to present the technical ideas in a way that is accessible to the general cognitive science community.
By David Bailey (email@example.com)
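To give a flavor of the merging idea described above, here is a minimal sketch, not the talk's actual algorithm: each verb starts with one "sense" per observed (action, verb) example, where an action is a small dictionary of discrete features (posture, force, etc.), and similar senses are greedily merged whenever the merge shortens a simple description length. The real model merging uses a Bayesian posterior rather than this toy cost, and the feature names and function names here are illustrative assumptions.

```python
# Toy model-merging sketch over verb feature structures.
# Assumptions (not from the talk): actions are dicts of discrete feature
# values; a sense stores a set of allowed values per feature; the merge
# score is a crude description-length delta, standing in for a Bayesian
# posterior that trades data fit against the number of senses.

from itertools import combinations

def merge(a, b):
    """Merge two senses by unioning the allowed values of each feature."""
    return {f: a.get(f, set()) | b.get(f, set()) for f in set(a) | set(b)}

def size(sense):
    """Description length of one sense: total feature values it allows."""
    return sum(len(values) for values in sense.values())

def learn_senses(examples):
    """Start with one sense per example, then greedily apply the single
    best merge until no merge reduces total description length."""
    senses = [{f: {v} for f, v in ex.items()} for ex in examples]
    while len(senses) > 1:
        best = None
        for i, j in combinations(range(len(senses)), 2):
            merged = merge(senses[i], senses[j])
            delta = size(merged) - size(senses[i]) - size(senses[j])
            if delta < 0 and (best is None or delta < best[0]):
                best = (delta, i, j, merged)
        if best is None:
            break  # no merge improves the score; stop
        _, i, j, merged = best
        senses = [s for k, s in enumerate(senses) if k not in (i, j)]
        senses.append(merged)
    return senses

# Two flat-palm actions differing only in force collapse into one sense;
# the index-finger action stays a separate sense because merging it in
# would not shorten the description.
examples = [
    {"posture": "palm", "force": "high"},
    {"posture": "palm", "force": "medium"},
    {"posture": "index", "force": "low"},
]
senses = learn_senses(examples)
```

Under this scoring, the model automatically settles on two senses for the toy data, illustrating (in miniature) how merging can determine an appropriate number of senses per verb.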