We present a new approach to online incremental word acquisition and grammar learning by humanoid robots. Using no data set provided in advance, the proposed system grounds language in a physical context, as mediated by its perceptual capacities. Learning is carried out through show-and-tell procedures in interaction with a human partner, and the procedure is open-ended with respect to new words and multiword utterances. These capabilities are supported by a self-organizing incremental neural network, which performs online unsupervised classification and topology learning. Endowed with mental imagery, the system also learns, through both top-down and bottom-up processes, the syntactic structures contained in utterances, thereby performing simple grammar learning. Under this multimodal scheme, the robot can describe a given physical context (both static and dynamic) online through natural-language expressions, and it can perform actions through verbal interaction with its human partner.
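The self-organizing incremental neural network mentioned in the abstract performs online unsupervised classification and topology learning. The Python sketch below illustrates the general flavour of such a component under simplified assumptions; it is not the published algorithm, and the class name, bootstrap rule, similarity threshold, and the age_max and adapt_rate parameters are all illustrative choices.

```python
# A minimal sketch of SOINN-style online unsupervised classification and
# topology learning.  It is NOT the authors' implementation; the class name,
# bootstrap rule, and age_max / adapt_rate parameters are assumptions.
import numpy as np


class IncrementalTopologyLearner:
    """Grows prototype nodes and edges online from a stream of input vectors."""

    def __init__(self, age_max=50, adapt_rate=0.05):
        self.nodes = []      # prototype vectors, one per emerging category
        self.edges = {}      # (i, j) with i < j -> edge age
        self.age_max = age_max
        self.adapt_rate = adapt_rate

    def _nearest_two(self, x):
        # Indices of the winner and second winner, plus the winner's distance.
        d = [np.linalg.norm(x - w) for w in self.nodes]
        order = np.argsort(d)
        return int(order[0]), int(order[1]), d[order[0]]

    def _threshold(self, i):
        # Similarity threshold of node i: distance to its farthest neighbour
        # in the current topology (infinite if it has no neighbours yet).
        neighbours = [b if a == i else a
                      for (a, b) in self.edges if i in (a, b)]
        if not neighbours:
            return np.inf
        return max(np.linalg.norm(self.nodes[i] - self.nodes[j])
                   for j in neighbours)

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) < 2:              # bootstrap with the first inputs
            self.nodes.append(x)
            return
        w1, w2, d1 = self._nearest_two(x)
        if d1 > self._threshold(w1):         # novel input: insert a new node
            self.nodes.append(x)
            return
        # Familiar input: adapt the winner toward it (online classification) ...
        self.nodes[w1] += self.adapt_rate * (x - self.nodes[w1])
        # ... age the edges incident to the winner, dropping stale ones ...
        for e in list(self.edges):
            if w1 in e:
                self.edges[e] += 1
                if self.edges[e] > self.age_max:
                    del self.edges[e]
        # ... and (re)connect the two winners (topology learning).
        self.edges[(min(w1, w2), max(w1, w2))] = 0


# Example: feed two well-separated 2-D clusters; nodes should emerge
# around both of them.
learner = IncrementalTopologyLearner()
rng = np.random.default_rng(0)
for _ in range(500):
    center = rng.choice([0.0, 5.0])
    learner.observe(center + 0.1 * rng.standard_normal(2))
print(len(learner.nodes), "nodes,", len(learner.edges), "edges")
```

In a system of the kind described in the abstract, the emergent node clusters of such a network would serve as the perceptual categories to which words and utterances are grounded.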
DOI: http://dx.doi.org/10.1109/tsmcb.2007.903447