Syntactic bootstrapping is based on the premise that there are probabilistic correspondences between the syntactic structure in which a word occurs and the word's meaning, and that such links hold, with some degree of generality, cross-linguistically. The procedure has been extensively discussed with respect to verbs, where it has been proposed as a mechanism for constraining the massive ambiguity that arises when inferring the meaning of a verb that is used to describe an event (Fisher, Hall, Rakowitz & Gleitman, 1994; Gleitman, 1990; Gleitman, Cassidy, Nappa, Papafragou & Trueswell, 2005). In her keynote paper (Hacquard, 2022), Hacquard focuses on classes of verbs for which inferences about meaning are arguably even harder, because they involve concepts that have no observable counterparts: attitude verbs and modals. She walks us through, in meticulous detail, the limits of a purely syntactic bootstrapping mechanism, and she describes how augmenting syntactic information with pragmatic information, via pragmatic syntactic bootstrapping (Hacquard, 2022; Hacquard & Lidz, 2019), might address these limitations.
Many events that humans and other species experience contain regularities in which certain elements within an event predict certain others. While some of these regularities involve tracking the co-occurrences between temporally adjacent stimuli, others involve tracking the co-occurrences between temporally distant stimuli (i.e., nonadjacent dependencies).
Seven-month-old infants can learn simple repetition patterns, such as we-fo-we, and generalize the rules to sequences of new syllables, such as ga-ti-ga. However, repetition rule learning in visual sequences seems more challenging, leading some researchers to claim that this type of rule learning applies preferentially to communicative stimuli. Here we demonstrate that 9-month-old infants can learn repetition rules in sequences of non-communicative dynamic human actions.
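To make the notion of a repetition rule concrete, here is a minimal sketch, not taken from the study, of what generalizing a pattern like we-fo-we to ga-ti-ga amounts to computationally: the learner abstracts away from the identity of the syllables and keeps only the pattern of repetition (ABA). The function name and the non-matching example are illustrative.

```python
# Minimal sketch (illustrative, not the study's model): map a sequence of items
# to its abstract identity pattern, so that repetition rules generalize to
# sequences made of entirely new syllables.

def pattern_of(sequence):
    """Map a sequence to its identity pattern, e.g. ['we', 'fo', 'we'] -> ('A', 'B', 'A')."""
    labels = {}
    pattern = []
    for item in sequence:
        if item not in labels:
            labels[item] = chr(ord('A') + len(labels))
        pattern.append(labels[item])
    return tuple(pattern)

# The training item and a novel test item share the ABA pattern,
# even though they share no syllables.
assert pattern_of(['we', 'fo', 'we']) == pattern_of(['ga', 'ti', 'ga']) == ('A', 'B', 'A')
# A sequence without the final repetition instantiates a different pattern.
assert pattern_of(['ga', 'ti', 'ti']) == ('A', 'B', 'B')
```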
Learning and processing natural language requires the ability to track syntactic relationships between words and phrases in a sentence, which are often separated by intervening material. These nonadjacent dependencies can be studied using artificial grammar learning paradigms and structured sequence processing tasks. These approaches have been used to demonstrate that human adults, infants and some nonhuman animals are able to detect and learn dependencies between nonadjacent elements within a sequence.
A large body of research has demonstrated that humans attend to adjacent co-occurrence statistics when processing sequential information, and bottom-up prosodic information can influence learning. In this study, we investigated how top-down grouping cues can influence statistical learning. Specifically, we presented English sentences that were structurally equivalent to each other, which induced top-down expectations of grouping in the artificial language sequences that immediately followed.
Much of the statistical learning literature has focused on adjacent dependency learning, which has shown that learners are capable of extracting adjacent statistics from continuous language streams. In contrast, studies on non-adjacent dependency learning have mixed results, with some showing success and others failure. We review the literature on non-adjacent dependency learning and examine various theories proposed to account for these results, including proposals that pauses in the learning stream are necessary, and proposals that adjacent and non-adjacent dependency learning compete, such that high variability of the middle elements benefits non-adjacent dependency learning.
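The following is a minimal toy sketch, not any of the reviewed models, of the adjacent versus non-adjacent contrast: in an A-X-B stream the first element reliably predicts the element two positions later, while the middle element varies freely, so adjacent transitional probabilities are diluted exactly where the non-adjacent ones are perfect. The syllables are invented for illustration.

```python
# Minimal toy sketch (invented syllables; not any of the reviewed models):
# build an A-X-B stream and compare transitional probabilities at adjacent
# (gap=1) and non-adjacent (gap=2) distances.
import random

random.seed(0)
a_b_pairs = [('pel', 'rud'), ('vot', 'jic')]        # non-adjacent dependencies
middles = ['wadim', 'kicey', 'puser', 'fengle']      # high-variability middle elements

stream = []
for _ in range(200):
    a, b = random.choice(a_b_pairs)
    stream += [a, random.choice(middles), b]

def transitional_prob(stream, first, second, gap):
    """P(second | first) at a distance of `gap` positions (gap=1 is adjacent)."""
    first_count = sum(1 for i in range(len(stream) - gap) if stream[i] == first)
    pair_count = sum(1 for i in range(len(stream) - gap)
                     if stream[i] == first and stream[i + gap] == second)
    return pair_count / first_count if first_count else 0.0

# Adjacent TP from 'pel' to any one middle element is diluted by variability (~0.25),
# whereas the non-adjacent TP from 'pel' to 'rud' two positions later is 1.0.
print(transitional_prob(stream, 'pel', 'wadim', gap=1))
print(transitional_prob(stream, 'pel', 'rud', gap=2))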
J Exp Psychol Learn Mem Cogn, April 2018
The structure of natural languages gives rise to many dependencies in the linear sequences of words, and within words themselves. Detecting these dependencies is arguably critical for young children in learning the underlying structure of their language. There is considerable evidence that human adults and infants are sensitive to the statistical properties of sequentially adjacent items.
J Exp Psychol Gen, December 2017
Because of the hierarchical organization of natural languages, words that are syntactically related are not always linearly adjacent. For example, the subject and verb in "the child always runs" agree in person and number, although they are not adjacent in the sequence of words. Since such dependencies are indicative of abstract linguistic structure, it is of significant theoretical interest how these relationships are acquired by language learners.
A critical part of infants' ability to acquire any language involves segmenting continuous speech input into discrete word forms. Certain properties of words could provide infants with reliable cues to word boundaries. Here we investigate the potential utility of vowel harmony (VH), a phonological property whereby vowels within a word systematically exhibit similarity ("harmony") for some aspect of the way they are pronounced.
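As a toy illustration of how vowel harmony could serve as a boundary cue, here is a minimal sketch under invented assumptions (the two-way front/back vowel split, the syllables, and the function names are not the study's materials): if vowels within a word agree in backness, then a backness mismatch between successive syllables is a candidate word boundary.

```python
# Minimal sketch (illustrative assumptions, not the study's model): posit a word
# boundary wherever adjacent syllables disagree in vowel backness.

FRONT, BACK = set('ie'), set('aou')   # toy two-way harmony classes

def vowel_class(syllable):
    """Classify a syllable by the first vowel it contains."""
    for ch in syllable:
        if ch in FRONT:
            return 'front'
        if ch in BACK:
            return 'back'
    return None

def harmony_boundaries(syllables):
    """Return indices i where a boundary is posited between syllables[i] and syllables[i+1]."""
    return [i for i in range(len(syllables) - 1)
            if vowel_class(syllables[i]) != vowel_class(syllables[i + 1])]

# A front-harmonic word followed by a back-harmonic word:
stream = ['ti', 'le', 'mi', 'ko', 'bu', 'na']
print(harmony_boundaries(stream))   # [2] -> boundary posited after the third syllable
```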
Word learning involves massive ambiguity, since in a particular encounter with a novel word, there are an unlimited number of potential referents. One proposal for how learners surmount the problem of ambiguity is that learners use cross-situational statistics to constrain the ambiguity: When a word and its referent co-occur across multiple situations, learners will associate the word with the correct referent. Yu and Smith (2007) propose that these co-occurrence statistics are sufficient for word-to-referent mapping.
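A minimal sketch of the kind of co-occurrence bookkeeping this proposal assumes follows; the words and scenes are invented for illustration and this is not Yu and Smith's actual model. Each situation is ambiguous on its own, but aggregating counts across situations singles out the correct word-referent pairings.

```python
# Minimal sketch (invented toy data, not Yu & Smith's model): count word-referent
# co-occurrences across situations and favor the most frequent pairing per word.
from collections import defaultdict

# Each situation pairs the words heard with the candidate referents in view;
# within any single situation the mapping is ambiguous.
situations = [
    ({'ball', 'dog'}, {'BALL', 'DOG'}),
    ({'ball', 'cup'}, {'BALL', 'CUP'}),
    ({'dog', 'cup'},  {'DOG', 'CUP'}),
]

cooccur = defaultdict(lambda: defaultdict(int))
for words, referents in situations:
    for w in words:
        for r in referents:
            cooccur[w][r] += 1

# Across situations, each word co-occurs with its correct referent more often
# than with any competitor, resolving the within-situation ambiguity.
for w, counts in cooccur.items():
    print(w, '->', max(counts, key=counts.get), dict(counts))
```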
Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.
In many languages, declaratives and interrogatives differ in word order properties, and in syntactic organization more broadly. Thus, in order to learn the distinct syntactic properties of the two sentence types, learners must first be able to distinguish them using non-syntactic information. Prosodic information is often assumed to be a useful basis for this type of discrimination, although no systematic studies of the prosodic cues available to infants have been reported.
Grammatical categories, such as noun and verb, are the building blocks of syntactic structure and the components that govern the grammatical patterns of language. However, in many languages words are not explicitly marked with their category information, hence a critical part of acquiring a language is categorizing the words. Computational analyses of child-directed speech have shown that distributional information (information about how words pattern with one another in sentences) could be a useful source of initial category information.
Nonaccidental properties (NAPs) are image properties that are invariant over orientation in depth and allow facile recognition of objects at varied orientations. NAPs are distinguished from metric properties (MPs) that generally vary continuously with changes in orientation in depth. While a number of studies have demonstrated greater sensitivity to NAPs in human adults, pigeons, and macaque IT cells, the few studies that investigated sensitivities in preschool children did not find significantly greater sensitivity to NAPs.
Front Psychol, February 2013
In most human languages, important components of linguistic structure are carried by affixes, also called bound morphemes. The affixes in a language comprise a relatively small but frequently occurring set of forms that surface as parts of words, but never occur without a stem. They combine productively with word stems and other grammatical entities in systematic and predictable ways.
Mintz (2003) described a distributional environment called a frame, defined as the co-occurrence of two context words with one intervening target word. Analyses of English child-directed speech showed that words that fell within any frequently occurring frame consistently belonged to the same grammatical category (e.g., noun or verb).
Over the past couple of decades, research has established that infants are sensitive to the predominant stress pattern of their native language. However, the degree to which the stress pattern shapes infants' language development has yet to be fully determined. Whether stress is merely a cue that helps organize the patterns of speech, or an important part of the representation of speech sound sequences, remains to be explored.
Dev Psychol, January 2005
Two hundred forty English-speaking toddlers (24- and 36-month-olds) heard novel adjectives applied to familiar objects (Experiment 1) and novel objects (Experiment 2). Children were successful in mapping adjectives to target properties only when information provided by the noun, in conjunction with participants' knowledge of the objects, provided coherent category information: when basic-level nouns or superordinate-level nouns were used with familiar objects, when novel basic-level nouns were used with novel objects, and, for 36-month-olds, when the nouns were underspecified with respect to category ("thing" or "one") but participants could nonetheless infer a category from pragmatic and conceptual knowledge. These results provide evidence concerning how nouns influence adjective learning, and they support the notion that toddlers consider pragmatic factors when learning new words.
Cognition, November 2003
This paper introduces the notion of frequent frames, distributional patterns based on the co-occurrence of words in sentences, and then investigates the usefulness of this information in grammatical categorization. A frame is defined as two jointly occurring words with one word intervening. Qualitative and quantitative results from distributional analyses of six different corpora of child-directed speech are presented in two experiments.
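To illustrate the definition, here is a minimal sketch of frequent-frame extraction: every pair of words separated by exactly one word defines a frame, and the intervening words collected by a frequently occurring frame form a candidate category. The toy corpus and the frequency threshold are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (toy corpus and threshold are illustrative): collect the
# intervening words for each frame (two jointly occurring words with one word
# between them) and inspect the frames that occur frequently.
from collections import defaultdict

corpus = [
    "you want to eat it".split(),
    "you want to go now".split(),
    "you have to eat it".split(),
    "do you want to play".split(),
]

frames = defaultdict(list)
for utterance in corpus:
    for i in range(len(utterance) - 2):
        frame = (utterance[i], utterance[i + 2])   # the two context words
        frames[frame].append(utterance[i + 1])     # the intervening target word

# Keep only frames above an arbitrary frequency threshold; the words gathered
# by such a frame tend to share a grammatical category (here, the 'you __ to'
# frame collects verbs).
for frame, targets in frames.items():
    if len(targets) >= 3:
        print(frame, targets)
```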
The ability to identify the grammatical category of a word (e.g., noun, verb, adjective) is a fundamental aspect of competence in a natural language.
By 24 months, most children spontaneously and correctly use adjectives. Yet prior laboratory research on lexical acquisition in young children reports that children up to 3 years old map novel adjectives to object properties only in very limited situations (Child Development 59 (1988) 411; Child Development 64 (1993) 1651; Child Development 71 (2000) 649; Developmental Psychology 36 (2000) 571; Child Development 69 (1998) 1313). In Experiments 1 and 2 we introduced 36-month-olds (Experiment 1) and 24-month-olds (Experiment 2) to novel adjectives while providing rich referential and syntactic information to indicate what the novel words mean.