Publications by authors named "Ray Jackendoff"

The Parallel Architecture is a conception of the organization of the mental representations involved in language and of the role of language in the mind as a whole. Its basic premise is that linguistic representations draw on three independent generative systems--phonological, syntactic, and semantic structures--plus a system of interface links by which they communicate with each other. In particular, words serve as partial interface links that govern the way they compose into novel sentences.
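
The premise above can be sketched in code. This is a minimal, purely illustrative construal (not the article's formalism): a lexical item is modeled as a linked triple of phonological, syntactic, and semantic structure, and composition proceeds in all three structures in parallel. All names and representations here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LexicalItem:
    """A word as a partial interface link: a triple of linked structures."""
    phonology: str   # placeholder for a phonological structure
    syntax: str      # placeholder for a syntactic category/structure
    semantics: str   # placeholder for a conceptual structure

def compose(head: LexicalItem, dependent: LexicalItem) -> LexicalItem:
    """Compose two items into a novel phrase by building all three
    structures at once, keeping the interface links aligned."""
    return LexicalItem(
        phonology=f"{head.phonology} {dependent.phonology}",
        syntax=f"[{head.syntax} {dependent.syntax}]",
        semantics=f"{head.semantics}({dependent.semantics})",
    )

eat = LexicalItem("eat", "V", "EAT")
apples = LexicalItem("apples", "N", "APPLES")
vp = compose(eat, apples)
print(vp.phonology)  # eat apples
print(vp.syntax)     # [V N]
print(vp.semantics)  # EAT(APPLES)
```

The point of the sketch is only that no single structure is derived from another: phonology, syntax, and semantics are built side by side and kept in correspondence by the links the words carry.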

A recent trend in psycholinguistic research has been to posit prediction as an essential function of language processing. The present paper develops a linguistic perspective on prediction, viewing it in terms of pre-activation. We describe what predictions are and how they are produced.

Relational morphology (RM) is a novel approach to word structure that bears a close relation to construction grammar (CxG). Based on the parallel architecture framework, its basic question is: what linguistic entities are stored in long-term memory, and in what form? Like CxG, RM situates the "rules of grammar" in an extended lexicon, right along with words, multiword expressions such as idioms and collocations, and meaningful syntactic constructions. However, its notion of schema enriches CxG's notion of construction in a number of respects, including (a) the possibility of purely formal schemas that lack meaning, (b) a more precise way of specifying relations among lexical items than standard inheritance, (c) the possibility of "horizontal" relations between individual words and between schemas, (d) a clearer characterization of the distinction between productive and nonproductive phenomena, and (e) more explicit integration with theories of language processing and of other domains of cognition.

We investigate how predicates expressing symmetry, asymmetry and non-symmetry are encoded in a newly emerging sign language, Central Taurus Sign Language (CTSL). We find that predicates involving symmetry (i.e.

Framed in psychological terms, the basic question of linguistic theory is what is stored in memory, and in what form. Traditionally, what is stored is divided into grammar and lexicon, where grammar contains the rules and the lexicon is an unstructured list of exceptions. We develop an alternative view in which rules of grammar are simply lexical items that contain variables, and in which rules have two functions.
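
The idea that rules are lexical items containing variables can be illustrated with a toy sketch (my construal, not the authors' formalism; all names are invented). Here the word "cat" and the regular plural schema sit side by side in one extended lexicon, and "applying the rule" is just binding the schema's variable to a stored item.

```python
# One extended lexicon: words and schemas stored in the same format.
lexicon = {
    "cat":    {"phon": "cat", "sem": "CAT"},
    "plural": {"phon": "X-s", "sem": "PLURAL(X)"},  # a schema: item with variable X
}

def apply_schema(schema: dict, item: dict) -> dict:
    """Generative use of a schema: bind its variable to a stored item,
    yielding a novel (or stored) complex word."""
    return {
        "phon": schema["phon"].replace("X", item["phon"]),
        "sem": schema["sem"].replace("X", item["sem"]),
    }

cats = apply_schema(lexicon["plural"], lexicon["cat"])
print(cats)  # {'phon': 'cat-s', 'sem': 'PLURAL(CAT)'}
```

The same schema can also serve a second, relational function: rather than generating a new form, it can simply capture the pattern shared by items already stored in the lexicon.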

There is ample evidence that speakers' linguistic knowledge extends well beyond what can be described in terms of rules of compositional interpretation stated over combinations of single words. We explore a range of multiword constructions (MWCs) to get a handle both on the extent of the phenomenon and on the grammatical constraints that may govern it. We consider idioms of various sorts, collocations, compounds, light verbs, syntactic nuts, and assorted other constructions, as well as morphology.

We suggest that one way to approach the evolution of language is through reverse engineering: asking what components of the language faculty could have been useful in the absence of the full complement of components. We explore the possibilities offered by linear grammar, a form of language that lacks syntax and morphology altogether, and that structures its utterances through a direct mapping between semantics and phonology. A language with a linear grammar would have no syntactic categories or syntactic phrases, and therefore no syntactic recursion.
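
A linear grammar of the kind described above can be sketched as a toy program (purely illustrative; the mapping rule is an invented example). With no syntactic categories or phrases, interpretation relies on a direct mapping between linear order and semantic roles, e.g. "agent precedes action precedes patient".

```python
def interpret(utterance: str) -> dict:
    """Map a word string directly to a semantic frame using linear
    order alone: first word = agent, second = action, third = patient.
    No syntactic categories, phrases, or recursion are involved."""
    words = utterance.split()
    frame = {"agent": words[0], "action": words[1]}
    if len(words) > 2:
        frame["patient"] = words[2]
    return frame

print(interpret("dog chase cat"))
# {'agent': 'dog', 'action': 'chase', 'patient': 'cat'}
```

Even this impoverished system supports communicatively useful utterances, which is what makes it a plausible intermediate stage for reverse-engineering the evolution of the full language faculty.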

Formal theories of mental representation have receded from the importance they had in the early days of cognitive science. I argue that such theories are crucial in any mental domain, not just for their own sake, but to guide experimental inquiry, as well as to integrate the domain into the mind as a whole. To illustrate the criteria of adequacy for theories of mental representation, I compare two theoretical approaches to language: classical generative grammar (Chomsky, 1965, 1981, 1995) and the parallel architecture (Jackendoff, 1997, 2002).

Constituent structure has long been established as a central feature of human language. Analogous to how syntax organizes words in sentences, a narrative grammar organizes sequential images into hierarchic constituents. Here we show that the brain draws upon this constituent structure to comprehend wordless visual narratives.

We used event-related potentials (ERPs) to investigate the neurocognitive mechanisms associated with processing light verb constructions such as "give a kiss". These constructions consist of a semantically underspecified light verb ("give") and an event nominal that contributes most of the meaning and also activates an argument structure of its own ("kiss"). This creates a mismatch between the syntactic constituents and the semantic roles of a sentence.

The verb "pounce" describes a single, near-instantaneous event. Yet we easily understand that "For several minutes the cat pounced…" describes a situation in which multiple pounces occurred, although this interpretation is not overtly specified by the sentence's syntactic structure or by any of its individual words--a phenomenon known as "aspectual coercion." Previous psycholinguistic studies have reported processing costs in association with aspectual coercion, but the neurocognitive mechanisms giving rise to these costs remain contentious.

David Marr's metatheory emphasized the importance of what he called the computational level of description--an analysis of the task the visual system performs. In the present article I argue that this task should be conceived of not just as object recognition but as spatial understanding, and that the mental representations responsible for spatial understanding are not exclusively visual in nature. In particular, a theory of the visual system must interact with a theory of the language faculty to explain how we talk about what we see--and how we see all the things we talk about as though they are part of the perceived world.

Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: (1) Normal sequences with both structure and meaning, (2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), (3) Structural Only sequences (narrative structure but no semantic relatedness), and (4) Scrambled sequences of randomly-ordered panels.

This study examined the electrophysiological correlates of complement coercion. ERPs were measured as participants read and made acceptability judgments about plausible coerced sentences, plausible noncoerced sentences, and highly implausible animacy-violated sentences ("The journalist began/wrote/astonished the article before his coffee break"). Relative to noncoerced complement nouns, the coerced nouns evoked an N400 effect.

This article sketches the Parallel Architecture, an approach to the structure of grammar that contrasts with mainstream generative grammar (MGG) in that (a) it treats phonology, syntax, and semantics as independent generative components whose structures are linked by interface rules; (b) it uses a parallel constraint-based formalism that is nondirectional; (c) it treats words and rules alike as pieces of linguistic structure stored in long-term memory. In addition to the theoretical advantages offered by the Parallel Architecture, it lends itself to a direct interpretation in processing terms, in which pieces of structure stored in long-term memory are assembled in working memory, and alternative structures are in competition. The resulting model of processing is compared both with processing models derived from MGG and with lexically driven connectionist architectures.
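
The processing interpretation mentioned above — stored pieces of structure assembled in working memory, with alternative structures in competition — can be caricatured in a few lines (an invented illustration, not the article's model). Each candidate analysis is assembled from stored pieces and carries an activation score; the best-supported analysis wins the competition.

```python
# Hypothetical candidate analyses for an ambiguous string, each built
# from pieces retrieved from long-term memory into working memory.
candidates = [
    {"analysis": "idiom reading",   "pieces": ["kick the bucket (idiom)"], "score": 0.8},
    {"analysis": "literal reading", "pieces": ["kick (V)", "the", "bucket (N)"], "score": 0.6},
]

# Competition: the analysis with the most support wins.
winner = max(candidates, key=lambda c: c["score"])
print(winner["analysis"])  # idiom reading
```

The scores here are placeholders; in a real model they would reflect frequency, context, and goodness of fit across the parallel structures.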

What roles do syntax and semantics have in the grammar of a language? What are the consequences of these roles for syntactic structure, and why does it matter? We sketch the Simpler Syntax Hypothesis, which holds that much of the explanatory role attributed to syntax in contemporary linguistics is properly the responsibility of semantics. This rebalancing permits broader coverage of empirical linguistic phenomena and promises a tighter integration of linguistic theory into the cognitive scientific enterprise. We suggest that the general perspective of the Simpler Syntax Hypothesis is well suited to approaching language processing and language evolution, and to computational applications that draw upon linguistic insights.

We explore the capacity for music in terms of five questions: (1) What cognitive structures are invoked by music? (2) What are the principles that create these structures? (3) How do listeners acquire these principles? (4) What pre-existing resources make such acquisition possible? (5) Which aspects of these resources are specific to music, and which are more general? We examine these issues by looking at the major components of musical organization: rhythm (an interaction of grouping and meter), tonal organization (the structure of melody and harmony), and affect (the interaction of music with emotion). Each domain reveals a combination of cognitively general phenomena, such as gestalt grouping principles, harmonic roughness, and stream segregation, with phenomena that appear special to music and language, such as metrical organization. These are subtly interwoven with a residue of components that are devoted specifically to music, such as the structure of tonal systems and the contours of melodic tension and relaxation that depend on tonality.

We examine the question of which aspects of language are uniquely human and uniquely linguistic in light of recent suggestions by Hauser, Chomsky, and Fitch that the only such aspect is syntactic recursion, the rest of language being either specific to humans but not to language (e.g. words and concepts) or not specific to humans (e.

The goal of this study is to reintegrate the theory of generative grammar into the cognitive sciences. Generative grammar was right to focus on the child's acquisition of language as its central problem, leading to the hypothesis of an innate Universal Grammar. However, generative grammar was mistaken in assuming that the syntactic component is the sole source of combinatoriality, and that everything else is "interpretive.
