Publications by authors named "Jakub Szymanik"

In this paper, we investigate, by means of a computational model, how individuals map quantifiers onto numbers and how they order quantifiers on a mental line. We selected five English quantifiers (few, fewer than half, many, more than half, and most) that differ in truth conditions and vagueness. We collected binary truth value judgment data in an online quantifier verification experiment.
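
To make the modeling idea concrete, below is a minimal sketch (not the authors' model) of fitting a probabilistic threshold to binary truth-value judgments for a single quantifier; the judgment data, the logistic form, and the grid search are illustrative assumptions only.

```python
import math

# Hypothetical judgments: (proportion of target items, judged "true"?)
# for a quantifier like "most"; the data are made up for illustration.
judgments = [(0.2, 0), (0.35, 0), (0.45, 0), (0.5, 0), (0.55, 1),
             (0.6, 1), (0.7, 1), (0.8, 1), (0.9, 1)]

def neg_log_likelihood(threshold, slope):
    """Logistic threshold model: P(true) = sigmoid(slope * (p - threshold))."""
    nll = 0.0
    for p, judged_true in judgments:
        prob_true = 1.0 / (1.0 + math.exp(-slope * (p - threshold)))
        prob = prob_true if judged_true else 1.0 - prob_true
        nll -= math.log(max(prob, 1e-12))
    return nll

# Grid search over candidate thresholds and slopes (crude but transparent).
best = min(((t / 100, s, neg_log_likelihood(t / 100, s))
            for t in range(1, 100) for s in (5, 10, 20, 40)),
           key=lambda x: x[2])
print(f"estimated threshold={best[0]:.2f}, slope={best[1]}")
```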

Human languages vary in terms of which meanings they lexicalize, but this variation is constrained. It has been argued that languages are under two competing pressures: the pressure to be simple and the pressure to be informative.

One approach to understanding how the human cognitive system stores and operates with quantifiers such as "some," "many," and "all" is to investigate their interaction with the cognitive mechanisms for estimating and comparing quantities from perceptual input (i.e., nonsymbolic quantities).

According to the Language of Thought Hypothesis (LoTH), an influential account in philosophy and cognitive science, human cognition is underpinned by symbolic reasoning in a formal language. On this account, concepts are expressions in a Language of Thought (LoT), deduction is syntactic manipulation in this language, and learning is the inference of expressions in this language from data. This picture raises the question of which LoT humans have, and how to infer it from behavior.
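
As a toy illustration of this picture, the sketch below treats learning as inference of an expression in a tiny, made-up concept language, scored by a simplicity prior and fit to labeled examples; the feature names, grammar, and data are invented and are not the paper's method.

```python
from itertools import product

# Toy objects described by two binary features; labels define a target concept.
# All names and data here are illustrative assumptions.
examples = [((1, 1), True), ((1, 0), True), ((0, 1), False), ((0, 0), False)]

# A tiny "language of thought": expressions built from feature tests and connectives.
def f0(x): return x[0] == 1
def f1(x): return x[1] == 1

primitives = {"f0": f0, "f1": f1}
expressions = []
for name, fn in primitives.items():
    expressions.append((name, fn, 1))                              # length-1 expressions
    expressions.append((f"not {name}", lambda x, fn=fn: not fn(x), 2))
for (n1, fn1), (n2, fn2) in product(primitives.items(), repeat=2):
    expressions.append((f"{n1} and {n2}", lambda x, a=fn1, b=fn2: a(x) and b(x), 3))
    expressions.append((f"{n1} or {n2}", lambda x, a=fn1, b=fn2: a(x) or b(x), 3))

def posterior_score(fn, length):
    """Simplicity prior (shorter is better) times fit to the labeled examples."""
    prior = 2.0 ** (-length)
    likelihood = 1.0 if all(fn(x) == y for x, y in examples) else 0.0
    return prior * likelihood

best = max(expressions, key=lambda e: posterior_score(e[1], e[2]))
print("best expression:", best[0])
```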

According to logical theories of meaning, the meaning of an expression can be formalized and encoded in truth conditions. The vagueness of language and individual differences between people are challenging to incorporate into such meaning representations. In this paper, we propose a new approach to studying truth-conditional representations of vague concepts.
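
One hedged way to picture how vagueness and individual differences enter truth-conditional representations is to let each simulated speaker carry their own threshold for a vague quantifier; the population parameters below are invented and the setup is not the paper's proposal.

```python
import random

random.seed(2)

# Each simulated person has their own threshold for "many", drawn from a
# population distribution; all numbers are invented for illustration.
POPULATION_MEAN, POPULATION_SD, N_PEOPLE = 0.55, 0.08, 200
thresholds = [random.gauss(POPULATION_MEAN, POPULATION_SD) for _ in range(N_PEOPLE)]

def agreement(proportion):
    """Share of simulated people who judge 'many' true at this proportion."""
    return sum(proportion > t for t in thresholds) / N_PEOPLE

for p in (0.3, 0.5, 0.55, 0.6, 0.8):
    print(f"proportion {p:.2f}: {agreement(p):.0%} judge 'many' true")
```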

Despite wide variation among natural languages, there are linguistic properties thought to be universal to all or nearly all languages. Here, we consider universals at the semantic level, in the domain of quantifiers, which are given by the properties of monotonicity, quantity, and conservativity, and we investigate whether these universals might be explained by differences in complexity. First, we use a minimal pair methodology and compare the complexities of individual quantifiers using approximate Kolmogorov complexity.
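
A common way to approximate Kolmogorov complexity is to compress a description of a quantifier's extension over small models and take the compressed length; the sketch below follows that idea with zlib, though the exact representation and compressor are assumptions rather than the paper's pipeline.

```python
import zlib

MAX = 8  # consider all models with up to MAX elements in each cell

def extension_bits(quantifier):
    """Truth values of the quantifier over all small models, as a byte string.

    A model is summarized by the cell sizes |A and B| and |A minus B|,
    which is all that quantity-respecting quantifiers can depend on.
    """
    bits = []
    for a_and_b in range(MAX + 1):
        for a_minus_b in range(MAX + 1):
            bits.append("1" if quantifier(a_and_b, a_minus_b) else "0")
    return "".join(bits).encode()

def approx_complexity(quantifier):
    """Compressed length as a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(extension_bits(quantifier), 9))

quantifiers = {
    "most":           lambda ab, a_not_b: ab > a_not_b,
    "at least 3":     lambda ab, a_not_b: ab >= 3,
    "an even number": lambda ab, a_not_b: ab % 2 == 0,
}
for name, q in quantifiers.items():
    print(f"{name:15s} approx. complexity = {approx_complexity(q)}")
```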

The language of thought hypothesis and connectionism provide two main accounts of category acquisition in the cognitive sciences. However, it is unclear to what extent their predictions agree. In this article, we tackle this problem by comparing the two accounts with respect to a common set of predictions about the effort required to acquire categories.

The pattern of implicatures of the modified numeral "more than n" depends on the roundness of n. Cummins et al. (2012) present experimental evidence for the relation between roundness and implicature patterns and propose a pragmatic account of the phenomenon.

The vocabulary of human languages has been argued to support efficient communication by optimizing the trade-off between simplicity and informativeness. This argument was originally based on cross-linguistic analyses of vocabulary in semantic domains of content words, such as kinship, color, and number terms. The present work applies this analysis to a category of function words: indefinite pronouns (e.g., "someone" and "anyone").
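
For intuition, the sketch below computes a crude simplicity cost (lexicon size) and communicative cost (expected listener ambiguity) for a few invented toy lexicons; the meaning space, lexicons, and both measures are illustrative assumptions, not the paper's model of indefinite pronouns.

```python
# A minimal sketch of the simplicity/informativeness trade-off, with made-up
# meanings and lexicons; not the paper's actual measures or data.
MEANINGS = ["specific-known", "specific-unknown", "question", "negation"]

# Each toy lexicon maps a word to the set of meanings it can express.
lexicons = {
    "one word for everything": {"w1": set(MEANINGS)},
    "one word per meaning": {f"w{i}": {m} for i, m in enumerate(MEANINGS)},
    "two coarse words": {"w1": {"specific-known", "specific-unknown"},
                         "w2": {"question", "negation"}},
}

def simplicity_cost(lexicon):
    """Cruder-is-simpler: count the words in the lexicon."""
    return len(lexicon)

def communicative_cost(lexicon):
    """Expected ambiguity: how many meanings the listener must guess among."""
    total = 0.0
    for meaning in MEANINGS:
        words = [w for w, ms in lexicon.items() if meaning in ms]
        # Speaker picks any applicable word; listener guesses uniformly within it.
        total += sum(len(lexicon[w]) for w in words) / len(words)
    return total / len(MEANINGS)

for name, lex in lexicons.items():
    print(f"{name:25s} simplicity cost={simplicity_cost(lex)} "
          f"communicative cost={communicative_cost(lex):.2f}")
```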

Different classes of quantifiers provably require different verification algorithms with different complexity profiles. The algorithm for proportional quantifiers, like 'most', is more complex than that for nonproportional quantifiers, like 'all' and 'three'. We tested the hypothesis that different complexity profiles affect ERP responses during sentence verification, but not during sentence comprehension.
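
The complexity contrast can be illustrated with simple verification procedures: 'all' and a lower-bounded reading of 'three' can be checked with a fixed, bounded amount of memory, whereas 'most' has to maintain counts of two categories. The sketch below is an illustration in that spirit, not the experimental task.

```python
# Illustration (not the experimental procedure) of the complexity contrast:
# 'all' and 'at least three' can be verified with finite memory, while 'most'
# has to keep unbounded counts of both kinds of objects.
def verify_all(stream):
    """'All As are B': fail on the first A that is not B; finite-state."""
    for is_a, is_b in stream:
        if is_a and not is_b:
            return False
    return True

def verify_at_least_three(stream):
    """'At least three As are B': a counter capped at 3 suffices; finite-state."""
    count = 0
    for is_a, is_b in stream:
        if is_a and is_b:
            count = min(count + 1, 3)
    return count >= 3

def verify_most(stream):
    """'Most As are B': needs the full (unbounded) counts of both cells."""
    a_and_b = a_not_b = 0
    for is_a, is_b in stream:
        if is_a:
            if is_b:
                a_and_b += 1
            else:
                a_not_b += 1
    return a_and_b > a_not_b

# Each object is a pair (is_a, is_b); e.g. (True, False) is an A that is not B.
scene = [(True, True), (True, False), (True, True), (False, True), (True, True)]
print(verify_all(scene), verify_at_least_three(scene), verify_most(scene))
```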

Natural languages exhibit many semantic universals, that is, properties of meaning shared across all languages. In this paper, we develop an explanation of one very prominent semantic universal, the monotonicity universal. While existing work has shown that quantifiers satisfying the monotonicity universal are easier to learn, we provide a more complete explanation by considering the emergence of quantifiers from the perspective of cultural evolution.
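
For concreteness, the sketch below tests upward monotonicity over small models summarized by the sizes of A∩B and A∖B; the representation and the set of example quantifiers are assumptions for illustration, not the paper's evolutionary simulation.

```python
# A small check of upward monotonicity (in the quantifier's second argument),
# stated over cell sizes |A and B| and |A minus B|. Enlarging B turns
# A-minus-B objects into A-and-B objects, so monotonicity can be tested by
# shifting one object across the two cells.
MAX = 10

def is_upward_monotone(quantifier):
    for ab in range(MAX):
        for a_not_b in range(1, MAX):
            if quantifier(ab, a_not_b) and not quantifier(ab + 1, a_not_b - 1):
                return False
    return True

quantifiers = {
    "most":           lambda ab, a_not_b: ab > a_not_b,
    "at least 3":     lambda ab, a_not_b: ab >= 3,
    "an even number": lambda ab, a_not_b: ab % 2 == 0,
    "exactly 2":      lambda ab, a_not_b: ab == 2,
}
for name, q in quantifiers.items():
    print(f"{name:15s} upward monotone: {is_upward_monotone(q)}")
```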

Semantic universals are properties of meaning shared by the languages of the world. We offer an explanation of the presence of such universals by measuring simplicity in terms of ease of learning, showing that expressions satisfying universals are simpler than those that do not according to this criterion. We measure ease of learning using tools from machine learning and analyze universals in a domain of function words (quantifiers) and content words (color terms).
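
A rough proxy for ease of learning, in this spirit, is how well a small learner does on a quantifier after a fixed training budget. The sketch below uses a hand-rolled logistic-regression learner on invented features; the setup is an assumption for illustration and much simpler than the machine-learning models used in the paper.

```python
import math
import random

random.seed(0)

# Toy proxy for "ease of learning": accuracy of a tiny logistic-regression
# learner on a quantifier after a fixed training budget. The features,
# learner, and budget are illustrative assumptions.
def make_data(quantifier, n=400, max_size=20):
    data = []
    for _ in range(n):
        ab, a_not_b = random.randint(0, max_size), random.randint(0, max_size)
        data.append(((ab / max_size, a_not_b / max_size), quantifier(ab, a_not_b)))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_and_score(data, epochs=30, lr=1.0):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), label in data:
            err = sigmoid(w0 * x0 + w1 * x1 + b) - (1.0 if label else 0.0)
            w0, w1, b = w0 - lr * err * x0, w1 - lr * err * x1, b - lr * err
    correct = sum((sigmoid(w0 * x0 + w1 * x1 + b) > 0.5) == label
                  for (x0, x1), label in data)
    return correct / len(data)

most = lambda ab, a_not_b: ab > a_not_b    # satisfies the monotonicity universal
even = lambda ab, a_not_b: ab % 2 == 0     # violates it
print("accuracy on 'most':          ", train_and_score(make_data(most)))
print("accuracy on 'an even number':", train_and_score(make_data(even)))
```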

Theory of mind refers to the human capacity for reasoning about others' mental states based on observations of their actions and unfolding events. This type of reasoning is notorious in the cognitive science literature for its presumed computational intractability. A possible reason is that it may involve higher-order thinking (e.g., reasoning about what others believe about others' beliefs).

The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g., "More than half of the dots are blue").
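
One standard way to model such verification perceptually is an approximate-number-system comparison with Weber-fraction noise; the sketch below uses that textbook noise model with an invented Weber fraction, and is not necessarily the model tested in the paper.

```python
import random

random.seed(1)

# Approximate-number-system style sketch: each set size is represented with
# Gaussian noise whose spread grows with the size (Weber fraction w).
# The Weber fraction and set sizes are invented for illustration.
WEBER_FRACTION = 0.2

def noisy_estimate(n):
    return random.gauss(n, WEBER_FRACTION * n)

def judge_more_than_half(n_blue, n_total, trials=10_000):
    """Proportion of trials on which noisy comparison says blue > non-blue."""
    n_other = n_total - n_blue
    hits = sum(noisy_estimate(n_blue) > noisy_estimate(n_other)
               for _ in range(trials))
    return hits / trials

for n_blue in (11, 13, 16, 20):
    p = judge_more_than_half(n_blue, n_total=20)
    print(f"{n_blue:2d} of 20 blue -> judged 'more than half' on {p:.0%} of trials")
```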

We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only with proportional quantifiers, like "more than half."

Human intentional communication is marked by its flexibility and context sensitivity. Hypothesized brain mechanisms can provide convincing and complete explanations of the human capacity for intentional communication only insofar as they can match the computational power required for displaying that capacity. It is thus important for cognitive neuroscience to know how computationally complex intentional communication actually is.

We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality.
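
Computational models of simple quantifier verification are often stated as finite automata reading the objects one by one. The sketch below gives a two-state automaton for "an even number of As are B" in that spirit; it is an illustration, not the paper's experimental material.

```python
# A minimal semantic-automata style sketch (in the spirit of the computational
# models discussed, not taken from the paper): a two-state finite automaton
# that verifies "an even number of As are B" while reading the objects one by
# one. Each input symbol records whether the object is an A that is B.
EVEN_AUTOMATON = {
    "states": {"even", "odd"},
    "start": "even",
    "accepting": {"even"},
    # transition: on seeing an A-that-is-B, flip parity;
    # every other object leaves the state unchanged.
    "step": lambda state, a_and_b: (
        ("odd" if state == "even" else "even") if a_and_b else state
    ),
}

def run(automaton, scene):
    state = automaton["start"]
    for a_and_b in scene:
        state = automaton["step"](state, a_and_b)
    return state in automaton["accepting"]

# True marks an A that is also B; False marks any other object in the scene.
print(run(EVEN_AUTOMATON, [True, False, True, True]))   # 3 such As -> False
print(run(EVEN_AUTOMATON, [True, False, True, False]))  # 2 such As -> True
```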
