Determining the extent to which the perceptual world can be recovered from language is a longstanding problem in philosophy and cognitive science. We show that state-of-the-art large language models can unlock new insights into this problem by providing a lower bound on the amount of perceptual information that can be extracted from language. Specifically, we elicit pairwise similarity judgments from GPT models across six psychophysical datasets.
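The elicitation procedure described above can be sketched as follows; the `query_model` stub is a hypothetical stand-in for an actual GPT API call (the authors' prompts and model calls are not shown in the abstract), used here only so the pairwise-matrix assembly runs offline.

```python
from itertools import combinations

def query_model(item_a: str, item_b: str) -> float:
    """Hypothetical stand-in for a GPT call that would return a rated
    similarity between two stimuli on a 0-1 scale. A trivial character
    overlap (Jaccard) placeholder keeps the sketch self-contained."""
    shared = set(item_a) & set(item_b)
    union = set(item_a) | set(item_b)
    return len(shared) / max(len(union), 1)

def similarity_matrix(items):
    """Build a symmetric pairwise similarity matrix by querying the
    model once per unordered pair of stimuli."""
    n = len(items)
    sim = [[1.0] * n for _ in range(n)]  # self-similarity fixed at 1
    for (i, a), (j, b) in combinations(enumerate(items), 2):
        s = query_model(a, b)
        sim[i][j] = sim[j][i] = s
    return sim

stimuli = ["red", "rose", "blue"]
matrix = similarity_matrix(stimuli)
```

Symmetry comes for free because each unordered pair is queried once and written to both cells; a real elicitation might instead average the two query orders to control for prompt-order effects.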
Prosodic stresses are known to affect the meaning of utterances, but in many cases exactly how they do so is not understood. We focus on the mechanisms underlying the meaning effects of ironic prosody (e.g.
The existence of a mapping between emotions and speech prosody is commonly assumed. We propose a Bayesian modelling framework to analyse this mapping. Our models are fitted to a large collection of intended emotional prosody comprising more than 3,000 minutes of recordings.
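In highly simplified form, such a mapping analysis can be read as Bayesian inversion: inferring a posterior over emotion categories from an observed prosodic feature. The priors and likelihoods below are invented toy numbers for illustration, not the paper's fitted model.

```python
# Toy Bayesian inversion: P(emotion | pitch) ∝ P(pitch | emotion) · P(emotion)
# All numbers here are invented for illustration.
priors = {"joy": 0.5, "sadness": 0.5}
# Likelihood of observing a "high" mean pitch under each emotion
likelihood_high_pitch = {"joy": 0.8, "sadness": 0.2}

# Bayes' rule: multiply likelihood by prior, then normalise
unnorm = {e: likelihood_high_pitch[e] * priors[e] for e in priors}
total = sum(unnorm.values())
posterior = {e: p / total for e, p in unnorm.items()}
# posterior["joy"] = 0.4 / 0.5 = 0.8
```

A fitted model would replace the point likelihoods with distributions over continuous prosodic features (pitch, intensity, rate) estimated from the recordings, but the inversion step is the same.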