Scientific discoveries often hinge on synthesizing decades of research, a task that can outstrip human information-processing capacities. Large language models (LLMs) offer a potential solution: trained on the vast scientific literature, they could integrate noisy yet interrelated findings to forecast novel results better than human experts.
Human language relies on a rich cognitive machinery, partially shared with other animals. One key mechanism, however, the decomposition of events into causally linked agent-patient roles, has remained elusive, with no known animal equivalent. In humans, agent-patient relations in event cognition drive how languages are processed neurally and how expressions are structured syntactically.