In visual word identification, readers automatically access word-internal information: they recognize orthographically embedded words (e.g., HAT in THAT) and are sensitive to morphological structure (DEAL-ER, BASKET-BALL). The exact mechanisms that govern these processes, however, are not yet well established: how is this information used? What is the role of affixes in this process? To address these questions, we tested the activation of the meaning of embedded word stems in the presence or absence of morphological structure, using two semantic categorization tasks in Italian. Participants made category decisions on words (e.g., is CARROT a type of food?). Some of the no-answer items (is CORNER a type of food?) contained category-congruent embedded word stems (i.e., CORN-). Moreover, the embedded stems could be accompanied by a pseudo-suffix (-er in CORNER) or a non-morphological ending (-ce in PEACE), which allowed us to gauge the role of pseudo-suffixes in stem activation. The analyses of accuracy and response times revealed that words were harder to reject as members of a category when they contained a category-congruent embedded word stem. Critically, this was the case regardless of the presence or absence of a pseudo-suffix. These findings provide evidence that the lexical identification system activates the meaning of embedded word stems when the task requires semantic information. This study brings together research on orthographic neighbors and morphological processing, yielding results that have important implications for models of visual word processing.
DOI: http://dx.doi.org/10.3758/s13423-019-01664-z
Background: Investigators and funding organizations want to know which topics and trends characterize publicly funded research, but current manual categorization efforts have been limited in both breadth and depth.
Purpose: We present a semi-automated analysis of 21 years of R-type National Cancer Institute (NCI) grants to departments of radiation oncology and radiology using natural language processing (NLP).
Methods: We selected all non-education R-type NCI grants from 2000 to 2020 awarded to departments of radiation oncology/radiology with affiliated schools of medicine.
Sci Rep
January 2025
Nanfang College Guangzhou, Guangzhou, 510970, China.
Named Entity Recognition (NER) is an essential component of numerous Natural Language Processing (NLP) systems, with the aim of identifying and classifying entities that have specific meanings in raw text, such as person (PER), location (LOC), and organization (ORG). Recently, Deep Neural Networks (DNNs) have been extensively applied to NER tasks owing to the rapid development of deep learning technology. However, despite their advancements, these models fail to take full advantage of multi-level features.
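To make the task concrete, NER systems typically emit per-token BIO tags (B-PER, I-LOC, O, ...) that are then decoded into entity spans with labels such as PER, LOC, and ORG. The sketch below is illustrative only (it is not from the article above) and shows a minimal BIO-to-span decoder in plain Python:

```python
def decode_bio(tokens, tags):
    """Convert parallel token/BIO-tag lists into (text, label, start, end) spans."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            # A new entity begins; close any entity still open.
            if start is not None:
                spans.append((" ".join(tokens[start:i]), label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            # Continuation of the current entity.
            continue
        else:
            # "O" tag (or an inconsistent I- tag): close any open entity.
            if start is not None:
                spans.append((" ".join(tokens[start:i]), label, start, i))
            start, label = None, None
    if start is not None:
        spans.append((" ".join(tokens[start:]), label, start, len(tokens)))
    return spans

tokens = ["Barack", "Obama", "visited", "Kuala", "Lumpur", "."]
tags = ["B-PER", "I-PER", "O", "B-LOC", "I-LOC", "O"]
entities = decode_bio(tokens, tags)
# -> [("Barack Obama", "PER", 0, 2), ("Kuala Lumpur", "LOC", 3, 5)]
```

Whatever multi-level features a DNN tagger uses internally, its output can be consumed through a decoder of this shape.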
R Soc Open Sci
January 2025
School of Physics, The University of Sydney, Sydney, Australia.
Clustering short text is a difficult problem, owing to the low word co-occurrence between short text documents. This work shows that large language models (LLMs) can overcome the limitations of traditional clustering approaches by generating embeddings that capture the semantic nuances of short text. In this study, clusters are found in the embedding space using Gaussian mixture modelling.
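The pipeline described, embedding each short text and then fitting a Gaussian mixture in the embedding space, can be sketched in miniature. The example below is a hedged illustration: it substitutes tiny hand-made 2-D vectors for real LLM embeddings and implements a bare-bones spherical-Gaussian EM in plain Python rather than a production library.

```python
import math

def gmm_cluster(points, k, iters=30):
    """Hard-cluster points by fitting a spherical Gaussian mixture with EM."""
    dim, n = len(points[0]), len(points)
    # Deterministic init: evenly spaced data points as the initial means.
    means = [list(points[(i * n) // k]) for i in range(k)]
    var = [1.0] * k
    weight = [1.0 / k] * k
    resp = []
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        # (normalizing constants that cancel in the ratio are dropped).
        resp = []
        for p in points:
            dens = []
            for j in range(k):
                d2 = sum((a - b) ** 2 for a, b in zip(p, means[j]))
                dens.append(weight[j] * math.exp(-d2 / (2 * var[j])) / var[j] ** (dim / 2))
            total = sum(dens) or 1.0
            resp.append([d / total for d in dens])
        # M-step: re-estimate mixture weights, means, and variances.
        for j in range(k):
            rj = sum(r[j] for r in resp) or 1e-12
            weight[j] = rj / n
            means[j] = [sum(r[j] * p[i] for r, p in zip(resp, points)) / rj
                        for i in range(dim)]
            var[j] = max(1e-6, sum(r[j] * sum((a - b) ** 2 for a, b in zip(p, means[j]))
                                   for r, p in zip(resp, points)) / (rj * dim))
    return [max(range(k), key=lambda j: r[j]) for r in resp]

# Stand-ins for LLM embeddings of six short texts, forming two clear groups.
emb = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
labels = gmm_cluster(emb, 2)
```

With real embeddings one would use a full-covariance mixture from a library such as scikit-learn, but the clustering logic is the same.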
Methods Inf Med
January 2025
Artificial Intelligence Lab, Mimos Berhad, Kuala Lumpur, Malaysia.
Objective: This is the first Malaysian machine learning model to detect and disambiguate abbreviations in clinical notes. The model has been designed to be incorporated into MyHarmony, a Natural Language Processing system that extracts clinical information for healthcare management. The model uses word embeddings so that it remains feasible to run within the constraints of low-resource settings, for secondary analysis rather than real-time use.
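One common embedding-based recipe for abbreviation disambiguation is to average the vectors of the surrounding words and pick the candidate long form whose vector is most similar. The sketch below illustrates that idea only; the vectors and sense names are hypothetical toy values, not MyHarmony's actual model or data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand_abbreviation(context_words, sense_vectors, word_vectors):
    """Pick the long form whose embedding is closest to the averaged context."""
    known = [word_vectors[w] for w in context_words if w in word_vectors]
    dim = len(next(iter(sense_vectors.values())))
    ctx = [sum(v[i] for v in known) / len(known) for i in range(dim)]
    return max(sense_vectors, key=lambda s: cosine(ctx, sense_vectors[s]))

# Hand-made 3-D vectors (hypothetical; a real system would use trained embeddings).
word_vectors = {
    "neurology": (1.0, 0.1, 0.0), "lesion": (0.9, 0.2, 0.1),
    "valve": (0.1, 1.0, 0.0), "murmur": (0.0, 0.9, 0.2),
}
sense_vectors = {  # candidate long forms for the abbreviation "MS"
    "multiple sclerosis": (1.0, 0.0, 0.0),
    "mitral stenosis": (0.0, 1.0, 0.0),
}
sense = expand_abbreviation(["neurology", "lesion"], sense_vectors, word_vectors)
# -> "multiple sclerosis"
```

Because the method needs only pre-trained vectors and a similarity computation, it suits batch secondary analysis in low-resource settings.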
ISA Trans
January 2025
State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240, China.
This paper addresses the critical challenge of interpretability in machine learning methods for machine fault diagnosis by introducing a novel ad hoc interpretable neural network structure called Sparse Temporal Logic Network (STLN). STLN conceptualizes network neurons as logical propositions and constructs formal connections between them using specified logical operators, which can be articulated and understood as a formal language called Weighted Signal Temporal Logic. The network includes a basic word network using wavelet kernels to extract intelligible features, a transformer encoder with sparse and structured neural attention to locate informative signal segments relevant to decision-making, and a logic network to synthesize a coherent language for fault explanation.
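The formal language underlying STLN, signal temporal logic, comes with quantitative ("robustness") semantics: a predicate's robustness is its margin from the threshold, "always" takes the worst case over the trace, and "eventually" the best. The sketch below shows only these standard semantics on a toy vibration trace; the thresholds and signals are illustrative and the learned weights and network structure of STLN itself are not reproduced.

```python
def atomic(signal, threshold):
    """Robustness of the predicate x(t) > threshold at every time step."""
    return [x - threshold for x in signal]

def neg(rho):
    return [-r for r in rho]

def always(rho):
    return min(rho)      # G: worst case over the trace

def eventually(rho):
    return max(rho)      # F: best case over the trace

# Spec for a vibration trace: "the amplitude always stays below 3.0",
# i.e. G(not (x > 3.0)); positive robustness means satisfied, with margin.
healthy = [1.0, 2.0, 2.5]
faulty = [1.0, 4.0, 2.0]
r_healthy = always(neg(atomic(healthy, 3.0)))  # 0.5  -> satisfied
r_faulty = always(neg(atomic(faulty, 3.0)))    # -1.0 -> violated
```

Reading the sign and magnitude of the robustness value is what makes such logic-structured diagnoses interpretable: it says not only whether a fault condition was triggered but by how much.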