How does attentional modulation of neural activity enhance performance? Here we use a deep convolutional neural network as a large-scale model of the visual system to address this question. We model the feature similarity gain model of attention, in which attentional modulation is applied according to neural stimulus tuning. Using a variety of visual tasks, we show that neural modulations of the kind and magnitude observed experimentally lead to performance changes of the kind and magnitude observed experimentally. We find that, at earlier layers, attention applied according to tuning does not successfully propagate through the network, and has a weaker impact on performance than attention applied according to values computed for optimally modulating higher areas. This raises the question of whether biological attention might be applied at least in part to optimize function rather than strictly according to tuning. We suggest a simple experiment to distinguish these alternatives.
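The feature similarity gain model described above multiplies a unit's activity by a gain that increases with how similar the unit's tuning is to the attended feature. A minimal sketch of that modulation applied to a convolutional layer's activations is below; the strength parameter `beta` and the tuning values are hypothetical placeholders, not values from the study.

```python
import numpy as np

def apply_feature_similarity_gain(activations, tuning, beta=0.5):
    """Scale each feature map by a gain that grows with the channel's
    tuning similarity to the attended category (feature similarity gain).

    activations : (channels, height, width) feature maps
    tuning      : (channels,) similarity of each channel's preferred
                  feature to the attended category, in [-1, 1]
    beta        : attention strength (hypothetical value)
    """
    gain = 1.0 + beta * tuning        # gain > 1 for similar, < 1 for dissimilar
    gain = np.clip(gain, 0.0, None)   # keep modulated rates non-negative
    return activations * gain[:, None, None]

# Toy example: one channel tuned toward the target, one neutral, one away
acts = np.ones((3, 2, 2))
tuned = np.array([1.0, 0.0, -1.0])
out = apply_feature_similarity_gain(acts, tuned, beta=0.5)
print(out[:, 0, 0])  # gains of 1.5, 1.0, 0.5 applied per channel
```

Units tuned toward the attended feature are enhanced and units tuned away are suppressed, which is the multiplicative modulation pattern the abstract refers to.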
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6207429
DOI: http://dx.doi.org/10.7554/eLife.38105
Cardiovasc Revasc Med
January 2025
Weatherhead PET Imaging Center, Division of Cardiology, Department of Medicine, McGovern Medical School at UTHealth and Memorial Hermann Hospital, Houston, TX, United States of America.
Patients with angina but without obstructive epicardial coronary disease still require a specific mechanistic diagnosis to enable targeted treatment. The overarching term "coronary microvascular dysfunction" (CMD) has been applied broadly - but is it correct? We present a series of case examples culminating in a systematic exploration of our large clinical database to distinguish among four categories of coronary pathophysiology. First, by far the largest group of "no stenosis angina" patients exhibits subendocardial ischemia with intact flow through diffuse epicardial disease during dipyridamole vasodilator stress.
Comput Biol Med
January 2025
Department of Pathology, Peking University Health Science Center, 38 College Road, Haidian, Beijing, 100191, China; Department of Pathology, School of Basic Medical Sciences, Third Hospital, Peking University Health Science Center, Beijing, 100191, China. Electronic address:
Background: Ovarian cancer is among the most lethal gynecologic malignancies that threaten women's lives. Pathological diagnosis is a key tool for early detection and diagnosis of ovarian cancer, guiding treatment strategies. The evaluation of various ovarian cancer-related cells, based on morphological and immunohistochemical pathology images, is deemed an important step.
Neuroimage
January 2025
Department of Computer Science, University of Innsbruck, Technikerstrasse 21a, Innsbruck, 6020, Austria. Electronic address:
The objective of this study is to assess the potential of a transformer-based deep learning approach applied to event-related brain potentials (ERPs) derived from electroencephalographic (EEG) data. Traditional methods involve averaging the EEG signal of multiple trials to extract valuable neural signals from the high noise content of EEG data. However, this averaging technique may conceal relevant information.
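The conventional trial-averaging baseline mentioned above can be sketched briefly; averaging epoched EEG across trials attenuates noise that is uncorrelated across trials (roughly by 1/sqrt(n)), which is also why it can wash out trial-by-trial structure. Array shapes and the ERP waveform here are hypothetical illustrations, not the study's data.

```python
import numpy as np

def erp_average(trials):
    """Average epoched EEG across trials to estimate the event-related
    potential (ERP).

    trials : (n_trials, n_channels, n_samples) epoched EEG
    """
    return trials.mean(axis=0)

# Toy demo: a fixed waveform buried in heavy trial-by-trial noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
erp = np.sin(2 * np.pi * 3 * t)                        # hypothetical ERP component
trials = erp + rng.normal(0, 2.0, size=(100, 1, 200))  # 100 noisy single trials
avg = erp_average(trials)
print(np.abs(avg[0] - erp).mean())  # far below the single-trial noise level
```

A transformer operating on single trials, as the study proposes, avoids this averaging step and so can, in principle, exploit the trial-level variability that the mean discards.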
Med Oral Patol Oral Cir Bucal
January 2025
Hospital Universitario "Dr. José Eleuterio González" Av. Dr. José Eleuterio González 235, Mitras Centro 64460 Monterrey, Mexico
Background: Craniofacial mucormycosis is a highly lethal infectious disease. This study aims to assess and analyze multiple variables, including clinical, socioeconomic, and biochemical markers, to identify and examine risk factors for mortality associated with this mycotic infection.
Material And Methods: A retrospective analysis was conducted on 38 patients who sought medical attention at the Otolaryngology and Head and Neck Surgery Division of a tertiary-level hospital in Monterrey, Mexico.
Sci Rep
January 2025
Department of Electrical Power, Adama Science and Technology University, Adama, 1888, Ethiopia.
Although the Transformer architecture has established itself as the standard for natural language processing tasks, it still has few applications in computer vision. In vision, attention is used either in conjunction with convolutional networks or to replace individual convolutional network components while preserving the overall network design. Differences between the two domains, such as large variations in the scale of visual entities and the much higher resolution of pixels in images compared with words in text, make it difficult to transfer the Transformer from language to vision.
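The basic move for applying a Transformer to pixels, which the abstract alludes to, is to split the image into patches and treat each flattened patch as a token, reducing the sequence length from one-per-pixel to one-per-patch. A minimal sketch is below; the patch size is a hypothetical choice, not a value from the study.

```python
import numpy as np

def image_to_patch_tokens(image, patch=4):
    """Split an image into non-overlapping patches and flatten each
    patch into a token vector.

    image : (H, W, C) array, with H and W divisible by `patch`
    """
    H, W, C = image.shape
    gh, gw = H // patch, W // patch
    # Regroup pixels into a (grid_h, grid_w) grid of patch blocks,
    # then flatten each block into one token of length patch*patch*C.
    tokens = (image.reshape(gh, patch, gw, patch, C)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(gh * gw, patch * patch * C))
    return tokens

img = np.zeros((8, 8, 3))
print(image_to_patch_tokens(img).shape)  # (4, 48): 2x2 grid of 4x4x3 patches
```

Choosing the patch size trades off token-sequence length against within-token resolution, which is one way the scale and resolution mismatch between vision and language is handled.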