Redundancy masking is the reduction of the perceived number of items in repeating patterns. It shares a number of characteristics with crowding, the impairment of target identification in visual clutter. Crowding strongly depends on the location of the target in the visual field. For example, it is stronger in the upper than in the lower visual field and is usually weakest on the horizontal meridian. This pattern of visual field asymmetries is common in spatial vision, as revealed by tasks measuring, for example, spatial resolution and contrast sensitivity. Here, to characterize redundancy masking and reveal its similarities to and differences from other spatial tasks, we investigated whether redundancy masking shows the same typical visual field asymmetries. Observers were presented with three to six radially arranged lines at 10° eccentricity at one of eight locations around fixation and were asked to report the number of lines. We found asymmetries that differed markedly from those found in crowding. Redundancy masking did not differ between upper and lower visual fields. Importantly, redundancy masking was stronger on the horizontal meridian than on the vertical meridian, the opposite of what is usually found in crowding. These results show that redundancy masking diverges from crowding with regard to visual field asymmetries, suggesting different underlying mechanisms of redundancy masking and crowding. We suggest that the observed atypical visual field asymmetries in redundancy masking are due to the superior extraction of regularity and a more pronounced compression of visual space on the horizontal compared to the vertical meridian.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9012886 | PMC |
| http://dx.doi.org/10.1167/jov.22.5.4 | DOI Listing |
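The abstract gives the key stimulus parameters: three to six radially arranged lines at 10° eccentricity, at one of eight locations around fixation. As a minimal Python sketch of that geometry, assuming equally spaced locations and an illustrative 0.5° line spacing that is not stated in the abstract:

```python
import math

ECC_DEG = 10.0          # target eccentricity in degrees of visual angle
N_LOCATIONS = 8         # locations around fixation, assumed equally spaced
LINE_SPACING_DEG = 0.5  # center-to-center radial spacing (hypothetical value)

def radial_line_centers(location_index: int, n_lines: int):
    """Return (x, y) centers in degrees for n_lines arranged along the
    radial axis at one of the eight locations around fixation."""
    theta = 2 * math.pi * location_index / N_LOCATIONS  # polar angle of location
    cx, cy = ECC_DEG * math.cos(theta), ECC_DEG * math.sin(theta)
    # offsets along the radial direction, centered on the target location
    offsets = [(i - (n_lines - 1) / 2) * LINE_SPACING_DEG for i in range(n_lines)]
    return [(cx + d * math.cos(theta), cy + d * math.sin(theta)) for d in offsets]

# Example: centers of four lines on the right horizontal meridian (location 0)
print(radial_line_centers(0, 4))
```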
Int J Med Inform
December 2024
School of Medicine, Anhui University of Science & Technology, Huainan 232001, PR China.
Background: Patients with end-stage renal disease (ESRD) undergoing hemodialysis (HD) exhibit a high mortality risk, particularly at the onset of treatment. Conventional risk assessment models, dependent on extensive temporal data accumulation, frequently encounter issues of data incompleteness and lengthy collection periods.
Objective: This study addresses the imbalance in short-term HD data and the issue of missing data features to achieve a robust assessment of mortality risk for HD patients over the subsequent 30 to 450 days.
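The abstract names two concrete obstacles, class imbalance in short-term data and missing features, but does not describe the authors' model. The scikit-learn sketch below only illustrates a common generic recipe for those two issues (imputation plus class-weighted training); all names and parameter values are hypothetical:

```python
# Illustrative sketch only: the abstract does not disclose the authors' model.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier

risk_model = Pipeline([
    # Fill in missing feature values (median is a simple, robust default)
    ("impute", SimpleImputer(strategy="median")),
    # Reweight classes so the rare mortality outcome is not drowned out
    ("clf", RandomForestClassifier(class_weight="balanced", n_estimators=300)),
])

# X: short-term dialysis features with gaps (NaN); y: 1 if death within horizon
# risk_model.fit(X_train, y_train)
# risk_scores = risk_model.predict_proba(X_test)[:, 1]
```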
Med Phys
December 2024
School of Artificial Intelligence, Hebei University of Technology, Tianjin, China.
Background: Accurate musculoskeletal ultrasound (MSKUS) image segmentation is crucial for diagnosis and treatment planning. Compared with traditional segmentation methods, deep learning methods that balance segmentation efficiency, accuracy, and model size offer greater advantages when deployed on edge devices.
Purpose: This paper aims to design a MSKUS image segmentation method that has fewer parameters, lower computational complexity, and higher segmentation accuracy.
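The proposed architecture is not detailed in this summary. One standard way to cut parameters and computation for edge deployment is to replace dense convolutions with depthwise separable convolutions (the MobileNet-style building block), sketched below in PyTorch as a generic illustration rather than the authors' design:

```python
# Generic parameter-saving block for edge segmentation models, not the
# specific architecture proposed in the paper.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1,
                                   groups=in_ch, bias=False)
        # Pointwise: 1x1 conv mixes channels; together far fewer weights
        # than a dense 3x3 convolution with the same channel counts
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A dense 3x3 conv 64->128 needs 73,728 weights; this block needs
# 64*9 + 64*128 = 8,768.
block = DepthwiseSeparableConv(64, 128)
out = block(torch.randn(1, 64, 128, 128))  # e.g. an ultrasound feature map
```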
Sensors (Basel)
October 2024
Department of Computer Science and Engineering, Hanyang University, Seoul 04763, Republic of Korea.
Conventional approaches to video action recognition perform global attention over the entire set of video patches, which may be ineffective due to the temporal redundancy of video frames. Recent works on masked video modeling adopt a high-ratio tube masking and reconstruction strategy as a pre-training method to mitigate models' tendency to capture spatial features well but temporal features poorly. Inspired by this pre-training method, we propose Fusion Attention for Action Recognition (FAR), which fuses the sparse-dense attention patterns specialized for temporal features with global attention during fine-tuning.
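FAR's fusion mechanism is not spelled out in this summary, but the high-ratio tube masking it builds on is simple to sketch: one spatial mask is sampled and repeated across every frame, so a masked patch stays hidden through time. The shapes and the 90% ratio below are illustrative assumptions:

```python
# Sketch of high-ratio "tube" masking used in masked video pre-training.
import torch

def tube_mask(n_frames: int, n_patches: int, mask_ratio: float = 0.9):
    """Boolean mask of shape (n_frames, n_patches); True = masked."""
    n_masked = int(n_patches * mask_ratio)
    idx = torch.randperm(n_patches)[:n_masked]   # one spatial pattern...
    spatial = torch.zeros(n_patches, dtype=torch.bool)
    spatial[idx] = True
    return spatial.unsqueeze(0).expand(n_frames, -1)  # ...shared by every frame

mask = tube_mask(n_frames=16, n_patches=196)   # 14x14 patch grid, ~90% masked
print(mask.shape, mask.float().mean().item())  # torch.Size([16, 196]) ~0.9
```

Because the mask forms tubes through time, the model cannot reconstruct a masked patch by copying it from a neighboring frame, which forces it to learn temporal structure.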
BMC Med Inform Decis Mak
October 2024
Department of Mathematics, Dilla University, Dilla, Ethiopia.
IEEE Trans Neural Netw Learn Syst
September 2024