Multiple attention-based models that recognize objects via a sequence of glimpses have reported results on handwritten numeral recognition. However, no attention-tracking data for handwritten numeral or alphabet recognition is available; such data would allow attention-based models to be evaluated against human performance. We collect mouse-click attention-tracking data from 382 participants recognizing handwritten numerals and alphabet characters (upper- and lowercase) from images via sequential sampling, using images from benchmark datasets as stimuli. The collected dataset, called AttentionMNIST, consists of a sequence of sample (mouse-click) locations, the predicted class label(s) at each sampling, and the duration of each sampling. On average, our participants observe only 12.8% of an image for recognition. We propose a baseline model to predict the location and the class(es) a participant will select at the next sampling. When exposed to the same stimuli and experimental conditions as our participants, a highly cited attention-based reinforcement model falls short of human efficiency.
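The abstract does not specify the released file format of AttentionMNIST, so the following is only a guess at how a single trial could be represented in Python. All field names (`location`, `predicted_labels`, `duration_ms`, and so on) and the fixed-patch coverage assumption in `fraction_observed` are illustrative assumptions, not the published schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GlimpseStep:
    """One mouse-click sampling event (field names are assumed, not the released schema)."""
    location: Tuple[int, int]    # (x, y) pixel coordinates of the click on the stimulus image
    predicted_labels: List[str]  # class label(s) the participant selected after this sample
    duration_ms: float           # time spent on this sampling step

@dataclass
class Trial:
    """One participant's attempt at recognizing a single stimulus image."""
    participant_id: int
    stimulus_id: str             # identifier of the benchmark image shown as stimulus
    steps: List[GlimpseStep]     # ordered sequence of glimpses

    def fraction_observed(self, image_area: int, patch_area: int) -> float:
        """Crude upper bound on the fraction of the image revealed, assuming each
        click exposes a fixed-size patch and ignoring overlap between patches."""
        return min(1.0, len(self.steps) * patch_area / image_area)
```

Averaging `fraction_observed` over all trials would be one rough way to reproduce a coverage statistic such as the 12.8% figure quoted above, under the fixed-patch assumption.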
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9971057 | PMC |
| http://dx.doi.org/10.1038/s41598-023-29880-7 | DOI Listing |
Int J Biol Macromol
December 2024
College of Chemistry and Bioengineering, Guilin University of Technology, Guilin 541004, Guangxi, China; Guangxi Key Laboratory of Electrochemical and Magneto-chemical Functional Materials, College of Chemistry and Bioengineering, Guilin University of Technology, Guilin 541004, China.
Conductive hydrogels based on sodium alginate (SA) have potential applications in human-activity monitoring and personal medical diagnosis due to their good conductivity and flexibility. However, most sensing SA hydrogels exhibit poor mechanical properties and lack self-healing, self-adhesive, and antibacterial properties, which greatly limits their practical applications. Therefore, in this paper, a multifunctional double-network PAA-SA hydrogel consisting of poly(acrylic acid) (PAA) and sodium alginate was prepared by a simple strategy.
Adv Sci (Weinh)
December 2024
State Key Laboratory of Advanced Displays and Optoelectronics Technologies, Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology (HKUST), Hong Kong, China.
Data Brief
October 2024
VIT-AP University, Amaravati, Andhra Pradesh, India.
This dataset presents a comprehensive collection of handwritten Grantha characters, comprising numbers and vowels, gathered from participants spanning diverse age groups. Participants were instructed to handwrite the Grantha characters on standard A4 sheets. The Grantha script encompasses 10 numbers and 34 vowels.
ACS Nano
June 2024
State Key Laboratory of Silicon and Advanced Semiconductor Materials, Cyrus Tang Center for Sensor Materials and Applications, School of Materials Science and Engineering, Zhejiang University, Hangzhou 310027, China.
Retina-inspired visual sensors play a crucial role in realizing neuromorphic visual systems. Nevertheless, significant obstacles remain in achieving bidirectional synaptic behavior and high performance under photostimulation. In this study, we propose a reconfigurable, all-optically controlled synaptic device based on an IGZO/SnO/SnS heterostructure, which integrates sensing, storage, and processing functions.
Sensors (Basel)
May 2024
Department of Civil and Environmental Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada.
Structural engineers are often required to draw two-dimensional engineering sketches for quick structural analysis, either by hand calculation or using analysis software. However, calculation by hand is slow and error-prone, and the manual conversion of a hand-drawn sketch into a virtual model is tedious and time-consuming. This paper presents a complete and autonomous framework for converting a hand-drawn engineering sketch into an analyzed structural model using a camera and computer vision.
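The paper's own pipeline is not reproduced here; as a loose illustration of what an early computer-vision step in such a sketch-to-model framework might look like, the snippet below extracts straight line segments from a photographed sketch using a standard Canny edge detector and probabilistic Hough transform in OpenCV. The thresholds are arbitrary assumptions and would need tuning for real images.

```python
import cv2
import numpy as np

def extract_line_segments(image_path: str):
    """Detect straight line segments in a photographed hand-drawn sketch.
    Illustrative only: parameter values below are arbitrary assumptions."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # Smooth the image, then find edges along the pen strokes.
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Probabilistic Hough transform returns candidate segments as (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=30, maxLineGap=10)
    return [] if segments is None else [tuple(s[0]) for s in segments]
```

Mapping the detected segments onto structural members, joints, and supports, which the framework described in the paper handles, is the substantially harder step and is not attempted in this sketch.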