In grasping studies, maximum grip aperture (MGA) is commonly used as an indicator of the object size representation within the visuomotor system. However, a number of additional factors, such as movement safety, comfort, and efficiency, might affect the scaling of MGA with object size and potentially mask perceptual effects on actions. While unimanual grasping has been investigated for a wide range of object sizes, so far very small objects (<5 mm) have not been included. Investigating grasping of these tiny objects is particularly interesting because it allows us to evaluate the three most prominent explanatory accounts of grasping (the perception-action model, the digits-in-space hypothesis, and the biomechanical account) by comparing the predictions that they make for these small objects. In the first experiment, participants ( ) grasped and manually estimated the height of square cuboids with heights from 0.5 to 5 mm. In the second experiment, a different sample of participants ( ) performed the same tasks with square cuboids with heights from 5 to 20 mm. We determined MGAs, manual estimation apertures (MEA), and the corresponding just-noticeable differences (JND). In both experiments, MEAs scaled with object height and adhered to Weber's law. MGAs for grasping scaled with object height in the second experiment but not consistently in the first experiment. JNDs for grasping never scaled with object height. We argue that the digits-in-space hypothesis provides the most plausible account of the data. Furthermore, the findings highlight that the reliability of MGA as an indicator of object size is strongly task-dependent.
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11281983 (PMC)
DOI: http://dx.doi.org/10.1007/s00426-024-01947-8
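The abstract's test of Weber's law can be made concrete with a short sketch: under Weber's law, the just-noticeable difference (JND) is proportional to stimulus magnitude. The Weber fraction of 0.1 below is an arbitrary illustrative value, not one reported in the study.

```python
# Illustrative sketch of Weber's law: JND = k * S, i.e. discrimination
# thresholds grow in proportion to stimulus magnitude. The Weber
# fraction k = 0.1 is a hypothetical value for illustration only.

def jnd(stimulus_mm: float, weber_fraction: float = 0.1) -> float:
    """Predicted just-noticeable difference for a stimulus magnitude."""
    return weber_fraction * stimulus_mm

# Object heights spanning the two experiments (0.5-20 mm).
heights = [0.5, 5.0, 20.0]
jnds = [jnd(h) for h in heights]
print(jnds)  # JNDs scale linearly with object height
```

A measure that adheres to Weber's law (like the manual estimates here) should show this linear scaling of JNDs; one that does not (like the grasping JNDs reported) stays flat across object sizes.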
Sci Rep
January 2025
School of Food and Pharmacy, Zhejiang Ocean University, Zhoushan, 316022, People's Republic of China.
Accurate and rapid segmentation of key parts of frozen tuna, along with precise pose estimation, is crucial for automated processing. However, challenges such as size differences and indistinct features of tuna parts, as well as the complexity of determining fish poses in multi-fish scenarios, hinder this process. To address these issues, this paper introduces TunaVision, a vision model based on YOLOv8 designed for automated tuna processing.
Polymers (Basel)
January 2025
Department of Chemistry and Pharmacy, Interdisciplinary Center for Molecular Materials, Friedrich-Alexander Universität Erlangen-Nürnberg, Egerlandstr. 3, 91058 Erlangen, Germany.
pH-responsive polyamidoamine (PAMAM) dendrimers are used as well-defined building blocks to design light-switchable nano-assemblies in solution. The complex interplay between the photoresponsive di-anionic azo dye Acid Yellow 38 (AY38) and the cationic PAMAM dendrimers of different generations is presented in this study. Electrostatic self-assembly involving secondary dipole-dipole interactions provides well-defined assemblies within a broad size range (10 nm-1 μm) with various shapes.
Sensors (Basel)
January 2025
Equipment Management and UAV Engineering School, Air Force Engineering University, Xi'an 710051, China.
To allow person detection in surveillance footage to be deployed on edge devices and to run efficiently in real time in resource-constrained environments, a lightweight person detection model based on YOLOv8n was proposed. The model balances high accuracy with low computational cost and a small parameter count. First, the MSBlock module was introduced into YOLOv8n.
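The abstract does not detail MSBlock's internals, but the parameter-count arithmetic that motivates lightweight detector design can be sketched with a common substitution: replacing a standard 3x3 convolution with a depthwise-separable one. The channel sizes below are hypothetical.

```python
# Sketch of the parameter arithmetic behind "lightweight" detector
# design. This compares a standard 3x3 convolution with a
# depthwise-separable one -- a common lightweight building block, used
# here only as a generic illustration (not MSBlock itself).

def conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    # Standard convolution: every output channel mixes every input channel.
    return k * k * c_in * c_out

def dw_separable_params(c_in: int, c_out: int, k: int = 3) -> int:
    # Depthwise k*k filter per input channel, then a 1x1 pointwise mix.
    return k * k * c_in + c_in * c_out

c_in, c_out = 128, 128
standard = conv_params(c_in, c_out)           # 147456 weights
separable = dw_separable_params(c_in, c_out)  # 17536 weights
print(standard, separable, round(standard / separable, 1))
```

Savings of this order are what let detectors of the YOLOv8n class fit within edge-device compute and memory budgets.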
Sensors (Basel)
January 2025
Laboratory of Adaptive Lighting Systems and Visual Processing, Technical University of Darmstadt, Hochschulstr. 4a, 64289 Darmstadt, Germany.
Thermopile sensor arrays strike a workable balance between person detection and localization capability on the one hand and privacy on the other, which their low resolution preserves. The latter is especially important in the context of smart building automation applications. Current research highlights two machine-learning-based algorithms as particularly prominent for general object detection: You Only Look Once (YOLOv5) and the Detection Transformer (DETR).
Sensors (Basel)
January 2025
College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China.
Compared with conventional targets, small objects are harder to detect: they occupy fewer pixels, offer lower resolution and weaker contrast, and suffer more background interference. To address this issue, this paper proposes an improved small object detection method based on the YOLO11 model, PC-YOLO11s. The core innovation of PC-YOLO11s lies in the optimization of the detection network structure, which includes the following aspects: First, PC-YOLO11s adjusts the hierarchical structure of the detection network and adds a P2 layer dedicated to small object detection.
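The value of a P2 layer for small objects follows from simple stride arithmetic: each pyramid level downsamples the input by its stride, so a stride-4 P2 grid retains spatial detail that deeper levels lose. The 640-pixel input below is a typical YOLO default, assumed here for illustration.

```python
# Why a P2 head helps small objects: one grid cell on P2 (stride 4)
# covers a 4x4-pixel patch, versus 8x8 on P3, 16x16 on P4, and so on.
# The 640-pixel input size is a common YOLO default, assumed here.

def grid_size(input_px: int, stride: int) -> int:
    """Side length of the feature-map grid at a given stride."""
    return input_px // stride

strides = {"P2": 4, "P3": 8, "P4": 16, "P5": 32}
grids = {name: grid_size(640, s) for name, s in strides.items()}
print(grids)  # {'P2': 160, 'P3': 80, 'P4': 40, 'P5': 20}

# A 6-pixel-wide object spans more than one cell on P2 but fits inside
# a single cell on P3 and deeper, where its features blur into context.
```

This is why small-object variants of detection networks add a high-resolution head rather than only retuning the existing P3-P5 levels.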