People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider the semantic representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploring the prior knowledge of the 3D body structure. Specifically, we project 2D images to a 3D space and introduce a novel parameter-efficient omni-scale graph network (OG-Net) to learn the pedestrian representation directly from 3D point clouds. OG-Net effectively exploits the local information provided by sparse 3D points and takes advantage of the structure and appearance information in a coherent manner. With the help of 3D geometry information, we can learn a new type of deep re-id feature free from noisy variants, such as scale and viewpoint. To our knowledge, we are among the first attempts to conduct person re-id in the 3D space. We demonstrate through extensive experiments that the proposed method: (1) eases the matching difficulty in the traditional 2D space; (2) exploits the complementary information of 2D appearance and 3D structure; (3) achieves competitive results with limited parameters on four large-scale person re-id datasets; and (4) has good scalability to unseen datasets. Our code, models, and generated 3D human data are publicly available at https://github.com/layumi/person-reid-3d.
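The abstract describes projecting 2D pedestrian images into a 3D point cloud before representation learning. One common way to obtain such a coloured point cloud, assuming per-pixel depth (e.g., rendered from an estimated human mesh) and known camera intrinsics, is pinhole back-projection. This is an illustrative sketch, not the authors' released code; the function name and inputs are assumptions.

```python
import numpy as np

def backproject_to_pointcloud(depth, rgb, K):
    """Back-project per-pixel depth into a coloured 3D point cloud.

    depth: (H, W) array of depths in metres; zeros mark invalid pixels.
    rgb:   (H, W, 3) array of colours in [0, 1].
    K:     (3, 3) camera intrinsic matrix.
    Returns an (N, 6) array of [x, y, z, r, g, b] points, one per valid pixel.
    """
    H, W = depth.shape
    # Pixel coordinate grids: u runs along width, v along height.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    # Invert the pinhole projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    xyz = np.stack([x, y, z], axis=1)
    return np.concatenate([xyz, rgb[valid]], axis=1)
```

The resulting (N, 6) points, carrying both 3D structure and 2D appearance, are the kind of input a point-cloud network such as OG-Net consumes.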
DOI: http://dx.doi.org/10.1109/TNNLS.2022.3214834
BMC Med Inform Decis Mak
December 2024
Uppsala Monitoring Centre, Uppsala, Sweden.
Background: Automated recognition and redaction of personal identifiers in free text can enable organisations to share data while protecting privacy. This is important in the context of pharmacovigilance since relevant detailed information on the clinical course of events, differential diagnosis, and patient-reported reflections may often only be conveyed in narrative form. The aim of this study is to develop and evaluate a method for automated redaction of person names in English narrative text on adverse event reports.
Med Image Anal
December 2024
University of Strasbourg, CAMMA, ICube, CNRS, INSERM, France; IHU Strasbourg, Strasbourg, France.
Accurate tool tracking is essential for the success of computer-assisted intervention. Previous efforts often modeled tool trajectories rigidly, overlooking the dynamic nature of surgical procedures, especially tracking scenarios like out-of-body and out-of-camera views. Addressing this limitation, the new CholecTrack20 dataset provides detailed labels that account for multiple tool trajectories in three perspectives: (1) intraoperative, (2) intracorporeal, and (3) visibility, representing the different types of temporal duration of tool tracks.
Sensors (Basel)
November 2024
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China.
Video-based pedestrian re-identification (Re-ID) is used to re-identify the same person across different camera views. One of the key problems is to learn an effective representation for the pedestrian from video. However, it is difficult to learn an effective representation from one single modality of a feature due to complicated issues with video, such as background, occlusion, and blurred scenes.
Sci Rep
November 2024
School of Computer Science and Technology (School of Cyberspace Security), Xinjiang University, Urumqi, 830046, China.
Visible-Infrared Person Re-identification (VI-ReID) has been consistently challenged by the significant intra-class variations and cross-modality differences between different cameras. Therefore, the key lies in how to extract discriminative modality-shared features. Existing VI-ReID methods based on Convolutional Neural Networks (CNN) and Vision Transformers (ViT) have shortcomings in capturing global features and controlling computational complexity, respectively.
Sci Rep
November 2024
School of Artificial Intelligence, Hebei University of Technology, Tianjin, 300401, China.
To tackle the high resource consumption of occluded person re-identification, sparse attention mechanisms based on Vision Transformers (ViTs) have become popular. However, they often suffer from performance degradation on long sequences, omission of crucial information, and token representation convergence. To address these issues, we introduce AIRHF-Net, an Adaptive Interaction Representation Hierarchical Fusion Network designed to enhance pedestrian identity recognition in occluded scenarios.