Comprehensive empirical evaluation of feature extractors in computer vision.

PeerJ Comput Sci

Computer Engineering, Faculty of Engineering and Architecture, Kirsehir Ahi Evran University, Kirsehir, Turkey.

Published: November 2024

AI Article Synopsis

  • This study reviews traditional feature detection and matching methods in computer vision, including SIFT, SURF, and ORB, highlighting their architectures and computational complexities.
  • It benchmarks these algorithms on the Image Matching Challenge Photo Tourism 2020 dataset of over 1.5 million images, finding the FAST detector paired with the ORB descriptor to be the fastest combination for feature extraction and matching (see the sketch after this list).
  • It identifies which algorithms, such as AKAZE and ORB, remain the most resilient and efficient under different image transformations and disturbances.
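The fastest pipeline reported above (FAST keypoints, ORB descriptors, Brute-Force matching) can be reproduced in a few lines of OpenCV. The sketch below is a minimal illustration, not the study's benchmark code; it assumes OpenCV's Python bindings (opencv-python), placeholder image paths, and an arbitrary FAST threshold.

```python
import cv2

# Load two grayscale images to match (paths are placeholders).
img1 = cv2.imread("scene_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_b.jpg", cv2.IMREAD_GRAYSCALE)

# FAST is a detector only: it finds corner keypoints but computes no descriptor.
fast = cv2.FastFeatureDetector_create(threshold=25)
kp1 = fast.detect(img1, None)
kp2 = fast.detect(img2, None)

# ORB computes binary descriptors at the FAST keypoints.
orb = cv2.ORB_create()
kp1, des1 = orb.compute(img1, kp1)
kp2, des2 = orb.compute(img2, kp2)

# Brute-Force matching with Hamming distance suits binary descriptors;
# cross-checking keeps only mutually best matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches")
```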

Article Abstract

Feature detection and matching are fundamental components in computer vision, underpinning a broad spectrum of applications. This study offers a comprehensive evaluation of traditional feature detectors and descriptors, analyzing methods such as Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Binary Robust Independent Elementary Features (BRIEF), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), KAZE, Accelerated KAZE (AKAZE), Fast Retina Keypoint (FREAK), Dense and Accurate Invariant Scalable descriptor for Yale (DAISY), Features from Accelerated Segment Test (FAST), and STAR. Each feature extractor was assessed based on its architectural design and complexity, focusing on how these factors influence computational efficiency and robustness under various transformations. Utilizing the Image Matching Challenge Photo Tourism 2020 dataset, which includes over 1.5 million images, the study identifies the FAST algorithm as the most efficient detector when paired with the ORB descriptor and Brute-Force (BF) matcher, offering the fastest feature extraction and matching process. ORB is notably effective on affine-transformed and brightened images, while AKAZE excels in conditions involving blurring, fisheye distortion, image rotation, and perspective distortions. Through more than 2 million comparisons, the study highlights the feature extractors that demonstrate superior resilience across various conditions, including rotation, scaling, blurring, brightening, affine transformations, perspective distortions, fisheye distortion, and salt-and-pepper noise.
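As a rough illustration of the kind of robustness comparison the abstract describes, the hedged sketch below rotates an image and counts how many AKAZE matches survive the transformation. The 30-degree angle, the ratio-test threshold, and the image path are arbitrary assumptions, not the study's actual protocol.

```python
import cv2

# Load a test image (placeholder path) and rotate it about its center.
img = cv2.imread("scene_a.jpg", cv2.IMREAD_GRAYSCALE)
h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)
rotated = cv2.warpAffine(img, M, (w, h))

# AKAZE detects keypoints and computes binary descriptors in one call.
akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img, None)
kp2, des2 = akaze.detectAndCompute(rotated, None)

# Lowe's ratio test discards ambiguous correspondences.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in bf.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} matches survive a 30-degree rotation")
```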


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11623105
DOI: http://dx.doi.org/10.7717/peerj-cs.2415

Publication Analysis

Top Keywords

  • feature extractors (8)
  • computer vision (8)
  • binary robust (8)
  • invariant scalable (8)
  • fisheye distortion (8)
  • perspective distortions (8)
  • feature (7)
  • comprehensive empirical (4)
  • empirical evaluation (4)
  • evaluation feature (4)

Similar Publications

The identification of cancer driver genes is crucial for understanding the complex processes involved in cancer development, progression, and therapeutic strategies. Multi-omics data and biological networks provided by numerous databases enable the application of graph deep learning techniques that incorporate network structures into the deep learning framework. However, most existing methods do not account for heterophily in biological networks, which limits model performance.


TarIKGC: A Target Identification Tool Using Semantics-Enhanced Knowledge Graph Completion with Application to CDK2 Inhibitor Discovery.

J Med Chem

January 2025

State Key Laboratory of Anti-Infective Drug Discovery and Development, School of Pharmaceutical Sciences, Sun Yat-sen University, Guangzhou 510006, China.

Target identification is a critical stage in the drug discovery pipeline. Various computational methodologies have been dedicated to enhancing the classification performance of compound-target interactions, yet significant room remains for improving recommendation performance. To address this challenge, we developed TarIKGC, a tool for target prioritization that leverages semantics-enhanced knowledge graph (KG) completion.


Few-shot learning (FSL) methods have made remarkable progress in the field of plant disease recognition, especially in scenarios with limited available samples. However, current FSL approaches are usually limited to a restrictive setting where base classes and novel classes come from the same domain, such as PlantVillage. Consequently, when the model is generalized to new domains (field disease datasets), its performance drops sharply.


Emotion recognition using multi-scale EEG features through graph convolutional attention network.

Neural Netw

December 2024

School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China.

Emotion recognition via electroencephalogram (EEG) signals holds significant promise across various domains, including the detection of emotions in patients with consciousness disorders, assisting in the diagnosis of depression, and assessing cognitive load. This process is critically important in the development and research of brain-computer interfaces, where precise and efficient recognition of emotions is paramount. In this work, we introduce a novel approach for emotion recognition employing multi-scale EEG features, termed the Dynamic Spatial-Spectral-Temporal Network (DSSTNet).


Emotion recognition is a critical research topic within affective computing, with potential applications across various domains. Currently, EEG-based emotion recognition, utilizing deep learning frameworks, has been effectively applied and achieved commendable performance. However, existing deep learning-based models face challenges in capturing both the spatial activity features and spatial topology features of EEG signals simultaneously.

