In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as apparent color changes considerably with variations in illumination, object pose, and camera viewpoint. These color-invariant derivatives are then used to compute similarity-invariant shape descriptors. Shape invariance is equally important because, under a change in camera viewpoint or object pose, the shape of a rigid object undergoes a perspective projection onto the image plane. The color and shape invariants are then combined in a multidimensional color-shape context, which is subsequently used as an index. Because the indexing scheme uses a color- and shape-invariant context, it provides a highly discriminative cue that is robust to varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and clutter. Experimental results show that the method recognizes rigid objects with high accuracy in complex 3-D scenes and is robust to changes in illumination, camera viewpoint, object pose, and noise.
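
As a rough illustration of the pipeline described above, the sketch below computes color-invariant derivatives, builds a joint color-shape descriptor per edge point, and matches descriptors between two images. It is a minimal sketch under stated assumptions, not the paper's implementation: it assumes normalized rg chromaticity as the color invariant, Sobel filters for the invariant derivatives, a log-polar shape context extended with a small number of color bins, and a chi-square matching cost. The function names (color_invariant_edges, color_shape_context, chi_square_match) and all parameter choices are illustrative, not taken from the paper.

```python
# Hedged sketch of a color-shape context pipeline in the spirit of the abstract.
# Assumptions (not from the paper): normalized rg chromaticity as the color
# invariant, Sobel derivatives, 5 radial x 12 angular x 4 color bins, and a
# chi-square matching score.
import numpy as np
from scipy import ndimage


def color_invariant_edges(rgb, grad_thresh=0.05):
    """Edge points and an invariant color label per edge pixel.

    rgb: float array (H, W, 3) in [0, 1].
    Returns (points Nx2, color_labels N) for pixels whose invariant
    gradient magnitude exceeds grad_thresh.
    """
    s = rgb.sum(axis=2) + 1e-8
    r, g = rgb[..., 0] / s, rgb[..., 1] / s          # normalized rg chromaticity
    gx = ndimage.sobel(r, axis=1) + ndimage.sobel(g, axis=1)
    gy = ndimage.sobel(r, axis=0) + ndimage.sobel(g, axis=0)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_thresh)
    # quantize the invariant r value into 4 coarse color bins (0..3)
    color_labels = np.digitize(r[ys, xs], [0.25, 0.4, 0.55])
    return np.stack([ys, xs], axis=1).astype(float), color_labels


def color_shape_context(points, labels, n_r=5, n_theta=12, n_color=4):
    """Joint log-polar shape / color histogram per edge point."""
    n = len(points)
    diffs = points[None, :, :] - points[:, None, :]          # vectors i -> j
    dists = np.linalg.norm(diffs, axis=2)
    angles = np.arctan2(diffs[..., 0], diffs[..., 1]) % (2 * np.pi)
    mean_d = dists[dists > 0].mean()                          # scale normalization
    r_bins = np.logspace(np.log10(0.125), np.log10(2.0), n_r)
    ctx = np.zeros((n, n_r, n_theta, n_color))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rb = np.searchsorted(r_bins, dists[i, j] / mean_d)
            if rb >= n_r:
                continue                                      # too far: ignore
            tb = int(angles[i, j] / (2 * np.pi) * n_theta) % n_theta
            ctx[i, rb, tb, labels[j]] += 1
    ctx /= ctx.sum(axis=(1, 2, 3), keepdims=True) + 1e-8      # normalize per point
    return ctx.reshape(n, -1)


def chi_square_match(ctx_a, ctx_b):
    """Mean chi-square cost between greedily matched context pairs."""
    cost = 0.5 * ((ctx_a[:, None] - ctx_b[None]) ** 2
                  / (ctx_a[:, None] + ctx_b[None] + 1e-8)).sum(axis=2)
    return cost.min(axis=1).mean()
```

In this sketch, indexing a database would amount to storing the per-object context sets and ranking objects by the matching cost; because each context is computed per edge point, occluded or cluttered regions only raise the cost of some pairs rather than invalidating the match outright.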

Source: http://dx.doi.org/10.1109/tip.2005.860320
