IEEE Trans Cybern
Published: July 2022
Accurately classifying sceneries with different spatial configurations is an indispensable technique in computer vision and intelligent systems, for example, scene parsing, robot motion planning, and autonomous driving. Remarkable performance has been achieved by deep recognition models in the past decade. To our knowledge, however, these deep architectures are incapable of explicitly encoding human visual perception, that is, the sequence of gaze movements and the subsequent cognitive processes. In this article, a biologically inspired deep model is proposed for scene classification, where human gaze behaviors are robustly discovered and represented by a unified deep active learning (UDAL) framework. More specifically, to characterize objects' components of varied sizes, an objectness measure is employed to decompose each scenery into a set of semantically aware object patches. To represent each region at a low level, a local-global feature fusion scheme is developed that optimally integrates multimodal features by automatically calculating each feature's weight. To mimic human visual perception of various sceneries, we develop the UDAL, which hierarchically represents human gaze behavior by recognizing semantically important regions within the scenery. Importantly, UDAL combines semantically salient region detection and deep gaze shifting path (GSP) representation learning into a principled framework, where only partial semantic tags are required. Meanwhile, by incorporating a sparsity penalty, contaminated/redundant low-level regional features can be intelligently avoided. Finally, the learned deep GSP features from the entire scene images are integrated to form an image kernel machine, which is subsequently fed into a kernel SVM to classify different sceneries. Experimental evaluations on six well-known scenery sets (including remote sensing images) have shown the competitiveness of our approach.
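As a hedged illustration of the final stage described above, in which learned per-image features are fed into a kernel SVM, here is a minimal sketch using scikit-learn with an explicitly precomputed RBF image kernel. The feature vectors are random stand-ins, not actual deep GSP features, and all sizes and parameter choices are illustrative assumptions.

```python
# Illustrative sketch only: random vectors stand in for the learned deep
# GSP features; the kernel-SVM stage mirrors the abstract's description.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 16))          # 40 "images", 16-dim stand-in features
y = np.repeat([0, 1], 20)              # two toy scene classes
X[y == 1] += 2.0                       # shift one class so the toy task is learnable

# Precompute an RBF image kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
gamma = 0.1
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)

# Train the SVM directly on the precomputed kernel matrix.
clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)                  # training accuracy on the toy data
```

Note that with `kernel="precomputed"`, predictions on new images require a kernel matrix of values between the new images and the training images, not between the new images themselves.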
DOI: http://dx.doi.org/10.1109/TCYB.2020.2981480
Neural Netw
March 2025
School of Artificial Intelligence, Xidian University, Xi'an 710119, China.
Land use and land cover (LULC) classification is a popular research area in remote sensing. The information in single-modal data is insufficient for accurate classification, especially in complex scenes, while the complementarity of multi-modal data such as hyperspectral images (HSIs) and light detection and ranging (LiDAR) data can effectively improve classification performance. The attention mechanism has recently been widely used in multi-modal LULC classification methods to achieve better feature representation.
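The attention-based fusion mentioned above can be sketched, in a purely illustrative way, as softmax-normalized modality weights applied to HSI and LiDAR feature vectors. The scoring projection below is a random stand-in for a learned scoring network; all names and dimensions are assumptions, not details from the paper.

```python
# Hypothetical sketch of attention-style multi-modal fusion: score each
# modality, normalize the scores with a softmax, and take the weighted sum.
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
hsi_feat = rng.normal(size=64)         # stand-in hyperspectral feature vector
lidar_feat = rng.normal(size=64)       # stand-in LiDAR feature vector

# A random projection stands in for a learned scoring network.
w = rng.normal(size=64)
scores = [w @ hsi_feat, w @ lidar_feat]
alpha = softmax(scores)                # attention weights over the two modalities
fused = alpha[0] * hsi_feat + alpha[1] * lidar_feat
```

In a trained model the scoring projection would be learned jointly with the classifier, so the weights adapt to how informative each modality is for a given scene.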
Sci Rep
March 2025
Mathematical and Physical Sciences, Wuhan Textile University, Wuhan, China.
Remote sensing images present formidable classification challenges due to their complex spatial organization, high inter-class similarity, and significant intra-class variability. To balance computational efficiency and feature extraction capability, which existing methods struggle to do, this paper proposes a lightweight convolutional network, STConvNeXt. In its architectural design, the model incorporates a split-based mobile convolution module with a hierarchical tree structure.
Heliyon
February 2025
Guangxi Key Laboratory of Forest Ecology and Conservation, Nanning, 530004, China.
Point cloud classification is one of the key techniques in point cloud data processing and an important step toward applying point cloud data. However, single-point-based classification suffers from poor robustness, and single-scale point clusters consider only a single neighborhood, leading to insufficient feature representation. In addition, many cluster-based classification methods still require better ways of constructing point clusters and extracting features that represent point cloud objects.
MethodsX
June 2025
JNN College of Engineering, Shimoga, Karnataka, India.
Scene classification plays a vital role in many computer vision applications, but building deep learning models from scratch is time-intensive. Transfer learning sidesteps this cost by reusing a pretrained model for classification. In our proposed work, we introduce a novel multimodal feature extraction method and a feature selection technique to improve the efficiency of transfer learning in scene classification.
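The pipeline sketched in the abstract, fusing features from multiple modalities and then selecting the most informative ones before classification, can be illustrated with scikit-learn. The random features, the ANOVA-based selector, and the logistic-regression classifier below are all stand-in assumptions, not the paper's actual components.

```python
# Illustrative sketch only: random features stand in for the two modalities,
# and the selector/classifier choices are assumptions, not the paper's method.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 60
y = np.repeat([0, 1], n // 2)

feat_a = rng.normal(size=(n, 32))      # stand-in modality A (e.g., CNN features)
feat_b = rng.normal(size=(n, 32))      # stand-in modality B (e.g., texture features)
feat_a[y == 1, :4] += 3.0              # only a few columns carry class signal

fused = np.hstack([feat_a, feat_b])    # multimodal fusion by concatenation
selector = SelectKBest(f_classif, k=8) # keep the 8 most discriminative columns
X_sel = selector.fit_transform(fused, y)

clf = LogisticRegression().fit(X_sel, y)
acc = clf.score(X_sel, y)              # training accuracy on the toy data
```

Pruning the fused feature vector before classification keeps the downstream model small, which is the efficiency argument such transfer-learning pipelines typically make.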
bioRxiv
March 2025
Computational Perception Laboratory, Department of Psychology, Florida Gulf Coast University, Fort Myers FL 33965.
To correctly parse the visual scene, one must detect edges and determine their underlying cause. Previous work has demonstrated that image-computable neural networks trained to differentiate natural shadow and occlusion edges exhibited sensitivity to boundary sharpness and texture differences. Although these models showed a strong correlation with human performance on an edge classification task, this previous study did not directly investigate whether humans actually make use of boundary sharpness and texture cues when classifying edges as shadows or occlusions.
© LitMetric 2025. All rights reserved.