An Approach to the Use of Depth Cameras for Weed Volume Estimation.

Sensors (Basel)

Center for Automation and Robotics, Spanish National Research Council, CSIC-UPM, Arganda del Rey, Madrid 28500, Spain.

Published: June 2016

The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used to characterize the plant structure of several crops, but discriminating small plants, such as weeds, remains a challenge in agricultural fields. Improvements in the new Microsoft Kinect v2 sensor allow it to capture fine plant detail, and a dual methodology combining height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency between the 3D depth images and the structural parameters measured on the ground. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. Because the weeds were shorter, RGB recognition was needed to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with the volumetric measurements. A canonical discriminant analysis showed promising results for classifying weeds into monocots and dicots. These results suggest that volume estimation using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes through the construction of new systems integrating these sensors and the development of algorithms to properly process the information they provide.
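The dual methodology described above can be sketched as a two-stage rule on a colored point cloud: a green/non-green split separates vegetation from soil, and a height threshold then separates tall crop points from low weed points. The sketch below is a minimal illustration of that idea, not the authors' calibrated pipeline; the thresholds, the excess-green (ExG) vegetation index, and the point-cloud layout are all assumptions made for the example.

```python
import numpy as np

# Hypothetical point cloud: N x 6 array of (x, y, z, r, g, b), with z as
# height above the reconstructed soil surface (metres) and r, g, b in
# [0, 255]. Both thresholds are illustrative, not the paper's values.
MAIZE_MIN_HEIGHT = 0.30   # green points taller than this are treated as crop
EXG_THRESHOLD = 20.0      # excess-green cutoff separating vegetation from soil

def classify_points(cloud: np.ndarray) -> np.ndarray:
    """Label each point 0 = soil, 1 = weed, 2 = maize."""
    z = cloud[:, 2]
    r, g, b = cloud[:, 3], cloud[:, 4], cloud[:, 5]
    # Excess Green index (2G - R - B) highlights vegetation against soil.
    exg = 2.0 * g - r - b
    labels = np.zeros(len(cloud), dtype=int)
    is_green = exg > EXG_THRESHOLD
    labels[is_green] = 1                               # low green points -> weed
    labels[is_green & (z > MAIZE_MIN_HEIGHT)] = 2      # tall green points -> maize
    return labels

# Toy cloud: a brown soil point, a short green point, a tall green point.
cloud = np.array([
    [0.0, 0.0, 0.01, 120, 100, 80],   # brown, at ground level -> soil
    [0.1, 0.0, 0.05,  60, 160, 50],   # green, low            -> weed
    [0.2, 0.0, 0.80,  60, 160, 50],   # green, tall           -> maize
])
print(classify_points(cloud))  # -> [0 1 2]
```

Summing the voxel or mesh volume of each labelled class would then give the per-class volume estimates that the paper correlates with biomass.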


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4970024
DOI: http://dx.doi.org/10.3390/s16070972


Similar Publications

Monocular meta-imaging camera sees depth.

Light Sci Appl

January 2025

Innovation Center for Advanced Medical Imaging and Intelligent Medicine, Guangzhou Institute of Technology, Xidian University, Guangzhou, 510555, Guangdong, China.

A novel monocular depth-sensing camera based on meta-imaging sensor technology has been developed, offering more precise depth sensing with millimeter-level accuracy and enhanced robustness compared to conventional 2D and light-field cameras.


Advanced Driver Assistance Systems (ADAS) aim to fully automate transportation. A key part of this automation includes tasks such as traffic light detection and automatic braking. While indoor experiments are prevalent due to computational demands and safety concerns, there is a pressing need for research and development of new features to achieve complete automation, addressing real-world implementation challenges by testing them in outdoor environments.


The aim of the study was to determine the thickness of choroidal layers in mixed breed dogs suffering from retinal atrophy (RA) and showing symptoms of progressive retinal atrophy (PRA), with the use of SD-OCT. The study was performed on 50 dogs divided into two groups: 25 dogs diagnosed with retinal atrophy (RA) with PRA symptoms aged 1.5-14 years and 25 healthy dogs aged 2-12 years.


Purpose: Motion capture technology is quickly evolving providing researchers, clinicians, and coaches with more access to biomechanics data. Markerless motion capture and inertial measurement units (IMUs) are continually developing biomechanics tools that need validation for dynamic movements before widespread use in applied settings. This study evaluated the validity of a markerless motion capture, IMU, and red, green, blue, and depth (RGBD) camera system as compared to marker-based motion capture during countermovement jumps, overhead squats, lunges, and runs with cuts.


This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities.
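The encoding this snippet relies on, visualizing depth (Z) values through color gradation so that 2D segmentation networks can consume depth as an ordinary image, can be sketched as a simple normalization-and-colormap step. The working-distance range and the red-to-blue ramp below are illustrative assumptions, not details from that study.

```python
import numpy as np

# Hypothetical depth frame (metres) from a time-of-flight camera.
# Mapping Z to a color gradation turns a depth map into an 8-bit RGB
# image suitable for 2D segmentation frameworks; the normalization
# range is an assumed sensor working distance.
Z_MIN, Z_MAX = 0.2, 1.2

def depth_to_rgb(depth: np.ndarray) -> np.ndarray:
    """Map a 2D depth map to an 8-bit image via a red-to-blue gradation."""
    t = np.clip((depth - Z_MIN) / (Z_MAX - Z_MIN), 0.0, 1.0)
    rgb = np.zeros(depth.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = np.round(255 * (1.0 - t))  # near pixels -> red
    rgb[..., 2] = np.round(255 * t)          # far pixels  -> blue
    return rgb

depth = np.array([[0.2, 0.7],
                  [1.2, 0.45]])
img = depth_to_rgb(depth)
print(img[0, 0], img[1, 0])  # nearest pixel pure red, farthest pure blue
```

Any perceptually uniform colormap would serve the same purpose; the key point is that the lossy 3D-to-2D encoding preserves relative depth as color.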

