Robust perception systems allow farm robots to recognize weeds and vegetation, enabling the selective application of fertilizers and herbicides to mitigate the environmental impact of traditional agricultural practices. Today's perception systems typically rely on deep learning to interpret sensor data for tasks such as distinguishing soil, crops, and weeds. These approaches usually require substantial amounts of manually labeled training data, which is time-consuming to produce and requires domain expertise. This paper addresses this limitation by proposing an automated labeling pipeline for crop-weed semantic image segmentation in managed agricultural fields. It allows deep learning models to be trained with no or only limited manual labeling of images. Our system uses RGB images recorded with unmanned aerial or ground robots operating in the field and exploits the field's row structure to produce spatially consistent semantic labels. Identifying multiple crop rows from the detected row structure reduces labeling errors and improves consistency. We further reduce labeling errors by assigning an "unknown" class to vegetation that is challenging to segment. We use evidential deep learning because it provides uncertainty estimates for its predictions, which we use to refine the semantic predictions. Evidential deep learning assigns high uncertainty to the weed class, which is typically under-represented in the training data, allowing us to use this uncertainty to correct the semantic predictions. Experimental results suggest that our approach outperforms both general-purpose labeling methods and domain-specific approaches by a large margin across multiple fields and crop species. Using our generated labels to train deep learning models improves prediction performance on previously unseen fields, including unseen crop species, growth stages, and lighting conditions.
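The uncertainty mechanism described above can be sketched as follows. This is a minimal illustration of Dirichlet-based uncertainty in evidential deep learning (in the style of Sensoy et al.), not the paper's exact implementation; the function name and example evidence values are hypothetical:

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Dirichlet-based class probabilities and uncertainty for one pixel.

    evidence: non-negative per-class evidence e_k output by the network.
    Returns the expected class probabilities alpha/S and the
    uncertainty mass u = K/S, which lies in (0, 1].
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0   # Dirichlet concentration parameters
    S = alpha.sum()          # Dirichlet strength
    probs = alpha / S        # expected class probabilities
    u = K / S                # uncertainty mass: high when evidence is scarce
    return probs, u

# Confident pixel: strong evidence for one class -> low uncertainty.
_, u_low = evidential_uncertainty([50.0, 1.0, 0.0])
# Ambiguous pixel (e.g. an under-represented weed class) -> high uncertainty.
_, u_high = evidential_uncertainty([0.5, 0.5, 0.0])
print(u_low, u_high)
```

Pixels whose uncertainty exceeds a threshold can then be reassigned (e.g. to the "unknown" or weed class), which is the kind of uncertainty-driven correction the abstract describes.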
We obtain an IoU of 88.6% on crops and 22.7% on weeds on a managed sugar beet field, where fully supervised methods achieve 83.4% on crops and 33.5% on weeds, and unsupervised domain-specific methods achieve 54.6% on crops and 11.2% on weeds. Finally, our method allows fine-tuning models trained in a fully supervised fashion, improving their performance in unseen field conditions by up to +17.6% in mean IoU without additional manual labeling.
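The IoU figures reported here are per-class intersection-over-union scores. A minimal sketch of how they can be computed from predicted and ground-truth label maps (the function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Intersection-over-union for each class of a semantic segmentation map."""
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        # Undefined when the class appears in neither map.
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Toy 2x3 label maps with classes 0 (soil), 1 (crop), 2 (weed).
pred   = np.array([[0, 0, 1], [1, 2, 2]])
target = np.array([[0, 1, 1], [1, 2, 2]])
print(per_class_iou(pred, target, 3))  # -> [0.5, 0.666..., 1.0]
```

Mean IoU is the average of these per-class scores, skipping undefined classes.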
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11893429
DOI: http://dx.doi.org/10.3389/frobt.2025.1548143
JMIR Med Educ
March 2025
Division of Pulmonary, Critical Care, & Sleep Medicine, Department of Medicine, NYU Grossman School of Medicine, 550 First Avenue, 15th Floor, Medical ICU, New York, NY, 10016, United States. Phone: +1 212 263 5800.
Background: Although technology is rapidly advancing in immersive virtual reality (VR) simulation, there is a paucity of literature to guide its implementation into health professions education, and there are no described best practices for the development of this evolving technology.
Objective: We conducted a qualitative study using semistructured interviews with early adopters of immersive VR simulation technology to investigate how and why they use this technology in educational practice, and to identify the educational needs it can address.
Methods: We conducted 16 interviews with VR early adopters.
Br J Radiol
March 2025
Department of Medical Ultrasound, Shandong Medicine and Health Key Laboratory of Abdominal Medical Imaging, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China.
Objectives: To develop a deep learning (DL) model based on ultrasound (US) images of lymph nodes for predicting cervical lymph node metastasis (CLNM) in postoperative patients with differentiated thyroid carcinoma (DTC).
Methods: We retrospectively collected 352 lymph nodes from 330 patients with cytopathology findings between June 2021 and December 2023 at our institution. The dataset was randomly divided into training and test cohorts at an 8:2 ratio.
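An 8:2 random split like the one described can be sketched as follows (the function name and index-based items are illustrative; in practice one might also split at the patient level so that nodes from the same patient do not appear in both cohorts):

```python
import random

def split_8_2(items, seed=0):
    """Randomly split a dataset into training and test cohorts at an 8:2 ratio."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = round(0.8 * len(shuffled))
    return shuffled[:n_train], shuffled[n_train:]

# e.g. 352 lymph-node samples identified by index
train, test = split_8_2(list(range(352)))
print(len(train), len(test))  # -> 282 70
```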
Sci Adv
March 2025
College of Computer Science and Technology, Zhejiang University, Hangzhou, China.
Brain age gap (BAG), the deviation between estimated brain age and chronological age, is a promising marker of brain health. However, the genetic architecture of, and reliable targets for, brain aging remain poorly understood. In this study, we estimate magnetic resonance imaging (MRI)-based brain age using deep learning models trained on the UK Biobank and validated with three external datasets.
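By the definition above, BAG is simply the model-estimated brain age minus chronological age. A minimal sketch (example values are made up; studies commonly also regress out the age-dependent bias in BAG before downstream analyses):

```python
import numpy as np

def brain_age_gap(predicted_age, chronological_age):
    """Brain age gap (BAG): estimated brain age minus chronological age."""
    return np.asarray(predicted_age, float) - np.asarray(chronological_age, float)

predicted = [66.2, 54.1, 71.8]      # hypothetical model outputs (years)
chronological = [63.0, 55.0, 70.0]  # hypothetical true ages (years)
gap = brain_age_gap(predicted, chronological)
print(gap)  # positive values -> brain appears "older" than chronological age
```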
Sci Adv
March 2025
Department of Neurology, Johns Hopkins University, Baltimore, MD 21205, USA.
There is great interest in using genetically tractable organisms such as Drosophila to gain insights into the regulation and function of sleep. However, sleep phenotyping in Drosophila has largely relied on simple measures of locomotor inactivity. Here, we present FlyVISTA, a machine learning platform to perform deep phenotyping of sleep in flies.
Biomacromolecules
March 2025
Department of Physics, University of Central Florida, Orlando, Florida 32816-2385, United States.
We use a combination of Brownian dynamics (BD) simulation results and deep learning (DL) strategies for the rapid identification of large structural changes caused by missense mutations in intrinsically disordered proteins (IDPs). We used ∼6500 IDP sequences of length 20-300 from the MobiDB database to obtain gyration radii from BD simulations with a coarse-grained single-bead-per-amino-acid model (the HPS2 model) used by us and others [Dignon, G. L.
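The gyration radius extracted from such simulations is the root-mean-square distance of the beads from the chain's center of mass. A minimal sketch assuming equal bead masses, as in a single-bead-per-residue coarse-grained model (the coordinates are illustrative):

```python
import numpy as np

def radius_of_gyration(coords):
    """Radius of gyration of a chain with equal bead masses:
    RMS distance of beads from the chain's center of mass."""
    coords = np.asarray(coords, dtype=float)
    com = coords.mean(axis=0)  # center of mass (equal masses)
    return np.sqrt(((coords - com) ** 2).sum(axis=1).mean())

# Four beads on a unit square in the xy-plane -> Rg = sqrt(0.5)
square = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
print(radius_of_gyration(square))
```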