Machine vision has demonstrated its usefulness in the livestock industry for improving welfare in areas such as lameness detection and body condition scoring in dairy cattle. In this article, we present promising results from applying state-of-the-art object detection and classification techniques to insects, specifically the Black Soldier Fly (BSF) and the domestic cricket, with a view to enabling automated processing for insect farming. We also present the low-cost "Insecto" Internet of Things (IoT) device, which provides environmental condition monitoring for temperature, humidity, CO, air pressure, and volatile organic compound levels, together with high-resolution image capture. We show that we are able to accurately count BSF larvae and measure their size, and to classify the sex of domestic crickets by detecting the presence of the ovipositor. These early results point to future work on automating the selection of desirable phenotypes for subsequent generations and on providing early alerts should environmental conditions deviate from desired values.
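The abstract describes counting BSF larvae and measuring their size from detector output. As a minimal sketch of that post-processing step (the paper's detector and camera calibration are not given here, so the bounding-box input, the `PIXELS_PER_MM` constant, and all names below are illustrative assumptions), per-larva size can be estimated from axis-aligned bounding boxes by treating the longer box side as body length:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Assumed camera calibration constant (hypothetical value):
PIXELS_PER_MM = 12.0

@dataclass
class LarvaMeasurement:
    length_mm: float
    width_mm: float

def measure_larvae(
    boxes: List[Tuple[float, float, float, float]]
) -> List[LarvaMeasurement]:
    """Convert detector bounding boxes (x1, y1, x2, y2) in pixels into
    per-larva size estimates. The longer box side is taken as body
    length, the shorter as width; the list length is the larva count."""
    measurements = []
    for x1, y1, x2, y2 in boxes:
        w_px, h_px = abs(x2 - x1), abs(y2 - y1)
        length_px, width_px = max(w_px, h_px), min(w_px, h_px)
        measurements.append(
            LarvaMeasurement(length_px / PIXELS_PER_MM,
                             width_px / PIXELS_PER_MM)
        )
    return measurements

# Example: two detections, one horizontal and one vertical.
boxes = [(10, 10, 190, 34), (50, 60, 74, 230)]
sizes = measure_larvae(boxes)
print(len(sizes))                     # larva count: 2
print(round(sizes[0].length_mm, 1))  # 15.0
```

Axis-aligned boxes overestimate the length of larvae lying diagonally; a rotated bounding box or segmentation mask would give a tighter estimate, but the counting logic is unchanged.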


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8886630
DOI: http://dx.doi.org/10.3389/fvets.2022.835529
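The abstract also mentions providing early alerts should environmental conditions deviate from desired values. A minimal sketch of that idea follows; the sensor names and desired ranges are illustrative assumptions, not taken from the Insecto device:

```python
from typing import Dict, List, Tuple

# Hypothetical desired ranges per sensor (lower bound, upper bound):
DESIRED_RANGES: Dict[str, Tuple[float, float]] = {
    "temperature_c": (26.0, 32.0),
    "humidity_pct": (60.0, 75.0),
    "pressure_hpa": (990.0, 1030.0),
}

def check_readings(readings: Dict[str, float]) -> List[Tuple[str, float]]:
    """Return (sensor, value) pairs that fall outside their desired range;
    sensors without a configured range never trigger an alert."""
    alerts = []
    for sensor, value in readings.items():
        lo, hi = DESIRED_RANGES.get(sensor, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            alerts.append((sensor, value))
    return alerts

# Example: temperature is above range, humidity is within range.
alerts = check_readings({"temperature_c": 35.2, "humidity_pct": 68.0})
print(alerts)  # [('temperature_c', 35.2)]
```

In a deployment, such a check would run on each sensor poll and forward any non-empty alert list to the farmer, e.g. over MQTT or a push notification.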

