Introduction: Urinary symptoms are among the most common reasons for women to consult their general practitioner. The urinary dipstick test is a cornerstone of diagnosing urinary tract infections (UTIs), yet traditional visual interpretation is subject to variability. Automated devices for dipstick urinalysis are routinely used as alternatives, but the evidence regarding their accuracy remains limited. We therefore aimed to compare concordance between visual and automated urinary dipstick interpretation and to determine their test characteristics for predicting bacteriuria.
Material and Methods: We conducted a prospective validation study of urine samples from adult patients in general practice that were sent to the Maastricht Medical Centre+ for urinary culture. A urinary dipstick test was performed on each sample and interpreted both visually and automatically. We calculated Cohen's κ and percentage agreement, and used 2 × 2 tables to derive test characteristics.
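For readers who want to see how these statistics are computed, the sketch below implements Cohen's κ and the sensitivity/specificity calculations from a 2 × 2 table in Python. It is an illustrative sketch of the standard formulas, not the authors' code, and all counts in it are placeholders rather than study data.

```python
# Minimal sketch of the agreement and test-characteristic calculations
# described above. All counts below are placeholders, NOT study data.

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa from a 2 x 2 agreement table:
    a = both raters positive, b = rater 1 positive / rater 2 negative,
    c = rater 1 negative / rater 2 positive, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Expected chance agreement from the marginal totals.
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

def test_characteristics(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Sensitivity and specificity against the gold standard (culture)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Placeholder counts for illustration only.
print(f"kappa = {cohens_kappa(40, 5, 3, 52):.2f}")
sens, spec = test_characteristics(80, 30, 10, 40)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```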
Results: We included 302 urine samples. Visual and automated analysis showed almost perfect agreement for nitrite (κ = 0.82) and leukocyte esterase (κ = 0.86), but moderate agreement for erythrocytes (κ = 0.51). Interpretation of clinically relevant samples (nitrite and/or leukocyte esterase positive) showed almost perfect agreement (κ = 0.88). With urinary culture as the gold standard, the two methods showed similar test characteristics, with sensitivities of 0.92 and 0.91 and specificities of 0.37 and 0.41 for visual and automated interpretation, respectively.
Conclusion: Automated and visual dipstick analysis show near-perfect agreement and perform similarly in predicting bacteriuria. However, automated analysis requires maintenance, and measurement errors can occasionally occur.
DOI: http://dx.doi.org/10.1080/02813432.2024.2392776
Traffic Inj Prev
January 2025
National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Centre, Beijing, China.
Objective: Attention is the foundation of situation awareness, and low situation awareness can degrade driving performance, which can be dangerous. The goal of this study is to investigate how different types of pre-takeover tasks, engaging cognitive, visual, and physical resources, as well as individual attentional function, affect drivers' attention restoration in conditionally automated driving.
Multimed Man Cardiothorac Surg
January 2025
New Cross Hospital, Royal Wolverhampton NHS Trust, Wolverhampton, United Kingdom.
Robotic-assisted thoracic surgery has become increasingly utilized in recent years. Complex lung cancer resection surgery can be performed using a robotic approach, which facilitates 3-dimensional visualization of structures, enhanced manipulation of tissues, and precise movements.
Plant Methods
January 2025
School of Electronic and Information Engineering, Liaoning Technical University, Huludao, 125105, China.
Apricot trees are a critical agricultural resource, yet conventional methods for detecting their pests and diseases are notably labor-intensive. Many conditions affecting apricot trees manifest distinct visual symptoms that are well suited to precise identification and classification via deep learning techniques.
Int J Cardiovasc Imaging
January 2025
Novo Nordisk Foundation Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
The initial evaluation of stenosis during coronary angiography is typically performed by visual assessment. Visual assessment has limited accuracy compared to fractional flow reserve and quantitative coronary angiography, which are more time-consuming and costly. Applying deep learning might yield a faster and more accurate stenosis assessment.
Meat Sci
December 2024
Scotland's Rural College, West Mains Road, UK.
Three-dimensional (3D) measurements extracted from beef carcass images were used to predict the weight of four saleable meat yield (SMY) traits (total SMY and the SMY of the forequarter, flank, and hindquarter) and four primal cuts (sirloin, ribeye, topside, and rump). Data were collected at two UK abattoirs using time-of-flight cameras and manual bone-out methods. Predictions were made for 484 carcasses using multiple linear regression (MLR) or machine learning (ML) techniques.
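As an illustration of the multiple linear regression approach named above (not the study's actual pipeline), the following sketch fits an MLR model to synthetic placeholder features standing in for 3D carcass measurements; the feature count, coefficients, and all values are assumptions for demonstration only.

```python
# Illustrative only: multiple linear regression of a yield trait on
# 3D carcass measurements. Features and targets are synthetic
# placeholders, not the abattoir data used in the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# 484 rows to mirror the study's carcass count; 5 hypothetical
# 3D measurements per carcass, values drawn at random.
X = rng.normal(size=(484, 5))
# Synthetic yield values from arbitrary coefficients plus noise.
y = X @ np.array([2.0, 1.5, 0.5, 3.0, 1.0]) + rng.normal(scale=0.5, size=484)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out carcasses: {model.score(X_test, y_test):.2f}")
```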