Artificial intelligence (AI) and machine learning (ML) aim to mimic human intelligence and enhance decision-making processes across various fields. A key determinant of an ML model's performance is the ratio between the training and testing datasets. This research investigates the impact of varying train-test split ratios on machine learning model performance and generalization capability using the BraTS 2013 dataset. Logistic regression, random forest, k-nearest neighbors, and support vector machine models were trained with split ratios ranging from 60:40 to 95:5. The findings reveal significant variations in accuracy across these ratios, emphasizing the need to strike a balance that avoids overfitting or underfitting. The study underscores the importance of selecting an optimal train-test split ratio with trade-offs such as model performance metrics, statistical measures, and resource constraints in mind. Ultimately, these insights contribute to a deeper understanding of how ratio selection affects the effectiveness and reliability of machine learning applications across diverse fields.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11419616 | PMC |
| http://dx.doi.org/10.7717/peerj-cs.2245 | DOI Listing |
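As a rough illustration of the protocol the abstract describes (training logistic regression, random forest, k-nearest neighbors, and support vector machine classifiers at several train-test split ratios and comparing their accuracies), the sketch below uses scikit-learn on synthetic placeholder data. It is not the authors' pipeline; the BraTS 2013 features, preprocessing, and evaluation details are not reproduced here.

```python
# Minimal sketch of comparing classifiers across train-test split ratios.
# Not the authors' pipeline: the data here are synthetic placeholders,
# whereas the study used features derived from the BraTS 2013 dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
    "svm": SVC(),
}

# Train fractions spanning the 60:40 to 95:5 range examined in the study.
for train_frac in (0.60, 0.70, 0.80, 0.90, 0.95):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=0
    )
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        print(f"{train_frac:.0%} train | {name}: accuracy = {acc:.3f}")
```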
Chaos
January 2025
AIMdyn, Inc., Santa Barbara, California 93101, USA.
Koopman operator theory has found significant success in learning models of complex, real-world dynamical systems, enabling prediction and control. The greater interpretability and lower computational costs of these models, compared to traditional machine learning methodologies, make Koopman learning an especially appealing approach. Despite this, little work has been performed on endowing Koopman learning with the ability to leverage its own failures.
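For readers unfamiliar with data-driven Koopman modeling, the sketch below shows one common finite-dimensional approximation: a least-squares fit of a linear operator to state snapshots, in the spirit of dynamic mode decomposition. It is a generic illustration only; the article's own learning method is not described in the excerpt above.

```python
# Minimal sketch of a DMD-style Koopman approximation: fit a linear
# operator A with x_{k+1} ~ A x_k from snapshot data. Generic illustration
# only; not the method used in the article.
import numpy as np

def fit_linear_koopman(snapshots: np.ndarray) -> np.ndarray:
    """Least-squares fit of A such that X' ~ A X, with state snapshots
    stacked as columns of `snapshots` (shape: n_states x n_times)."""
    X, X_next = snapshots[:, :-1], snapshots[:, 1:]
    # A = X' X^+ via the Moore-Penrose pseudoinverse.
    return X_next @ np.linalg.pinv(X)

# Toy example: recover a damped rotation from its trajectory data alone.
theta, decay = 0.1, 0.99
A_true = decay * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
states = [np.array([1.0, 0.0])]
for _ in range(200):
    states.append(A_true @ states[-1])

A_est = fit_linear_koopman(np.column_stack(states))
print("max abs error in recovered operator:", np.abs(A_est - A_true).max())
```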
JAMA Netw Open
January 2025
National Center for Advancing Translational Sciences, National Institutes of Health, Bethesda, Maryland.
Importance: Digital health in biomedical research and its expanding list of potential clinical applications are rapidly evolving. A combination of new digital health technologies (DHTs), novel uses of existing DHTs through artificial intelligence- and machine learning-based algorithms, and improved integration and analysis of data from multiple sources has enabled broader use and delivery of these tools for research and health care purposes. The aim of this study was to assess the growth and overall trajectory of DHT funding through a National Institutes of Health (NIH)-wide grant portfolio analysis.
Jpn J Radiol
January 2025
Artificial Intelligence and Translational Imaging (ATI) Lab, Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece.
Objective: Calcific tendinopathy, predominantly affecting rotator cuff tendons, leads to significant pain and tendon degeneration. Although US-guided percutaneous irrigation (US-PICT) is an effective treatment for this condition, predicting a patient's response and long-term outcomes remains a challenge. This study introduces a novel radiomics-based model to forecast patient outcomes, addressing a gap in current predictive methodologies.
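As a hedged sketch of how such an outcome model is typically built, the snippet below fits a classifier to precomputed radiomics features. The file name, column names, and classifier choice are assumptions for illustration and are not taken from the article.

```python
# Hedged sketch of an outcome model trained on precomputed radiomics
# features. The CSV layout, column names, and classifier are assumptions;
# the article's actual pipeline is not detailed in the excerpt above.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per patient, radiomics feature columns plus
# a binary "responded" outcome column.
df = pd.read_csv("radiomics_features.csv")
X = df.drop(columns=["responded"])
y = df["responded"]

model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```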
Int Urol Nephrol
January 2025
Faculty of Medical Sciences, Pharmacology and Toxicology Department, University of Kragujevac, Kragujevac, Serbia.
Purposes: Intermediate-risk prostate cancer (IR PCa) is the most common risk group for localized prostate cancer. This study aimed to develop a machine learning (ML) model that utilizes biopsy predictors to estimate the probability of IR PCa and assess its performance compared to the traditional clinical model.
Methods: Between January 2017 and December 2022, patients with prostate-specific antigen (PSA) values of ≤ 20 ng/mL underwent transrectal ultrasonography-guided prostate biopsies.
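A minimal sketch of the kind of probability model described, assuming hypothetical biopsy-era predictors and a logistic regression baseline; the study's actual variables and ML model are not specified in the excerpt above.

```python
# Hedged sketch of estimating the probability of intermediate-risk prostate
# cancer from clinical predictors. The predictor names and data file are
# hypothetical placeholders, not the study's variables.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("biopsy_cohort.csv")           # hypothetical cohort table
predictors = ["psa", "age", "prostate_volume", "percent_positive_cores"]
X, y = df[predictors], df["intermediate_risk"]  # 1 = IR PCa, 0 = otherwise

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print("estimated IR PCa probabilities:", model.predict_proba(X)[:5, 1])
```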
Int J Comput Assist Radiol Surg
January 2025
Department of Radiology, University of Chicago, Chicago, IL, USA.
Purpose: Thyroid nodules are common, and ultrasound-based risk stratification using ACR's TIRADS classification is a key step in predicting nodule pathology. Determining thyroid nodule contours is necessary for the calculation of TIRADS scores and can also be used in the development of machine learning nodule diagnosis systems. This paper presents the development, validation, and multi-institutional independent testing of a machine learning system for the automatic segmentation of thyroid nodules on ultrasound.
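As a small, hedged illustration of why nodule contours matter downstream, the sketch below takes a binary segmentation mask (standing in for any model's output) and derives contour points and simple size measurements of the kind TIRADS assessment relies on. The segmentation system itself is not reproduced, and the pixel spacing is a placeholder.

```python
# Hedged sketch: from a binary nodule mask (any segmentation model's
# output) to contour points and basic size measurements. The toy mask and
# pixel spacing are placeholders; in practice the spacing comes from the
# ultrasound image metadata.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import find_contours, label, regionprops

# Toy binary mask standing in for a model's segmentation output.
mask = np.zeros((256, 256), dtype=np.uint8)
rr, cc = ellipse(128, 140, 40, 25)
mask[rr, cc] = 1

props = regionprops(label(mask))[0]
pixel_spacing_mm = 0.1  # placeholder; read from the image header in practice
print("max diameter (mm):", props.major_axis_length * pixel_spacing_mm)
print("area (mm^2):", props.area * pixel_spacing_mm ** 2)

contour = find_contours(mask, 0.5)[0]  # (row, col) boundary points
print("contour points:", len(contour))
```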