The effects of including the anisotropic E7 term in the dispersion energy, in addition to the leading E6 term, are examined by using the effective fragment potential (EFP) method on the S22 test set. In this study, the full anisotropic E7 term is computed, whereas the isotropic and spherical approximations are used for the E6 term. It is found that the E7 term is positive for hydrogen-bonded complexes and has a magnitude that can be as large as 50% of E6, giving rise to larger intermolecular distances than those obtained with E6 alone. The large positive value of E7 is analyzed for the hydrogen-bonded uracil dimer; it is found to originate from the large magnitude of the dynamic polarizability tensors as well as the proximity of the LMOs involved in hydrogen bonding. Conversely, E7 tends to be negative for dispersion-dominant complexes, and it has a very small magnitude for such complexes. The optimized geometries for these systems are therefore not greatly affected by the presence of the E7 term. For the mixed systems in the S22 test set, an intermediate behavior is observed. Overall, the E7 term is most important for systems with hydrogen bonding interactions and mixed systems. A full anisotropic treatment of the E6 term, as well as higher order terms, may need to be included to obtain more accurate interaction energies and geometries.
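For context, the E6 and E7 terms above are the leading contributions of the standard multipole expansion of the dispersion energy in inverse powers of the intermolecular distance R. A generic textbook form (not the paper's working EFP expressions, which are built from sums over pairs of localized molecular orbitals and their dynamic polarizabilities) is:

```latex
E_{\mathrm{disp}} \;\approx\; E_6 + E_7 + \cdots,
\qquad
E_6 = -\frac{C_6}{R^6},
\qquad
E_7 = -\frac{C_7}{R^7}
```

The even-order E6 term is always attractive for interacting fragments, while the odd-order E7 term depends on the relative orientation of the fragments and can take either sign, which is consistent with the abstract's finding that E7 is positive (repulsive) for hydrogen-bonded complexes and slightly negative for dispersion-dominated ones.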
DOI: http://dx.doi.org/10.1021/acs.jpca.8b04451
Eur Radiol Exp
January 2025
Computational Clinical Imaging Group (CCIG), Champalimaud Research, Champalimaud Foundation, Lisbon, Portugal.
Good practices in artificial intelligence (AI) model validation are key for achieving trustworthy AI. Within the cancer imaging domain, which attracts the attention of both clinical and technical AI enthusiasts, this work discusses current gaps in AI validation strategies, examining practices that are common or variable across technical groups (TGs) and clinical groups (CGs). The work is based on a set of structured questions encompassing several AI validation topics, addressed to professionals working in AI for medical imaging.
Trop Anim Health Prod
January 2025
Animal Science Department, Federal University of Paraná, Palotina, PR, 85950-000, Brazil.
This study aimed to evaluate the effect of autolyzed yeast (obtained from culture of Saccharomyces cerevisiae in sugarcane derivatives) supplementation on diet digestibility, feeding behavior, levels of blood metabolites associated with protein and energy metabolism, and performance of Dorper × Santa Ines lambs finished in feedlot. Twenty-four non-castrated male lambs with an average age of 4 months and a body weight (BW) of 19.49 ± 3.
Eur Radiol Exp
January 2025
St Vincent's University Hospital, Dublin, Ireland.
Background: The large language model ChatGPT can now accept image input with the GPT4-vision (GPT4V) version. We aimed to compare the performance of GPT4V to pretrained U-Net and vision transformer (ViT) models for the identification of the progression of multiple sclerosis (MS) on magnetic resonance imaging (MRI).
Methods: Paired coregistered MR images with and without progression were provided as input to ChatGPT4V in a zero-shot experiment to identify radiologic progression.
Osteoporos Int
January 2025
Academy for Engineering and Technology, Fudan University, Shanghai, China.
Unlabelled: This study utilized deep learning for bone mineral density (BMD) prediction and classification using biplanar X-ray radiography (BPX) images from Huashan Hospital Medical Checkup Center. Results showed high accuracy and strong correlation with quantitative computed tomography (QCT) results. The proposed models offer potential for screening patients at a high risk of osteoporosis and reducing unnecessary radiation and costs.
Eur Radiol
January 2025
Department of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea.
Objective: This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists.
Materials And Methods: For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network.