The radiative transfer equations are well known, but radiation parametrizations in atmospheric models are computationally expensive. A promising tool for accelerating parametrizations is the use of machine learning techniques. In this study, we develop a machine learning-based parametrization for the gaseous optical properties by training neural networks to emulate a modern radiation parametrization (RRTMGP). To minimize computational costs, we reduce the range of atmospheric conditions for which the neural networks are applicable and use machine-specific optimized BLAS functions to accelerate matrix computations. To generate training data, we use a set of randomly perturbed atmospheric profiles and calculate optical properties using RRTMGP. Predicted optical properties are highly accurate and the resulting radiative fluxes have average errors within 0.5 W m⁻² compared with RRTMGP. Our neural network-based gas optics parametrization is up to four times faster than RRTMGP, depending on the size of the neural networks. We further test the trade-off between speed and accuracy by training neural networks for the narrow range of atmospheric conditions of a single large-eddy simulation, such that smaller, and therefore faster, networks can achieve the desired accuracy. We conclude that our machine learning-based parametrization can speed up radiative transfer computations while retaining high accuracy. This article is part of the theme issue 'Machine learning for weather and climate modelling'.
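To illustrate the general approach described in the abstract, the sketch below shows a small feed-forward network that maps layer-wise atmospheric inputs (e.g. pressure, temperature, water vapour, ozone) to per-g-point optical depths, with the forward pass expressed as matrix multiplications so that each step dispatches to an optimized BLAS GEMM routine. This is not the authors' code: the input features, network sizes, activation choices and weight values are all illustrative assumptions, and the weights stand in for parameters that would be trained against RRTMGP output.

```python
# Minimal sketch (not the authors' implementation): emulating gas optical
# properties with a small MLP whose forward pass is built from BLAS-backed
# matrix multiplies, as suggested in the abstract. All sizes and values are
# illustrative assumptions.

import numpy as np

N_GPT = 256          # assumed number of spectral g-points in the emulated k-distribution
N_FEATURES = 4       # assumed inputs per layer: pressure, temperature, H2O, O3
HIDDEN = (64, 64)    # assumed hidden-layer widths; smaller nets trade accuracy for speed

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Random weights standing in for trained parameters (illustrative only)."""
    return rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out)), np.zeros(n_out)

# Two hidden layers plus a linear output layer.
W1, b1 = init_layer(N_FEATURES, HIDDEN[0])
W2, b2 = init_layer(HIDDEN[0], HIDDEN[1])
W3, b3 = init_layer(HIDDEN[1], N_GPT)

def predict_optical_depth(x):
    """Forward pass for a batch of atmospheric layers.

    x : array of shape (n_layers, N_FEATURES), e.g. normalized p, T, q_H2O, q_O3.
    Returns an array of shape (n_layers, N_GPT) with (illustrative) optical depths.
    Each matrix product dispatches to the machine-optimized BLAS GEMM routine,
    which is where the speed-up over a conventional lookup-table scheme comes from.
    """
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer 1
    h = np.maximum(h @ W2 + b2, 0.0)   # ReLU hidden layer 2
    return np.exp(h @ W3 + b3)         # exponent keeps optical depth positive

# Example: 60 model layers with random (already normalized) inputs.
batch = rng.normal(size=(60, N_FEATURES))
tau = predict_optical_depth(batch)
print(tau.shape)   # (60, 256)
```

The size of the hidden layers is the main tuning knob hinted at in the abstract: restricting the network to a narrow range of atmospheric conditions (for example, those of a single large-eddy simulation) allows smaller matrices and hence faster GEMM calls for the same target accuracy.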

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7898130
DOI: http://dx.doi.org/10.1098/rsta.2020.0095


