AI Article Synopsis

  • Measuring the 3D chemical distribution in nanoscale materials has long been challenging because inelastic scattering events are rare, so detecting them demands high beam exposure that can damage samples.
  • High-resolution 3D chemical imaging was achieved at or below one-nanometer resolution in several nanomaterials using a method called fused multi-modal electron tomography.
  • This technique reduces radiation exposure by up to 99% by combining elastic and inelastic signals, enabling accurate chemical mapping in complex materials.

Article Abstract

Measuring the three-dimensional (3D) distribution of chemistry in nanoscale matter is a longstanding challenge for metrological science. The inelastic scattering events required for 3D chemical imaging are too rare, requiring high beam exposure that destroys the specimen before an experiment is completed. Even larger doses are required to achieve high resolution. Thus, chemical mapping in 3D has been unachievable except at lower resolution with the most radiation-hard materials. Here, high-resolution 3D chemical imaging is achieved near or below one-nanometer resolution in an Au-FeO metamaterial within an organic ligand matrix, CoO-MnO core-shell nanocrystals, and ZnS-CuS nanomaterial using fused multi-modal electron tomography. Multi-modal data fusion enables high-resolution chemical tomography often with 99% less dose by linking information encoded within both elastic (HAADF) and inelastic (EDX/EELS) signals. We thus demonstrate that sub-nanometer 3D resolution of chemistry is measurable for a broad class of geometrically and compositionally complex materials.
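The core idea of the fusion described in the abstract is that the high-SNR elastic (HAADF) signal constrains a weighted sum of the element maps, while the noisy inelastic (EDX/EELS) signals identify which element is which. A minimal 1D toy sketch of this coupling is shown below; the phantom, Z-contrast weights, noise levels, and the simple least-squares HAADF term are all illustrative assumptions, not the paper's actual cost function or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 1D "chemical maps" for two elements (hypothetical toy phantom)
n = 64
x_true = np.zeros((2, n))
x_true[0, 10:30] = 1.0   # element A
x_true[1, 25:50] = 0.5   # element B

# Simulated measurements:
#  - HAADF-like elastic signal: high SNR, but only sees a Z-weighted sum
#  - EDX/EELS-like inelastic maps: element-specific, but very noisy (low dose)
z_weights = np.array([1.0, 0.6])            # assumed Z-contrast weights
b_haadf = z_weights @ x_true + 0.01 * rng.standard_normal(n)
b_chem = x_true + 0.5 * rng.standard_normal((2, n))

# Fused reconstruction: minimize
#   ||z . x - b_haadf||^2 + lam * ||x - b_chem||^2
# by projected gradient descent, enforcing non-negativity of the maps.
lam, step = 0.1, 0.05
x = np.clip(b_chem.copy(), 0, None)
for _ in range(500):
    grad = np.outer(z_weights, z_weights @ x - b_haadf) + lam * (x - b_chem)
    x = np.clip(x - step * grad, 0, None)

err_raw = np.mean((b_chem - x_true) ** 2)
err_fused = np.mean((x - x_true) ** 2)
print(f"MSE raw chemical maps:    {err_raw:.3f}")
print(f"MSE fused reconstruction: {err_fused:.3f}")
```

Even in this toy, the fused estimate has a lower error than the raw chemical maps, because the low-noise elastic channel corrects the component of the noise that projects onto the Z-weighted sum; this is the mechanism behind the dose reduction claimed in the abstract.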


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11053043 (PMC)
http://dx.doi.org/10.1038/s41467-024-47558-0 (DOI Listing)

Publication Analysis

Top Keywords

fused multi-modal (8)
multi-modal electron (8)
electron tomography (8)
chemical imaging (8)
high-resolution chemical (8)
resolution (5)
imaging chemistry (4)
chemistry resolution (4)
resolution fused (4)
tomography measuring (4)

Similar Publications

MCBERT: A multi-modal framework for the diagnosis of autism spectrum disorder.

Biol Psychol

December 2024

Big Data Analytics and Web Intelligence Laboratory, Department of Computer Science & Engineering, Delhi Technological University, New Delhi, India. Electronic address:

Within the domain of neurodevelopmental disorders, autism spectrum disorder (ASD) emerges as a distinctive neurological condition characterized by multifaceted challenges. The delayed identification of ASD poses a considerable hurdle in effectively managing its impact and mitigating its severity. Addressing these complexities requires a nuanced understanding of data modalities and the underlying patterns.


Robust multi-modal fusion architecture for medical data with knowledge distillation.

Comput Methods Programs Biomed

December 2024

School of Biomedical Engineering, Capital Medical University, No.10, Xitoutiao, You An Men, Fengtai District, Beijing 100069, China; Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, No.10, Xitoutiao, You An Men, Fengtai District, Beijing 100069, China. Electronic address:

Background: The fusion of multi-modal data has been shown to significantly enhance the performance of deep learning models, particularly on medical data. However, missing modalities are common in medical data due to patient specificity, which poses a substantial challenge to the application of these models.

Objective: This study aimed to develop a novel and efficient multi-modal fusion framework for medical datasets that maintains consistent performance, even in the absence of one or more modalities.
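The standard way to keep a fusion model usable when a modality is missing is knowledge distillation: a teacher trained with all modalities supervises a student that sees only the available ones, typically via a temperature-softened KL divergence between their output distributions. The abstract does not give the framework's loss, so the sketch below is a generic distillation term with hypothetical logits, not the authors' method.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits: teacher sees all modalities, student sees a subset
teacher_logits = np.array([[2.0, 0.5, -1.0]])
student_logits = np.array([[1.2, 0.8, -0.5]])

T = 4.0  # higher temperature exposes the teacher's "dark knowledge"
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# Distillation loss: KL(teacher || student), scaled by T^2 as is conventional
kd_loss = (T ** 2) * np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
print(f"KD loss: {kd_loss:.4f}")
```

In training, this term would be minimized alongside the ordinary task loss on ground-truth labels, so the student learns to mimic the full-modality teacher while only ever consuming the modalities it will see at inference time.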


Proteins govern most biological functions essential for life, and achieving controllable protein editing has enabled great advances in probing natural systems, creating therapeutic conjugates, and generating novel protein constructs. Recently, machine learning-assisted protein editing (MLPE) has shown promise in accelerating optimization cycles and reducing experimental workloads. However, current methods struggle with the vast combinatorial space of potential protein edits and cannot explicitly conduct protein editing using biotext instructions, limiting their interactivity with human feedback.


The multi-modal knowledge graph completion (MMKGC) task aims to automatically mine missing factual knowledge from existing multi-modal knowledge graphs (MMKGs), which is crucial for advancing cross-modal learning and reasoning. However, few methods consider the adverse effects caused by different missing modal information in the model learning process. To address these challenges, we innovatively propose a Modal Equilibrium Relational Graph framework.


Named entity recognition (NER) and relation extraction (RE) are two important technologies employed in knowledge extraction for constructing knowledge graphs. Uni-modal NER and RE approaches solely rely on text information for knowledge extraction, leading to various limitations, such as suboptimal performance and low efficiency in recognizing polysemous words. With the development of multi-modal learning, multi-modal named entity recognition (MNER) and multi-modal relation extraction (MRE) have been introduced to improve recognition performance.

