Publications by authors named "Wei-Keng Liao"

Recent progress in deep learning has significantly impacted materials science, leading to accelerated material discovery and innovation. ElemNet, a deep neural network model that predicts formation energy from elemental compositions, exemplifies the application of deep learning techniques in this field. However, the "black-box" nature of deep learning models often raises concerns about their interpretability and reliability.

Traditionally, materials discovery has been driven more by evidence and intuition than by systematic design. However, the advent of "big data" and an exponential increase in computational power have reshaped the landscape. Today, we use simulations, artificial intelligence (AI), and machine learning (ML) to predict materials characteristics, which dramatically accelerates the discovery of novel materials.

Modern data mining techniques using machine learning (ML) and deep learning (DL) algorithms have been shown to excel in the regression-based task of materials property prediction using various materials representations. In an attempt to improve the predictive performance of deep neural network models, researchers have added more layers and developed new architectural components to create sophisticated deep models that aid the training process and improve the predictive ability of the final model. However, these modifications usually require substantial computational resources, further increasing the already long training time; this is often infeasible and limits their use for most researchers.

Modern machine learning (ML) and deep learning (DL) techniques using high-dimensional data representations have helped accelerate the materials discovery process by efficiently detecting hidden patterns in existing datasets and linking input representations to output properties for a better understanding of the scientific phenomena. While deep neural networks composed of fully connected layers have been widely used for materials property prediction, simply creating a deeper model with a large number of layers often runs into the vanishing gradient problem, which degrades performance and limits its usefulness. In this paper, we study and propose architectural principles for improving the performance of model training and inference under fixed parametric constraints.
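
One widely used architectural principle for keeping deep fully connected models trainable is to add identity skip connections, so that gradients can flow around stacked dense layers instead of vanishing through them. The sketch below is only a hedged illustration of that idea in PyTorch; the layer width, depth, and 86-dimensional composition input are illustrative assumptions, not the specific architecture proposed in the paper.

```python
# Minimal sketch (PyTorch): a fully connected regressor with identity skip
# connections, one common way to mitigate vanishing gradients in deep models.
# Layer sizes and depth are illustrative only.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)
        self.act = nn.ReLU()

    def forward(self, x):
        # Identity shortcut lets gradients bypass the two dense layers.
        return self.act(x + self.fc2(self.act(self.fc1(x))))

class DeepRegressor(nn.Module):
    def __init__(self, in_dim=86, width=256, depth=8):
        super().__init__()
        self.input = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.output = nn.Linear(width, 1)  # single scalar property

    def forward(self, x):
        return self.output(self.blocks(torch.relu(self.input(x))))
```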

The applications of artificial intelligence, machine learning, and deep learning techniques in the field of materials science are becoming increasingly common due to their promising abilities to extract and utilize data-driven information from available data and accelerate materials discovery and design for future applications. In an attempt to assist with this process, we deploy predictive models for multiple material properties, given the composition of the material. The deep learning models described here are built using a cross-property deep transfer learning technique, which leverages source models trained on large data sets to build target models on small data sets with different properties.
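
As a hedged illustration of what cross-property transfer learning can look like in practice, the PyTorch sketch below reuses the representation layers of a source model pre-trained on a large dataset and attaches a fresh output head that is fine-tuned on a small dataset for a different property. The network shape, checkpoint name, and training details are assumptions for illustration only.

```python
# Hedged sketch of cross-property transfer learning in PyTorch: a source
# model pre-trained on a large dataset for one property is reused for a
# different property by replacing its output head and fine-tuning on a
# small target dataset. All names and sizes are illustrative.
import torch
import torch.nn as nn

source_model = nn.Sequential(          # stands in for a pre-trained source network
    nn.Linear(86, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),                 # head trained on the source property
)
# source_model.load_state_dict(torch.load("source_property.pt"))  # hypothetical checkpoint

# Reuse the learned representation; attach a fresh head for the new property.
target_model = nn.Sequential(*list(source_model.children())[:-1], nn.Linear(256, 1))

optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute error, common for property regression

def fine_tune_step(x_small, y_small):
    optimizer.zero_grad()
    loss = loss_fn(target_model(x_small), y_small)
    loss.backward()
    optimizer.step()
    return loss.item()
```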

There are two broad modeling paradigms in scientific applications: forward and inverse. While forward modeling estimates the observations based on known causes, inverse modeling attempts to infer the causes given the observations. Inverse problems are usually both more critical and more difficult in scientific applications, as they seek causes that cannot be directly observed.
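
As a toy illustration of this distinction (not drawn from the paper), the sketch below defines a simple forward model mapping causes to observations and then solves the inverse problem by fitting the causes to noisy observations with a least-squares optimizer.

```python
# Toy illustration of forward vs. inverse modeling. The forward model maps
# known causes (parameters) to observations; the inverse problem recovers
# the causes from noisy observations by optimization.
import numpy as np
from scipy.optimize import minimize

def forward(params, t):
    # Forward model: exponential decay with amplitude and rate as the "causes".
    amplitude, rate = params
    return amplitude * np.exp(-rate * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
true_params = np.array([2.0, 0.7])
observations = forward(true_params, t) + 0.05 * rng.standard_normal(t.size)

# Inverse modeling: infer the parameters that best explain the observations.
result = minimize(lambda p: np.sum((forward(p, t) - observations) ** 2),
                  x0=np.array([1.0, 1.0]))
print("recovered causes:", result.x)  # should be close to (2.0, 0.7)
```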

While experiments and DFT computations have been the primary means for understanding the chemical and physical properties of crystalline materials, experiments are expensive, and DFT computations are time-consuming and show significant discrepancies against experiments. Currently, predictive modeling based on DFT computations has provided a rapid screening method for materials candidates for further DFT computations and experiments; however, such models inherit the large discrepancies from the DFT-based training data. Here, we demonstrate how AI can be leveraged together with DFT to compute materials properties more accurately than DFT itself, focusing on the critical materials science task of predicting the "formation energy of a material given its structure and composition".

There have been many efforts in the last decade in the health informatics community to develop systems that can automatically recognize and predict disclosures on social media. However, the majority of such efforts have focused on simple topic prediction or sentiment classification. In contrast, taboo disclosures on social media, which people are not comfortable discussing even with their friends, represent an abstract theme that depends on context and background.

Artificial intelligence (AI) and machine learning (ML) have been increasingly used in materials science to build predictive models and accelerate discovery. For selected properties, the availability of large databases has also facilitated the application of deep learning (DL) and transfer learning (TL). However, the unavailability of large datasets for the majority of properties prohibits the widespread application of DL/TL.

The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, due to their impressive ability to efficiently extract data-driven linkages from various input materials representations to their output properties. While the application of traditional ML techniques has become quite ubiquitous, there have been limited applications of more advanced deep learning (DL) techniques, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is tempting to use deeper neural networks to boost model performance, but in practice this leads to performance degradation due to the vanishing gradient problem.

The density and configurational changes of crystal dislocations during plastic deformation influence the mechanical properties of materials. These influences have become clearest in nanoscale experiments, in terms of strength, hardness and work hardening size effects in small volumes. The mechanical characterization of a model crystal may be cast as an inverse problem of deducing the defect population characteristics (density, correlations) in small volumes from the mechanical behavior.

The current predictive modeling techniques applied to Density Functional Theory (DFT) computations have helped accelerate the process of materials discovery by providing significantly faster methods to scan materials candidates, thereby reducing the search space for future DFT computations and experiments. However, in addition to prediction error against DFT-computed properties, such predictive models also inherit the DFT-computation discrepancies against experimentally measured properties. To address this challenge, we demonstrate that using deep transfer learning, existing large DFT-computational data sets (such as the Open Quantum Materials Database (OQMD)) can be leveraged together with other smaller DFT-computed data sets as well as available experimental observations to build robust prediction models.
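
One common way such transfer learning is realized, shown below as a hedged sketch rather than the paper's exact procedure, is to pre-train a network on a large DFT dataset such as OQMD and then adapt it to a small experimental dataset by freezing the early representation layers and re-training only the later ones. The architecture, layer split, and checkpoint name are illustrative assumptions.

```python
# Hedged sketch of one common transfer-learning variant: a model pre-trained
# on a large DFT dataset (e.g., OQMD-scale) is adapted to a small set of
# experimental measurements by freezing the early layers and re-training
# only the last ones.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(86, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),   # representation layers (pre-trained on DFT data)
    nn.Linear(512, 64), nn.ReLU(),
    nn.Linear(64, 1),                 # layers re-trained on experimental data
)
# model.load_state_dict(torch.load("oqmd_pretrained.pt"))  # hypothetical checkpoint

frozen, trainable = model[:4], model[4:]
for p in frozen.parameters():
    p.requires_grad = False           # keep the DFT-learned representation fixed

optimizer = torch.optim.Adam(trainable.parameters(), lr=1e-4)
```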

Organic solar cells are an inexpensive, flexible alternative to traditional silicon-based solar cells, but they are disadvantaged by low power conversion efficiency due to empirical design and complex manufacturing processes. The design process can be accelerated by generating a comprehensive set of potential candidates; however, this would require a laborious trial-and-error approach to modeling all possible polymer configurations.

Conventional machine learning approaches for predicting material properties from elemental compositions have emphasized the importance of leveraging domain knowledge when designing model inputs. Here, we demonstrate that a deep learning approach can bypass such manual, domain-knowledge-driven feature engineering and achieve much better results, even with only a few thousand training samples. We present the design and implementation of a deep neural network model referred to as ElemNet; it automatically captures the physical and chemical interactions and similarities between different elements, which allows it to predict materials properties with better accuracy and speed.
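
To make the "no manual feature engineering" point concrete, the hedged sketch below builds the kind of raw input an ElemNet-style model consumes: a plain vector of normalized elemental fractions parsed directly from a composition string. The element ordering, truncated element list, and parsing details are illustrative assumptions, not the published preprocessing code.

```python
# Hedged sketch of a raw composition input: a normalized element-fraction
# vector with no hand-engineered descriptors.
import re

ELEMENTS = ["H", "He", "Li", "Be", "B", "C", "N", "O", "F", "Ne",
            "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar", "K", "Ca"]  # truncated list for brevity

def composition_to_fractions(formula):
    """Parse a formula like 'Al2O3' into a normalized fraction vector."""
    counts = {}
    for symbol, amount in re.findall(r"([A-Z][a-z]?)([0-9]*\.?[0-9]*)", formula):
        counts[symbol] = counts.get(symbol, 0.0) + (float(amount) if amount else 1.0)
    total = sum(counts.values())
    return [counts.get(el, 0.0) / total for el in ELEMENTS]

print(composition_to_fractions("Al2O3"))  # Al -> 0.4, O -> 0.6, all other entries 0
```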

We present a deep learning approach to the indexing of electron backscatter diffraction (EBSD) patterns. We design and implement a deep convolutional neural network architecture to predict crystal orientation from the EBSD patterns. We also design a differentiable approximation to the disorientation function between the predicted crystal orientation and the ground truth; the model is trained with stochastic gradient descent to minimize this mean disorientation error.
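
A hedged sketch of such a differentiable orientation loss is shown below, using unit quaternions: the rotation angle between prediction and ground truth is computed with operations that stay differentiable under stochastic gradient descent. Crystal-symmetry operators, which the full disorientation function must account for, are deliberately omitted here for brevity.

```python
# Hedged sketch of a differentiable orientation loss in PyTorch: the rotation
# angle between predicted and ground-truth orientations represented as unit
# quaternions. Symmetry handling is omitted for brevity.
import torch

def misorientation_angle(q_pred, q_true, eps=1e-7):
    """Rotation angle (radians) between two batches of unit quaternions."""
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True)
    q_true = q_true / q_true.norm(dim=-1, keepdim=True)
    dot = (q_pred * q_true).sum(dim=-1).abs()          # |q1 . q2|, sign-invariant
    dot = dot.clamp(-1.0 + eps, 1.0 - eps)             # keep acos differentiable
    return 2.0 * torch.acos(dot)

def mean_disorientation_loss(q_pred, q_true):
    # Minimized with stochastic gradient descent, as described above.
    return misorientation_angle(q_pred, q_true).mean()
```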

Compared to Beowulf clusters and shared-memory machines, GPUs and FPGAs are emerging alternative architectures that provide massive parallelism and great computational capabilities. These architectures can be utilized to run compute-intensive algorithms to analyze ever-growing datasets and to provide scalability. In this paper, we present four implementations of the K-means data clustering algorithm for different high-performance computing platforms.
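
For reference, a minimal serial sketch of Lloyd's K-means algorithm is shown below in NumPy; the distance computation and centroid update are the steps that platform-specific implementations typically parallelize. It is a baseline illustration, not one of the four implementations described in the paper.

```python
# Minimal serial sketch of Lloyd's K-means in NumPy.
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid for every point (compute-intensive part).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its members.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels
```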

Background: Pairwise statistical significance has been recognized as a way to accurately identify related sequences, making it an important cornerstone procedure in numerous bioinformatics applications. However, it is both computationally and data intensive, which poses a significant challenge in terms of performance and scalability.

Results: We present a GPU implementation to accelerate pairwise statistical significance estimation of local sequence alignment using standard substitution matrices.
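
A hedged, serial sketch of the underlying procedure is shown below: align the real sequence pair, align many shuffled versions of one sequence to build a null score distribution, fit an extreme-value (Gumbel) distribution to it, and report the tail probability of the real score. A simple match/mismatch scheme stands in for the standard substitution matrices, and the shuffle count and scoring parameters are illustrative; this repeated scoring is the kind of work a GPU implementation would parallelize.

```python
# Hedged sketch of permutation-based pairwise statistical significance
# estimation for local sequence alignment.
import math
import random

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score with linear gap penalties."""
    rows, cols = len(a) + 1, len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, rows):
        curr = [0] * cols
        for j in range(1, cols):
            sub = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(0, sub, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

def pairwise_significance(seq1, seq2, shuffles=200, seed=0):
    rng = random.Random(seed)
    real_score = smith_waterman(seq1, seq2)
    null_scores = []
    for _ in range(shuffles):
        shuffled = list(seq2)
        rng.shuffle(shuffled)
        null_scores.append(smith_waterman(seq1, "".join(shuffled)))
    # Fit a Gumbel (extreme-value) distribution to the null scores by moments.
    mean = sum(null_scores) / len(null_scores)
    var = sum((s - mean) ** 2 for s in null_scores) / len(null_scores)
    beta = max(math.sqrt(6.0 * var) / math.pi, 1e-9)
    mu = mean - 0.5772156649 * beta
    # P(S >= real_score) under the fitted distribution.
    return 1.0 - math.exp(-math.exp(-(real_score - mu) / beta))
```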

Motivation: Recently, a number of programs have been proposed for mapping short reads to a reference genome. Many of them are heavily optimized for short-read mapping and hence are very efficient for shorter queries, but this makes them inefficient or inapplicable for reads longer than 200 bp. However, many sequencers are already generating longer reads, and more are expected to follow.
