IEEE Trans Cybern
November 2024
This work considers an extended flexible job-shop scheduling problem arising in a semiconductor manufacturing environment. To find high-quality solutions in a reasonable time, a learning-based genetic algorithm (LGA) that incorporates a parallel long short-term memory network-embedded autoencoder model is proposed. In it, a genetic algorithm serves as the main optimizer.
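The abstract does not spell out the LGA's operators or how the autoencoder is embedded. Purely as an illustration of the genetic-algorithm backbone such a method builds on, the sketch below evolves permutation-encoded schedules with order crossover, swap mutation, and truncation selection; the total-flow-time objective and job data are placeholder assumptions, not the paper's semiconductor scheduling model.

```python
# Illustrative sketch only: a generic permutation-based genetic algorithm,
# not the paper's LGA or its LSTM-autoencoder component.
import random

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the remaining slots in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in p1[a:b]]
    for i in list(range(0, a)) + list(range(b, n)):
        child[i] = fill.pop(0)
    return child

def swap_mutation(chrom, rate=0.2):
    """With probability `rate`, swap two randomly chosen positions."""
    chrom = chrom[:]
    if random.random() < rate:
        i, j = random.sample(range(len(chrom)), 2)
        chrom[i], chrom[j] = chrom[j], chrom[i]
    return chrom

def genetic_algorithm(objective, n_jobs, pop_size=50, generations=200):
    """Minimize `objective(permutation)` and return the best schedule found."""
    pop = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)            # lower objective value is fitter
        elite = pop[: pop_size // 2]       # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            children.append(swap_mutation(order_crossover(p1, p2)))
        pop = elite + children
    return min(pop, key=objective)

# Toy usage: minimize total flow time of 8 jobs on a single machine.
times = [random.randint(1, 9) for _ in range(8)]

def total_flow_time(perm):
    t = total = 0
    for j in perm:
        t += times[j]      # job j finishes at time t
        total += t
    return total

print(genetic_algorithm(total_flow_time, n_jobs=8))
```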
A dendritic neuron model (DNM) is a deep neural network model with a unique dendritic tree structure and activation function. Effective initialization of its model parameters is crucial for its learning performance. This work proposes a novel initialization method specifically designed to improve the performance of DNM in classifying high-dimensional data; the method is notable for its simplicity, speed, and straightforward implementation.
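The initialization scheme itself is not detailed in this snippet. For orientation, the sketch below implements a commonly cited DNM forward pass (sigmoid synapses, multiplicative dendritic branches, a summing membrane, and a sigmoid soma), with naive random initialization standing in for whatever scheme the paper proposes; the class name, steepness constant k, and soma threshold are illustrative assumptions.

```python
# Sketch of a typical dendritic neuron model forward pass. The random
# initialization below is a placeholder, not the paper's method.
import numpy as np

class DendriticNeuron:
    """Sigmoid synapses -> multiplicative branches -> summing membrane -> sigmoid soma."""

    def __init__(self, n_inputs, n_branches, k=5.0, seed=0):
        rng = np.random.default_rng(seed)
        # Placeholder random initialization: one weight/threshold pair per synapse.
        self.w = rng.normal(size=(n_branches, n_inputs))
        self.theta = rng.normal(size=(n_branches, n_inputs))
        self.k = k                      # sigmoid steepness (assumed value)

    def forward(self, x):
        x = np.asarray(x, dtype=float)
        synapse = 1.0 / (1.0 + np.exp(-self.k * (self.w * x - self.theta)))
        branch = synapse.prod(axis=1)   # each dendritic branch multiplies its synapses
        membrane = branch.sum()         # membrane layer sums the branches
        return 1.0 / (1.0 + np.exp(-self.k * (membrane - 0.5)))  # soma output in (0, 1)

# Toy usage: score one 4-dimensional sample.
print(DendriticNeuron(n_inputs=4, n_branches=3).forward([0.2, 0.9, 0.1, 0.7]))
```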
IEEE Trans Neural Netw Learn Syst
November 2024
To construct a strong classifier ensemble, base classifiers should be both accurate and diverse. However, there is no uniform standard for defining and measuring diversity. This work proposes learners' interpretability diversity (LID) to measure the diversity of interpretable machine learners.
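LID itself is not defined in this snippet. As a conventional point of reference for what an ensemble diversity measure computes, the sketch below implements the classic pairwise disagreement measure; it is not the paper's interpretability-based metric.

```python
# Classic pairwise disagreement measure for an ensemble -- shown only as a
# familiar baseline; the paper's LID metric is not reproduced here.
import numpy as np

def disagreement(preds_a, preds_b):
    """Fraction of samples on which two base classifiers give different labels."""
    return float(np.mean(np.asarray(preds_a) != np.asarray(preds_b)))

def mean_pairwise_diversity(all_preds):
    """Average pairwise disagreement across an ensemble's predictions."""
    m = len(all_preds)
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    return sum(disagreement(all_preds[i], all_preds[j]) for i, j in pairs) / len(pairs)

# Toy usage: three base classifiers' labels on five samples.
print(mean_pairwise_diversity([[0, 1, 1, 0, 1],
                               [0, 1, 0, 0, 1],
                               [1, 1, 1, 0, 0]]))
```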
Micromachines (Basel)
October 2022
Chemical functionalization of the carbon support is a promising way to enhance the performance of Pt catalysts. In this study, Pt/C catalysts grafted with various amounts of phenylsulfonic acid groups were prepared under mild conditions. The influence of the sulfonic acid groups on the physicochemical characteristics and electrochemical activities of the modified catalysts was studied using X-ray diffraction, X-ray photoelectron spectroscopy, transmission electron microscopy, and cyclic voltammetry (CV).
IEEE Trans Neural Netw Learn Syst
September 2022
This work proposes a decision tree (DT)-based method for initializing a dendritic neuron model (DNM). Neural networks keep growing larger and thus consume ever more computing resources, which creates a strong need to prune neurons that contribute little to their network's output.
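How the decision tree seeds the DNM is not described in this snippet. The sketch below shows one hypothetical mapping, reusing a fitted tree's split thresholds as initial synaptic thresholds; the scikit-learn dataset, tree depth, and averaging rule are illustrative assumptions, not the paper's procedure.

```python
# Hypothetical illustration only: seeding per-feature synaptic thresholds from
# the split points of a fitted decision tree.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Collect the threshold each internal node applies to its feature
# (leaves carry a negative feature index in sklearn's tree_ arrays).
splits = {}
for f, t in zip(tree.tree_.feature, tree.tree_.threshold):
    if f >= 0:
        splits.setdefault(int(f), []).append(float(t))

# Assumed seeding rule: each feature's threshold starts at the mean of the
# tree's split points for that feature; unused features fall back to the median.
theta_init = np.array([
    np.mean(splits[f]) if f in splits else float(np.median(X[:, f]))
    for f in range(X.shape[1])
])
print(theta_init)
```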