Spectrochim Acta A Mol Biomol Spectrosc
April 2022
Near-Infrared Spectroscopy (NIRS) has been shown to be helpful in the study of rice, tea, cocoa, and other foods because of its versatility and the reduced sample treatment it requires. However, the high complexity of the data produced by NIR sensors makes pre-treatments necessary, such as feature selection techniques that produce compact profiles. Both supervised and unsupervised techniques have been tested; they create different subsets of features for classification and thus affect the performance of the classifiers built on such compact profiles.
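As a rough illustration of this kind of pre-treatment, the following sketch builds a synthetic NIR-like data matrix and compares a supervised selector with an unsupervised projection as compact profiles; the data, the number of retained features, and the scikit-learn estimators are illustrative assumptions, not the pipeline evaluated in the paper.

# Hypothetical sketch: supervised vs. unsupervised feature reduction on
# synthetic "spectra"; sizes and estimators are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 600          # assumed spectral resolution
X = rng.normal(size=(n_samples, n_wavelengths))
y = rng.integers(0, 3, size=n_samples)       # e.g. three product origins
X[y == 1, 100:120] += 0.8                    # class-dependent absorption bands
X[y == 2, 400:430] += 0.8

# In a real study the selector would sit inside a Pipeline to avoid leakage;
# here the two compact profiles are built once for brevity.
compact_profiles = {
    "supervised (ANOVA SelectKBest)": SelectKBest(f_classif, k=30).fit_transform(X, y),
    "unsupervised (PCA, 30 components)": PCA(n_components=30).fit_transform(X),
}
for name, Xc in compact_profiles.items():
    acc = cross_val_score(LogisticRegression(max_iter=1000), Xc, y, cv=5).mean()
    print(f"{name}: {Xc.shape[1]} features, CV accuracy {acc:.2f}")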
The reuse of business processes (BPs) requires that the similarities between them be suitably identified. Various approaches have been introduced to address this problem, but many of them suffer from a high computational cost and a low level of automation. This paper presents a clustering algorithm that groups business processes retrieved by a multimodal search system (based on textual and structural information).
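The general idea can be sketched as follows, where each process is reduced to a set of activity labels (textual view) and a set of control-flow edges (structural view), a weighted Jaccard similarity combines the two views, and hierarchical clustering groups the processes; the similarity measures, weights, and threshold are assumptions for illustration, not the paper's algorithm.

# Hypothetical sketch: cluster business processes by a weighted mix of
# textual and structural similarity; measures, weights, and threshold assumed.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

processes = {  # toy BPs: activity labels (textual) and control-flow edges (structural)
    "order_to_cash": ({"receive order", "check stock", "ship goods", "send invoice"},
                      {("receive order", "check stock"), ("check stock", "ship goods"),
                       ("ship goods", "send invoice")}),
    "online_sale":   ({"receive order", "check stock", "ship goods", "send receipt"},
                      {("receive order", "check stock"), ("check stock", "ship goods"),
                       ("ship goods", "send receipt")}),
    "hire_employee": ({"post vacancy", "screen cv", "interview", "send offer"},
                      {("post vacancy", "screen cv"), ("screen cv", "interview"),
                       ("interview", "send offer")}),
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

names = list(processes)
dist = np.zeros((len(names), len(names)))
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        (lab_i, edg_i), (lab_j, edg_j) = processes[names[i]], processes[names[j]]
        sim = 0.5 * jaccard(lab_i, lab_j) + 0.5 * jaccard(edg_i, edg_j)
        dist[i, j] = dist[j, i] = 1.0 - sim

labels = fcluster(linkage(squareform(dist), method="average"), t=0.6, criterion="distance")
print(dict(zip(names, labels)))   # the two sales processes end up in the same cluster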
Software test suites based on the concept of interaction testing are very useful for testing software components in an economical way. Test suites of this kind may be created using mathematical objects called covering arrays. A covering array, denoted by CA(N; t, k, v), is an N × k array over ℤ_v = {0, 1, …, v − 1} with the property that every N × t sub-array covers all vᵗ t-tuples over ℤ_v at least once.
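Following this definition, a small checker (names and structure are illustrative) can verify the covering-array property by examining every N × t sub-array.

# Verify the CA(N; t, k, v) property: every N x t sub-array must cover all
# v**t possible t-tuples at least once.
from itertools import combinations, product

def is_covering_array(array, t, v):
    k = len(array[0])
    required = set(product(range(v), repeat=t))        # all v**t t-tuples
    for cols in combinations(range(k), t):             # every choice of t columns
        seen = {tuple(row[c] for c in cols) for row in array}
        if not required <= seen:
            return False
    return True

# CA(4; 2, 3, 2): 4 rows, 3 binary factors; every pair of columns covers 00, 01, 10, 11.
ca = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
print(is_covering_array(ca, t=2, v=2))   # True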
Discrete Math Algorithms Appl
June 2017
For σ ∈ ℤ⁺, define Σ_σ as the set of integers {0, 1, …, σ − 1}. Given an integer n and a string of length at least n over Σ_σ, we count the number of times that each one of the σⁿ distinct strings of length n over Σ_σ occurs as a subsequence of the given string. Our algorithm makes only one scan of the string, with time and space requirements determined by n, σ, and the string's length.
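As a rough illustration of the counting task (not necessarily the algorithm of the paper), the sketch below keeps occurrence counts for all strings of length up to n and updates them in a single left-to-right scan; the function name and the tuple representation are assumptions.

# Hypothetical one-pass sketch: count, for every length-n string over
# {0, ..., sigma-1}, its occurrences as a subsequence of t.
from itertools import product

def count_subsequences(t, n, sigma):
    alphabet = range(sigma)
    # counts[s] = occurrences of s as a subsequence of the prefix scanned so far,
    # for every string s of length 0..n (represented as a tuple of symbols).
    counts = {(): 1}
    for length in range(1, n + 1):
        for s in product(alphabet, repeat=length):
            counts[s] = 0
    for c in t:                               # one scan of t
        # Update longer strings first so every step reads pre-update counts.
        for length in range(n, 0, -1):
            for s in product(alphabet, repeat=length - 1):
                counts[s + (c,)] += counts[s]
    return {s: counts[s] for s in product(alphabet, repeat=n)}

# Occurrences of all length-2 binary strings as subsequences of 0101:
print(count_subsequences([0, 1, 0, 1], n=2, sigma=2))
# {(0, 0): 1, (0, 1): 3, (1, 0): 1, (1, 1): 1}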
J Res Natl Inst Stand Technol
March 2016
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space.
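A simplified sketch of this overall strategy alternates conjugate-gradient refinement with temperature-scaled random perturbations of the weights; SciPy's nonlinear CG stands in for Møller's scaled conjugate gradient, the perturbation step is only a crude stand-in for SAGRAD's simulated annealing, and the network, data, and schedule are illustrative assumptions.

# Hypothetical sketch: CG refinement plus annealing-style restarts for a tiny
# one-hidden-layer classifier; not SAGRAD's actual implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # toy two-class problem (XOR-like)
y = (X[:, 0] * X[:, 1] > 0).astype(float)

H = 8                                             # assumed hidden-layer size
sizes = [(H, 2), (H,), (1, H), (1,)]              # W1, b1, W2, b2

def unpack(w):
    parts, i = [], 0
    for shape in sizes:
        n = int(np.prod(shape))
        parts.append(w[i:i + n].reshape(shape))
        i += n
    return parts

def loss_and_grad(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)                    # forward pass
    p = 1.0 / (1.0 + np.exp(-(h @ W2.T + b2)[:, 0]))
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    d_z = (p - y)[:, None] / len(y)               # backward pass (sigmoid + cross-entropy)
    gW2, gb2 = d_z.T @ h, d_z.sum(axis=0)
    d_h = d_z @ W2 * (1 - h ** 2)
    gW1, gb1 = d_h.T @ X, d_h.sum(axis=0)
    return loss, np.concatenate([g.ravel() for g in (gW1, gb1, gW2, gb2)])

n_weights = sum(int(np.prod(s)) for s in sizes)
w = rng.normal(scale=0.5, size=n_weights)
best_w, best_loss = w, loss_and_grad(w)[0]
temperature = 1.0
for _ in range(5):
    # conjugate-gradient refinement (nonlinear CG in place of Moller's SCG)
    res = minimize(loss_and_grad, w, jac=True, method="CG", options={"maxiter": 200})
    if res.fun < best_loss:
        best_w, best_loss = res.x, res.fun
    # annealing-style perturbation to escape flat regions or local minima
    w = best_w + rng.normal(scale=temperature, size=n_weights)
    temperature *= 0.5
print(f"best cross-entropy loss: {best_loss:.4f}")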