Publications by authors named "MN Vrahatis"

Evolutionary music composition is a prominent technique for automatic music generation. The immense adaptation potential of evolutionary algorithms has allowed the realisation of systems that automatically produce music through feature-based and interactive composition approaches. Feature-based composition employs qualitatively descriptive music features as fitness landmarks.
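
A minimal sketch of what feature-based fitness can look like inside an evolutionary loop; the feature set, target values, and melody encoding below are hypothetical illustrations, not the paper's actual system.

```python
# Illustrative sketch: evolve melodies toward target feature values ("landmarks").
# The features and targets here are made up for demonstration.
import random

TARGET = {"pitch_range": 12.0, "note_density": 0.5}  # assumed feature landmarks

def extract_features(melody):
    # Toy features over a melody encoded as a list of MIDI pitches.
    pitch_range = max(melody) - min(melody)
    note_density = sum(1 for p in melody if p > 60) / len(melody)
    return {"pitch_range": pitch_range, "note_density": note_density}

def fitness(melody):
    # Negative distance of the extracted features from the target landmarks.
    feats = extract_features(melody)
    return -sum(abs(feats[k] - TARGET[k]) for k in TARGET)

def mutate(melody):
    m = melody[:]
    i = random.randrange(len(m))
    m[i] = max(48, min(84, m[i] + random.choice([-2, -1, 1, 2])))
    return m

population = [[random.randint(55, 75) for _ in range(16)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
```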

In the present manuscript we propose a lattice-free multiscale model for avascular tumor growth that takes into account the biochemical environment, mitosis, necrosis, cellular signaling and cellular mechanics. This model extends analogous approaches by assuming a function that incorporates the biochemical energy level of the tumor cells and a mechanism that simulates the behavior of cancer stem cells. Numerical simulations of the model are used to investigate the morphology of the tumor at the avascular phase.
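
A heavily simplified sketch of the kind of per-cell update such a model implies; the energy dynamics, thresholds, and stem-cell rule below are hypothetical placeholders, not the model's actual equations.

```python
# Illustrative only: one per-cell update with a biochemical energy level
# driving mitosis and necrosis. All constants are assumed for demonstration.
import random
from dataclasses import dataclass

E_MITOSIS, E_NECROSIS = 0.8, 0.2   # assumed energy thresholds
P_SYMMETRIC = 0.1                  # assumed symmetric stem-cell division rate

@dataclass
class Cell:
    energy: float
    is_stem: bool

def step(cell, local_nutrient):
    """Return the cells replacing `cell` after one time step."""
    # Assumed form: energy relaxes toward the local nutrient level.
    cell.energy = 0.9 * cell.energy + 0.1 * local_nutrient
    if cell.energy < E_NECROSIS:
        return []                                         # necrosis: cell dies
    if cell.energy > E_MITOSIS:
        if cell.is_stem and random.random() < P_SYMMETRIC:
            return [cell, Cell(cell.energy / 2, True)]    # symmetric stem division
        return [cell, Cell(cell.energy / 2, False)]       # ordinary division
    return [cell]
```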

Determining good initial conditions for an algorithm used to train a neural network is considered a parameter estimation problem dealing with uncertainty about the initial weights. Interval analysis approaches model this uncertainty by using intervals and formulating tolerance problems. Solving a tolerance problem amounts to defining lower and upper bounds of the intervals so that the system's functionality is guaranteed within predefined limits.
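
A minimal sketch of the interval idea, not the paper's algorithm: represent each weight as an interval and propagate interval arithmetic through a neuron, so that the output range, and hence the tolerance, can be bounded.

```python
# Bound tanh(w . x) when each weight lies in a given interval [lo, hi].
import math

def imul(a, b):
    """Interval multiplication: a, b are (lo, hi) pairs."""
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def itanh(a):
    # tanh is monotone, so evaluating the endpoints suffices.
    return (math.tanh(a[0]), math.tanh(a[1]))

def neuron_output_bounds(weight_intervals, inputs):
    acc = (0.0, 0.0)
    for w, x in zip(weight_intervals, inputs):
        acc = iadd(acc, imul(w, (x, x)))
    return itanh(acc)

# E.g. initial weights drawn from [-0.5, 0.5]: the output is guaranteed
# to lie within the returned bounds for every weight choice in the box.
bounds = neuron_output_bounds([(-0.5, 0.5), (-0.5, 0.5)], [1.0, -2.0])
```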

We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which error function values are allowed to increase at some epochs.
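
A sketch of one standard nonmonotone acceptance test (in the style of Grippo et al.), where a step is accepted if it beats the worst error of a recent window rather than the previous epoch; the paper's deterministic strategies may differ in the details.

```python
# Nonmonotone training loop: error values may rise at some epochs yet
# still be accepted, as long as they beat the worst of the last `window`.
from collections import deque

def train_nonmonotone(w, error, grad, lr=0.1, window=5, epochs=100):
    history = deque([error(w)], maxlen=window)   # errors of recent accepted epochs
    for _ in range(epochs):
        w_new = [wi - lr * gi for wi, gi in zip(w, grad(w))]
        e_new = error(w_new)
        if e_new <= max(history):     # nonmonotone acceptance criterion
            w = w_new
            history.append(e_new)
        else:
            lr *= 0.5                 # shrink the step and retry next iteration
    return w

# Usage with a toy quadratic error E(w) = w0^2 + w1^2:
w = train_nonmonotone([3.0, -2.0],
                      error=lambda w: w[0]**2 + w[1]**2,
                      grad=lambda w: [2*w[0], 2*w[1]])
```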

A novel generalized theoretical result is presented that underpins the development of globally convergent first-order batch training algorithms which employ local learning rates. This result allows us to equip algorithms of this class with a strategy for adapting the overall direction of search to a descent one. In this way, a decrease of the batch-error measure at each training iteration is ensured, and convergence of the sequence of weight iterates to a local minimizer of the batch error function is obtained from remote initial weights.
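
A hedged sketch of the descent-direction safeguard this describes: form the step from the individual learning rates, test that it is a descent direction, and adjust otherwise. The fallback rule below is an assumption for illustration, not the paper's exact adaptation strategy.

```python
# Build a candidate direction from per-weight learning rates, then verify
# the descent condition d . g < 0 before taking the step.
def safeguarded_direction(grad, local_rates):
    d = [-lr * g for lr, g in zip(local_rates, grad)]
    if sum(di * gi for di, gi in zip(d, grad)) < 0.0:
        return d                      # already a descent direction
    # Assumed fallback: steepest descent scaled by the smallest local rate,
    # which satisfies d . g = -min_lr * ||g||^2 < 0 whenever g != 0.
    min_lr = min(local_rates)
    return [-min_lr * g for g in grad]

d = safeguarded_direction(grad=[0.5, -1.0], local_rates=[0.1, 0.2])
```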

A genetic map based on microsatellite polymorphisms and visible mutations of the Mediterranean fruit fly (medfly), Ceratitis capitata, is presented. Genotyping was performed on single flies from several backcross families. The map is composed of 67 microsatellites and 16 visible markers distributed over four linkage groups.

Objective: The paper aims at improving the prediction of superficial bladder recurrence. To this end, feedforward neural networks (FNNs) and a feature selection method based on unsupervised clustering were employed.

Material And Methods: A retrospective prognostic study of 127 patients diagnosed with superficial urinary bladder cancer was performed.
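
One plausible reading of clustering-based feature selection, sketched below for illustration and not necessarily the paper's exact method: cluster the features themselves and keep a single representative per cluster as input to the FNN. The data and cluster count are made up.

```python
# Unsupervised feature selection: k-means over the features (columns),
# keeping the feature closest to each centroid.
import numpy as np
from sklearn.cluster import KMeans

def select_features(X, n_keep):
    """X: (patients, features). Returns indices of n_keep representative features."""
    F = X.T                                   # one row per feature
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(F)
    keep = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(F[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(dists)])  # feature closest to its centroid
    return sorted(keep)

X = np.random.rand(127, 20)                   # toy stand-in for the patient data
selected = select_features(X, n_keep=5)
```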

Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine.
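
A sketch of the data-partitioning idea, using Python's multiprocessing as a stand-in for PVM and a toy linear model in place of the neural network: each worker computes the error gradient on its shard of the training set, and the gradients are summed before every weight update.

```python
# Data-parallel gradient descent: partition the training set across workers,
# sum the per-shard gradients, update the shared weights.
import numpy as np
from multiprocessing import Pool

def shard_gradient(args):
    w, X, y = args
    pred = X @ w                         # toy linear model in place of the MLP
    return X.T @ (pred - y)              # squared-error gradient on this shard

def distributed_step(w, shards, lr, pool):
    grads = pool.map(shard_gradient, [(w, X, y) for X, y in shards])
    return w - lr * sum(grads)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)
    shards = [(X[i::4], y[i::4]) for i in range(4)]   # partition across 4 workers
    w = np.zeros(5)
    with Pool(4) as pool:
        for _ in range(50):
            w = distributed_step(w, shards, lr=1e-3, pool=pool)
```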

The development of microarray technologies gives scientists the ability to examine, discover and monitor the mRNA transcript levels of thousands of genes in a single experiment. Nonetheless, the tremendous amount of data that can be obtained from microarray studies presents a challenge for data analysis. The most commonly used computational approach for analyzing microarray data is cluster analysis, since the number of genes is usually very high compared to the number of samples.
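
A minimal sketch of the usual workflow: cluster the genes (rows) by their expression profiles across samples. The data, linkage choices, and cluster count below are illustrative assumptions.

```python
# Hierarchical clustering of gene expression profiles (toy data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

expr = np.random.rand(500, 12)               # 500 genes x 12 samples (made up)
# Standardize each gene so clustering reflects profile shape, not magnitude.
expr = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

Z = linkage(expr, method="average", metric="correlation")
labels = fcluster(Z, t=10, criterion="maxclust")   # cut the tree into 10 clusters
```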

This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations.
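
A sketch of the two ingredients combined: a local Lipschitz estimate computed from already-available iterates and gradients (so no extra function or gradient evaluations) seeds the learning rate, and an Armijo-style backtracking test safeguards the step. The constants are illustrative, not the paper's.

```python
# One training step: Lipschitz-seeded learning rate + Armijo backtracking.
import numpy as np

def adaptive_armijo_step(w, w_prev, g, g_prev, error, sigma=1e-4):
    # Local Lipschitz estimate from successive gradients and weights.
    L = np.linalg.norm(g - g_prev) / max(np.linalg.norm(w - w_prev), 1e-12)
    lr = 1.0 / (2.0 * L) if L > 0 else 1.0
    e0, g_norm2 = error(w), g @ g
    while error(w - lr * g) > e0 - sigma * lr * g_norm2:   # Armijo condition
        lr *= 0.5                                          # backtrack
    return w - lr * g

# Toy quadratic: E(w) = ||w||^2, gradient 2w.
w_prev, w = np.array([1.0, 1.0]), np.array([0.9, 0.8])
w_next = adaptive_armijo_step(w, w_prev, 2 * w, 2 * w_prev, error=lambda v: v @ v)
```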

The issue of variable stepsize in the backpropagation training algorithm has been widely investigated, and several techniques employing heuristic factors have been suggested to improve training time and reduce convergence to local minima. In this contribution, backpropagation training is based on a modified steepest descent method which allows variable stepsize. It is computationally efficient and possesses interesting convergence properties, utilizing estimates of the Lipschitz constant that are obtained without any additional computational cost.
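
A sketch of the stepsize rule on its own, in contrast to the Armijo variant sketched above: steepest descent with the step set directly from the same kind of Lipschitz estimate and no line search at all. The bootstrap step and safeguards are assumptions for illustration.

```python
# Steepest descent with stepsize 1/(2L), where L is estimated from
# successive gradients at no extra evaluation cost.
import numpy as np

def lipschitz_stepsize_descent(w0, grad, iters=100):
    w, g = w0, grad(w0)
    w_new = w - 1e-3 * g                       # small bootstrap step (assumed)
    for _ in range(iters):
        g_new = grad(w_new)
        L = np.linalg.norm(g_new - g) / max(np.linalg.norm(w_new - w), 1e-12)
        step = 1.0 / (2.0 * L) if L > 0 else 1e-3
        w, g = w_new, g_new
        w_new = w - step * g
    return w_new

w_min = lipschitz_stepsize_descent(np.array([3.0, -2.0]), grad=lambda w: 2 * w)
```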
