Typically, deep learning models for image segmentation tasks are trained on large datasets of images annotated at the pixel level, which is expensive and highly time-consuming. One way to reduce the number of annotated images required for training is to adopt a semi-supervised approach. In this regard, generative deep learning models, specifically Generative Adversarial Networks (GANs), have been adapted to the semi-supervised training of segmentation tasks.
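As a rough illustration of the idea, the sketch below combines a segmentation network with a discriminator that judges whether a segmentation map is a ground-truth mask or a prediction, so that unlabeled images contribute through the adversarial term only. The tiny networks, loss weights, and training-step structure are placeholder assumptions for the example, not the method described above.

```python
# Minimal sketch (not the authors' implementation) of GAN-based
# semi-supervised segmentation training.
import torch
import torch.nn as nn

N_CLASSES = 2  # assumed number of segmentation classes

segmenter = nn.Sequential(            # stand-in for a real U-Net / FCN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_CLASSES, 1),
)
discriminator = nn.Sequential(        # judges segmentation maps (one-hot or softmax)
    nn.Conv2d(N_CLASSES, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

opt_s = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()

def train_step(x_labeled, y_labeled, x_unlabeled, adv_weight=0.01):
    # discriminator: real ground-truth masks vs. predicted masks
    with torch.no_grad():
        fake = torch.softmax(segmenter(x_unlabeled), dim=1)
    real = nn.functional.one_hot(y_labeled, N_CLASSES).permute(0, 3, 1, 2).float()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # segmenter: supervised loss on labeled data + adversarial loss on unlabeled data
    logits = segmenter(x_labeled)
    pred_u = torch.softmax(segmenter(x_unlabeled), dim=1)
    s_loss = ce(logits, y_labeled) + \
             adv_weight * bce(discriminator(pred_u), torch.ones(pred_u.size(0), 1))
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()
```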
The application of deep learning to image and video processing has become increasingly popular. Employing well-known pre-trained neural networks to detect and classify objects in images is beneficial in a wide range of application fields. However, various impediments can degrade the performance achieved by those networks.
Anomaly detection in sequences is a complex problem in security and surveillance. With the exponential growth of surveillance cameras on urban roads, it is essential to analyze their data automatically and identify anomalous events efficiently. This paper presents a methodology to detect anomalous events in urban sequences using pre-trained convolutional neural networks (CNN) and super-resolution (SR) models.
In this work we implement a COVID-19 infection detection system based on chest X-ray images, with uncertainty estimation. Uncertainty estimation is vital for the safe use of computer-aided diagnosis tools in medical applications. Model predictions with high uncertainty should be carefully analyzed by a trained radiologist.
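The snippet above does not say which uncertainty estimation technique is used; the sketch below illustrates one common option, Monte Carlo dropout with predictive entropy as the uncertainty score, using a placeholder classifier. It is an assumption for illustration, not the paper's method.

```python
# Hypothetical illustration: uncertainty estimation via Monte Carlo dropout.
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in for a chest X-ray classifier
    nn.Flatten(),
    nn.Linear(224 * 224, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 2),                 # COVID-19 positive / negative
)

def predict_with_uncertainty(x, n_samples=30):
    """Keep dropout active at inference time and sample several predictions."""
    model.train()                      # dropout stays on
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)           # averaged class probabilities
    # predictive entropy as a simple per-image uncertainty score
    entropy = -(mean * torch.log(mean + 1e-12)).sum(dim=1)
    return mean, entropy               # high entropy -> refer to a radiologist

x = torch.rand(4, 1, 224, 224)         # dummy batch of grayscale X-rays
mean_probs, uncertainty = predict_with_uncertainty(x)
```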
Annu Int Conf IEEE Eng Med Biol Soc
July 2019
Image segmentation is a common goal in many medical applications, as its use can improve diagnostic capability and outcome prediction. In order to assess the wound healing rate in diabetic foot ulcers, several parameters of the wound area are measured. However, the heterogeneity of diabetic skin lesions and the noise present in images captured by digital cameras make wound extraction a difficult task.
IEEE Trans Neural Netw Learn Syst
January 2020
Self-organizing maps (SOMs) aim to learn a representation of the input distribution that faithfully describes the topological relations among the clusters of the distribution. For some data sets and applications, it is known beforehand that some regions of the input space cannot contain any samples. These are known as forbidden regions.
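For context, the sketch below shows a standard online SOM update modified so that prototypes never land inside a known forbidden region. The simple "reject the move" rule and the example region are assumptions for illustration; the abstract does not describe the authors' actual mechanism.

```python
# Illustrative sketch: online SOM training that keeps prototypes out of a
# forbidden region (the rejection rule is an assumption, not the paper's rule).
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 2
weights = rng.uniform(0, 1, size=(grid_h, grid_w, dim))   # prototype vectors
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)    # lattice positions

def in_forbidden_region(w):
    """Example forbidden region: a disc of radius 0.2 centred at (0.5, 0.5)."""
    return np.linalg.norm(w - 0.5) < 0.2

def som_update(x, lr=0.1, sigma=1.5):
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best matching unit
    lattice_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-lattice_d2 / (2 * sigma ** 2))              # neighbourhood function
    for i in range(grid_h):
        for j in range(grid_w):
            candidate = weights[i, j] + lr * h[i, j] * (x - weights[i, j])
            if not in_forbidden_region(candidate):          # keep prototypes out
                weights[i, j] = candidate

# train on samples drawn outside the forbidden region
for _ in range(2000):
    x = rng.uniform(0, 1, size=2)
    if not in_forbidden_region(x):
        som_update(x)
```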
One of the most important challenges in computer vision applications is background modeling, especially when the background is dynamic and the input distribution might not be stationary, i.e. the distribution of the input data may change over time.
IEEE Trans Neural Netw Learn Syst
September 2017
The growing neural gas (GNG) self-organizing neural network stands as one of the most successful examples of unsupervised learning of a graph of processing units. Despite its success, little attention has been devoted to its extension to a hierarchical model, unlike other models such as the self-organizing map, which has many hierarchical versions. Here, a hierarchical GNG is presented, which is designed to learn a tree of graphs.
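The hierarchical construction is not detailed in the snippet above; as background, the sketch below shows a compact version of the underlying (flat) GNG adaptation step that such a model builds on, with node insertion and removal omitted for brevity. Parameter values are illustrative.

```python
# Compact sketch of the core growing neural gas (GNG) adaptation step
# (node insertion/removal omitted; not the hierarchical model itself).
import numpy as np

rng = np.random.default_rng(1)
units = [rng.normal(size=2) for _ in range(5)]     # prototype vectors
errors = [0.0] * 5                                 # used by the omitted insertion step
edges = {}                                         # (i, j) with i < j -> edge age

def adapt(x, eps_b=0.05, eps_n=0.005, max_age=50):
    d = [np.linalg.norm(x - w) for w in units]
    s1, s2 = np.argsort(d)[:2]                     # winner and second winner
    errors[s1] += d[s1] ** 2                       # accumulate squared error
    units[s1] += eps_b * (x - units[s1])           # move winner towards x
    key = (min(s1, s2), max(s1, s2))
    edges[key] = 0                                 # create/refresh winner-runner edge
    for (i, j) in list(edges):
        if s1 in (i, j):
            other = j if i == s1 else i
            if (i, j) != key:
                edges[(i, j)] += 1                 # age edges incident to the winner
            units[other] += eps_n * (x - units[other])  # move topological neighbours
            if edges[(i, j)] > max_age:
                del edges[(i, j)]                  # drop edges that are too old

for _ in range(1000):
    adapt(rng.normal(size=2))
```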
In this work, a novel self-organizing model called the growing neural forest (GNF) is presented. It is based on the growing neural gas (GNG), which learns a general graph with no special provisions for datasets with separated clusters. In contrast, the proposed GNF learns a set of trees so that each tree represents a connected cluster of data.
The original Self-Organizing Feature Map (SOFM) has been extended in many ways to suit different goals and application domains. However, the topologies of the map lattice that can be found in the literature are nearly always square or, more rarely, hexagonal. In this paper we study alternative grid topologies, which are derived from the geometrical theory of tessellations.
IEEE Trans Neural Netw Learn Syst
August 2013
The quality of self-organizing maps is always a key issue for practitioners. Smooth maps convey information about input data sets in a clear manner. Here a method is presented to modify the learning algorithm of self-organizing maps so as to reduce the number of topology errors; hence the obtained map has better quality at the expense of an increased quantization error.
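The two quality measures mentioned above have standard definitions, sketched below for a rectangular map stored as an (H, W, dim) array of prototypes; the 4-neighbour adjacency used for the topographic error is an assumption of the example.

```python
# Sketch of quantization error and topographic error for a trained SOM.
import numpy as np

def quantization_error(weights, data):
    """Mean distance from each sample to its best matching unit (BMU)."""
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def topographic_error(weights, data):
    """Fraction of samples whose first and second BMUs are not lattice neighbours."""
    h, w, _ = weights.shape
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
    best2 = np.argsort(d, axis=1)[:, :2]
    r, c = np.divmod(best2, w)                     # lattice coordinates of both BMUs
    adjacent = (np.abs(r[:, 0] - r[:, 1]) + np.abs(c[:, 0] - c[:, 1])) == 1
    return 1.0 - adjacent.mean()

rng = np.random.default_rng(0)
weights = rng.random((8, 8, 2))                    # stand-in for a trained map
data = rng.random((500, 2))
print(quantization_error(weights, data), topographic_error(weights, data))
```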
Growing hierarchical self-organizing models are characterized by the flexibility of their structure, which can easily accommodate complex input datasets. However, most proposals use the Euclidean distance as the only error measure. Here we propose a way to introduce Bregman divergences in these models, based on stochastic approximation principles, so that more general distortion measures can be employed.
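For reference, a few common Bregman divergences that could stand in for the squared Euclidean distance are sketched below; the paper's exact choices and learning equations are not reproduced here.

```python
# Common Bregman divergences (each generated by a convex function phi).
import numpy as np

def squared_euclidean(x, y):
    # generated by phi(x) = ||x||^2
    return np.sum((x - y) ** 2)

def generalized_kl(x, y, eps=1e-12):
    # generated by phi(x) = sum(x * log x); suitable for nonnegative data
    x, y = x + eps, y + eps
    return np.sum(x * np.log(x / y) - x + y)

def itakura_saito(x, y, eps=1e-12):
    # generated by phi(x) = -sum(log x); suitable for positive data (e.g. spectra)
    x, y = x + eps, y + eps
    return np.sum(x / y - np.log(x / y) - 1)

x, y = np.array([0.2, 0.5, 0.3]), np.array([0.3, 0.4, 0.3])
print(squared_euclidean(x, y), generalized_kl(x, y), itakura_saito(x, y))
```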
IEEE Trans Pattern Anal Mach Intell
April 2014
The estimation of multivariate probability density functions has traditionally been carried out by mixtures of parametric densities or by kernel density estimators. Here we present a new nonparametric approach to this problem which is based on the integration of several multivariate histograms, computed over affine transformations of the training data. Our proposal belongs to the class of averaged histogram density estimators.
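To make the idea concrete, the sketch below averages several multivariate histogram density estimates, each computed after a random rotation of the data (one kind of affine transformation). The bin counts and the restriction to pure rotations are assumptions for the example, not the estimator defined in the paper.

```python
# Illustrative averaged-histogram density estimator over random rotations.
import numpy as np

def averaged_histogram_pdf(data, queries, n_histograms=10, bins=12, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    estimates = np.zeros(len(queries))
    for _ in range(n_histograms):
        # random orthogonal matrix via QR decomposition of a Gaussian matrix
        q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
        rotated = data @ q
        hist, edges = np.histogramdd(rotated, bins=bins, density=True)
        rq = queries @ q
        # locate the bin of each query point and read off its density
        idx = [np.clip(np.searchsorted(e, rq[:, d], side="right") - 1,
                       0, bins - 1) for d, e in enumerate(edges)]
        estimates += hist[tuple(idx)]
    return estimates / n_histograms      # averaged density estimate

rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 2))
queries = np.array([[0.0, 0.0], [2.0, 2.0]])
print(averaged_histogram_pdf(data, queries))
```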
Background modeling and foreground detection are key parts of any computer vision system. These problems have been addressed in the literature with several probabilistic approaches based on mixture models. Here we propose a new kind of probabilistic background model based on probabilistic self-organizing maps.
Since the introduction of the growing hierarchical self-organizing map, much work has been done on self-organizing neural models with a dynamic structure. These models allow the layers of the model to be adjusted to the features of the input dataset. Here we propose a new self-organizing model which is based on a probabilistic mixture of multivariate Gaussian components.
Kernel regression is a non-parametric estimation technique which has been successfully applied to image denoising and enhancement in recent years. Magnetic resonance 3D image denoising has two features that distinguish it from other typical image denoising applications, namely the three-dimensional structure of the images and the nature of the noise, which is Rician rather than Gaussian or impulsive. Here we propose a principled way to adapt the general kernel regression framework to this particular problem.
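As background only, the sketch below applies zeroth-order kernel regression (a Nadaraya-Watson style weighted average over a local 3D window, with spatial and intensity weights) to a small synthetic volume; the Rician-specific adaptations discussed in the paper are not included, and Gaussian noise is used purely for simplicity.

```python
# Minimal 3D kernel regression denoiser (illustrative, not the paper's method).
import numpy as np

def kernel_regression_denoise_3d(vol, radius=2, h_spatial=1.5, h_intensity=0.1):
    pad = np.pad(vol, radius, mode="reflect")
    out = np.empty_like(vol)
    # precompute the spatial (Gaussian) part of the kernel
    ax = np.arange(-radius, radius + 1)
    zz, yy, xx = np.meshgrid(ax, ax, ax, indexing="ij")
    w_spatial = np.exp(-(zz**2 + yy**2 + xx**2) / (2 * h_spatial**2))
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                patch = pad[z:z + 2*radius + 1,
                            y:y + 2*radius + 1,
                            x:x + 2*radius + 1]
                diff = patch - vol[z, y, x]
                w = w_spatial * np.exp(-diff**2 / (2 * h_intensity**2))
                out[z, y, x] = np.sum(w * patch) / np.sum(w)
    return out

rng = np.random.default_rng(0)
clean = np.zeros((16, 16, 16)); clean[4:12, 4:12, 4:12] = 1.0
noisy = clean + 0.1 * rng.normal(size=clean.shape)   # Gaussian noise for simplicity
print(np.abs(kernel_regression_denoise_3d(noisy) - clean).mean())
```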
The original self-organizing feature map did not define any probability distribution on the input space. However, the advantages of introducing probabilistic methodologies into self-organizing map models were soon evident. This has led to a wide range of proposals which reflect the current emergence of probabilistic approaches to computational intelligence.
We present a self-organizing map model to study qualitative data (also called categorical data). It is based on a probabilistic framework which does not assume any prespecified distribution of the input data. Stochastic approximation theory is used to develop a learning rule that builds an approximation of a discrete distribution on each unit.
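The sketch below illustrates the general flavour of such a model: each map unit holds a discrete probability vector over the categories of one qualitative variable, nudged by a Robbins-Monro style stochastic approximation step towards the indicator vector of each observed sample. The winner rule, neighbourhood, and learning-rate schedule are assumptions for the example, not the paper's learning rule.

```python
# Illustrative categorical SOM: each unit stores a discrete distribution.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, n_categories = 6, 6, 4
probs = np.full((grid_h, grid_w, n_categories), 1.0 / n_categories)
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def update(category, step, sigma=1.2):
    onehot = np.zeros(n_categories); onehot[category] = 1.0
    bmu_flat = np.argmax(probs[..., category])          # unit most likely to emit it
    bmu = np.unravel_index(bmu_flat, (grid_h, grid_w))
    lattice_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-lattice_d2 / (2 * sigma ** 2))
    alpha = 1.0 / (1.0 + 0.01 * step)                   # decreasing learning rate
    # convex combination keeps every unit's vector a valid distribution
    probs[...] += alpha * h[..., None] * (onehot - probs)

samples = rng.choice(n_categories, size=3000, p=[0.5, 0.3, 0.1, 0.1])
for t, c in enumerate(samples):
    update(c, t)
```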
Int J Neural Syst
October 2009
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to replace the sample mean and the sample covariance matrix with more robust location and spread estimators.
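One classical pair of robust estimators is the median (location) and the scaled median absolute deviation (spread), illustrated below on contaminated data; the paper may use different estimators, so this is only an example of the general strategy.

```python
# Quick illustration: mean/std vs. median/MAD on data with gross outliers.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)
data[:20] = 50.0                                   # inject a few gross outliers

mean, std = data.mean(), data.std()
median = np.median(data)
mad = 1.4826 * np.median(np.abs(data - median))    # consistent with sigma for Gaussians

print(f"mean={mean:.2f}  std={std:.2f}")           # dragged away by the outliers
print(f"median={median:.2f}  mad={mad:.2f}")       # close to the true 0 and 1
```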
In this paper, we present a probabilistic neural model which extends Kohonen's self-organizing map (SOM) by performing a probabilistic principal component analysis (PPCA) at each neuron. Several SOMs have been proposed in the literature to capture the local principal subspaces, but our approach offers a probabilistic model with low computational complexity with respect to the dimensionality of the input space. This makes it possible to process very high-dimensional data and obtain reliable estimates of the probability densities, based on the PPCA framework.
We present a new neural model which extends classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.
Kohonen's original Self-Organizing Map model has been extended by several authors to incorporate an underlying probability distribution. These proposals assume mixtures of Gaussian probability densities. Here we present a new self-organizing model which is based on a mixture of multivariate Student-t components.
We present a new neural model that extends classical competitive learning by performing a principal components analysis (PCA) at each neuron. This model represents an improvement over known local PCA methods, because the entire data set need not be presented to the network at each computation step. This allows fast execution while retaining the dimensionality-reduction properties of PCA.
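The sketch below conveys the general idea of such online local PCA: each neuron keeps a centroid and a principal direction, both updated one sample at a time (here with Oja's rule) instead of recomputing a batch PCA at every step. These are not the paper's equations, just a compact stand-in for the technique.

```python
# Illustrative competitive learning with an online first principal component
# (Oja's rule) per neuron.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 4, 2
centroids = rng.normal(size=(n_neurons, dim))
directions = rng.normal(size=(n_neurons, dim))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def online_step(x, lr_c=0.05, lr_w=0.05):
    winner = np.argmin(np.linalg.norm(centroids - x, axis=1))
    centroids[winner] += lr_c * (x - centroids[winner])    # move the winning centroid
    d = x - centroids[winner]                              # centred sample
    w = directions[winner]
    y = w @ d
    w += lr_w * y * (d - y * w)                            # Oja's rule: first PC
    directions[winner] = w / np.linalg.norm(w)             # renormalize for stability

# two elongated clusters; each neuron should pick up a local principal direction
for _ in range(5000):
    c = rng.integers(2)
    x = np.array([3.0 * c, 0.0]) + np.array([1.0, 0.2]) * rng.normal(size=2)
    online_step(x)
```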
We propose a new self-organizing neural model that performs principal components analysis. It is also related to the adaptive subspace self-organizing map (ASSOM) network, but its training equations are simpler. Experimental results are reported, which show that the new model has better performance than the ASSOM network.