Publications by authors named "Hakan Cevikalp"

The classification loss functions used in deep neural network classifiers can be split into two categories according to whether they maximize the margin in Euclidean space or in angular space. Methods that maximize the margin in Euclidean space classify test samples using Euclidean distances between feature vectors, whereas methods that maximize the margin in angular space use cosine similarity at test time. This article introduces a novel classification loss that maximizes the margin in both the Euclidean and angular spaces at the same time.
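The distinction between the two test-time rules can be sketched as follows. This is an illustrative sketch only, not the paper's proposed loss: Euclidean-margin methods assign a sample to the class with the nearest center, while angular-margin methods assign it to the class whose center has the largest cosine similarity.

```python
import numpy as np

def classify_euclidean(x, centers):
    # centers: (num_classes, dim) array; pick the nearest class center
    return int(np.argmin(np.linalg.norm(centers - x, axis=1)))

def classify_angular(x, centers):
    # pick the class center with the largest cosine similarity to x
    sims = centers @ x / (np.linalg.norm(centers, axis=1) * np.linalg.norm(x))
    return int(np.argmax(sims))

centers = np.array([[1.0, 0.0], [0.0, 1.0]])  # toy class centers
```

The two rules can disagree when feature norms vary across classes, which is why a loss enforcing both margins at once is of interest.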


This article introduces two methods that find compact deep feature models for approximating images in set-based face recognition problems. The proposed methods treat each image set as a nonlinear face manifold composed of linear components. To find the linear components of the face manifold, we first split the image sets into subsets containing face images with similar appearances.


This paper introduces a family of quasi-linear discriminants that outperform current large-margin methods in sliding window visual object detection and open set recognition tasks. In these applications, the classification problems are both numerically imbalanced and geometrically asymmetric: positive (object-class) training and test windows are much rarer than negative (non-class) ones, and the positive samples typically form compact, visually coherent groups, while the negatives are far more diverse, including anything at all that is not a well-centered sample from the target class. For such tasks, there is a need for discriminants whose decision regions tightly circumscribe the positive class while still taking account of negatives in zones where the two classes overlap.
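The simplest compact acceptance region of the kind this passage motivates can be sketched as a ball fit around the positive training samples; this is not the paper's quasi-linear discriminant family, only an illustration of a decision region that tightly encloses the positive class.

```python
import numpy as np

def fit_ball(positives):
    # Smallest centered ball heuristic: mean center, max-distance radius.
    center = positives.mean(axis=0)
    radius = np.linalg.norm(positives - center, axis=1).max()
    return center, radius

def is_positive(x, center, radius):
    # Accept only points inside the ball around the positive class.
    return bool(np.linalg.norm(x - center) <= radius)
```

Unlike a half-space from a linear classifier, this region rejects the unbounded diversity of negatives by construction.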

Best Fitting Hyperplanes for Classification.

IEEE Trans Pattern Anal Mach Intell

June 2017

In this paper, we propose novel methods that are better suited than classical large-margin classifiers to open set recognition and object detection tasks. The proposed methods use the best-fitting hyperplanes approach: the main idea is to find hyperplanes such that each one is close to the samples of one class and as far as possible from the samples of the other classes. To this end, we propose two different classifiers: the first solves a convex quadratic optimization problem, but negative samples can lie on one side of the best-fitting hyperplane.
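The decision rule implied by this approach can be sketched as follows; the fitting itself (the convex program mentioned above) is not reproduced here, and the hyperplane parameters are assumed given.

```python
import numpy as np

def hyperplane_distance(x, w, b):
    # Point-to-hyperplane distance |w.x + b| / ||w||.
    return abs(w @ x + b) / np.linalg.norm(w)

def classify(x, hyperplanes):
    # hyperplanes: one (w, b) pair per class; assign x to the class
    # whose best-fitting hyperplane lies closest to it.
    return int(np.argmin([hyperplane_distance(x, w, b) for w, b in hyperplanes]))

planes = [(np.array([0.0, 1.0]), 0.0),   # class 0 lies near the x-axis
          (np.array([1.0, 0.0]), 0.0)]   # class 1 lies near the y-axis
```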


It is widely accepted that classification accuracy can be improved by combining the outputs of multiple classifiers. However, how to combine multiple classifiers with varied (and potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures, many of which are heuristic in nature, has been developed toward this goal.
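One of the simplest heuristic combination rules alluded to above is majority voting over the individual classifiers' decisions (with ties broken by whichever label `Counter` encountered first):

```python
from collections import Counter

def majority_vote(decisions):
    # Return the most frequent decision among the base classifiers.
    return Counter(decisions).most_common(1)[0][0]
```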


The common vector (CV) method is a linear subspace classifier that discriminates between classes of data sets such as those arising in image and word recognition. The method represents each class by a subspace during classification, and each subspace is modeled so that the features common to all samples in the corresponding class are extracted.
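The extraction of the common component can be sketched as follows, under assumed details: within one class, difference vectors against a reference sample span the "difference subspace", and removing a sample's component in that subspace leaves a residual shared by every sample of the class.

```python
import numpy as np

def common_vector(samples):
    X = np.asarray(samples, dtype=float)
    diffs = X[1:] - X[0]             # difference vectors span the difference subspace
    Q, _ = np.linalg.qr(diffs.T)     # orthonormal basis for that subspace
    return X[0] - Q @ (Q.T @ X[0])   # residual = common component of the class
```

A key property, visible in this sketch, is that the result does not depend on which sample is taken as the reference.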


In some pattern recognition tasks, the dimension of the sample space is larger than the number of samples in the training set. This is known as the "small sample size" problem. Linear discriminant analysis (LDA) techniques cannot be applied directly in this case.
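A small numerical illustration of why this breaks LDA: with fewer samples than dimensions, the scatter matrix is rank-deficient (singular), so the matrix inverse required by the classical formulation does not exist.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 10))       # 3 samples in a 10-dimensional space
Xc = X - X.mean(axis=0)                # center the samples
S = Xc.T @ Xc                          # 10 x 10 scatter matrix
rank = int(np.linalg.matrix_rank(S))   # at most n_samples - 1 = 2, far below 10
```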


In face recognition tasks, the dimension of the sample space is typically larger than the number of samples in the training set. As a consequence, the within-class scatter matrix is singular and the Linear Discriminant Analysis (LDA) method cannot be applied directly. This problem is known as the "small sample size" problem.


Radiotherapy treatment planning that integrates positron emission tomography (PET) and computed tomography (CT) is rapidly gaining acceptance in the clinical setting. Although hybrid systems are available, the planning CT is often acquired on a dedicated system separate from the PET scanner. A limiting factor in using the PET data then becomes the accuracy of the CT/PET registration.
