Unmanned aerial vehicles (UAVs) have become a popular means of acquiring video in contexts such as remote data acquisition or surveillance. Unfortunately, their viewpoint is often unstable, which tends to degrade the automatic processing of their video stream. To counteract the effects of an inconsistent viewpoint, two video processing strategies are classically adopted, namely registration and stabilization, which tend to be affected by distinct issues: jitter and drifting.
The computed tomography imaging spectrometer (CTIS) is a snapshot hyperspectral imaging system. Its output is a 2D image of multiplexed spatiospectral projections of the hyperspectral cube of the scene. Traditionally, the 3D cube is reconstructed from this image before further analysis.
Background: At present, the assessment of autonomy in daily living activities, one of the key symptoms in Alzheimer's disease (AD), involves clinical rating scales.
Methods: In total, 109 participants were included. Of these, 11 took part in a pre-test in Nice, France, and 98 (27 AD, 38 mild cognitive impairment (MCI), and 33 healthy controls (HC)) in Thessaloniki, Greece, carried out a standardized scenario consisting of several instrumental activities of daily living (IADLs), such as making a phone call or preparing a pillbox, while being recorded.
The land cover reconstruction from monochromatic historical aerial images is a challenging task that has recently attracted increasing interest from the scientific community with the proliferation of large-scale epidemiological studies involving retrospective analysis of spatial patterns. However, the efforts made by the computer vision community in remote-sensing applications are mostly focused on prospective approaches through the analysis of high-resolution multi-spectral data acquired by advanced space programs. Hence, four contributions are proposed in this paper.
Visual activity recognition plays a fundamental role in several research fields as a way to extract semantic meaning from images and videos. Prior work has mostly focused on classification tasks, where a label is given for a video clip. However, real-life scenarios require a method to browse a continuous video flow, automatically identify relevant temporal segments, and classify them according to target activities.
IEEE Trans Pattern Anal Mach Intell
August 2016
Combining multimodal concept streams from heterogeneous sensors is a problem only superficially explored in activity recognition. Most studies explore simple sensors in nearly perfect conditions, where temporal synchronization is guaranteed. Sophisticated fusion schemes adopt problem-specific graphical representations of events that are generally deeply linked with their training data and focused on a single sensor.
Currently, the assessment of autonomy and functional ability involves clinical rating scales. However, scales are often limited in their ability to provide objective and sensitive information. By contrast, information and communication technologies may overcome these limitations by more fully capturing the functional as well as cognitive disturbances associated with Alzheimer disease (AD).
Over the last few years, the use of new technologies for the support of elderly people, and in particular dementia patients, has received increasing interest. We investigated the use of a video monitoring system for automatic event recognition for the assessment of instrumental activities of daily living (IADL) in dementia patients. Participants (19 healthy subjects (HC) and 19 mild cognitive impairment (MCI) patients) had to carry out a standardized scenario consisting of several IADLs, such as making a phone call, while they were recorded by 2D video cameras.
We present software (ETHOWATCHER®) developed to support ethography, object tracking, and extraction of kinematic variables from digital video files of laboratory animals. The tracking module allows controlled segmentation of the target from the background, extracting image attributes used to calculate the distance traveled, orientation, length, area, and a path graph of the experimental animal. The ethography module allows recording of catalog-based behaviors from the environment or from video files, continuously or frame-by-frame.
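The kinematic variables a tracking module of this kind reports can be derived from the sequence of tracked centroids. As a minimal illustration (not the tool's actual implementation, and with a hypothetical `path_metrics` helper), distance traveled and per-step orientation might be computed as:

```python
import math

def path_metrics(centroids):
    """Compute simple kinematic measures from a sequence of (x, y)
    tracked centroids, as a tracking module might report them."""
    distance = 0.0
    headings = []
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        distance += step
        if step > 0:
            # Orientation of each movement step, in degrees.
            headings.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    return distance, headings

# A short L-shaped path: 3 units right, then 4 units up.
dist, heads = path_metrics([(0, 0), (3, 0), (3, 4)])
print(dist)   # 7.0
print(heads)  # [0.0, 90.0]
```

A path graph, as mentioned in the abstract, would simply plot the same centroid sequence over the arena image.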
Behavior studies on the neurobiological effects of environmental, pharmacological, and physiological manipulations in lab animals try to correlate these procedures with specific changes in animal behavior. Parameters such as duration, latency, and frequency are assessed from the visually recorded sequences of behaviors to distinguish changes due to the manipulation. Since the behavioral recording procedure is intrinsically interpretative, high variability in experimental results is expected and usual, due to observer-related influences such as experience, knowledge, stress, fatigue, and personal biases.
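The parameters named above (duration, latency, frequency) are straightforward to compute once behaviors are recorded as timestamped bouts. A minimal sketch, assuming a hypothetical record of (start, end, behavior) tuples scored from video:

```python
def behavior_stats(events, behavior):
    """Summarize one behavior from a list of (start_s, end_s, label)
    bouts scored from a video session.

    Returns (frequency, total duration in s, latency to first bout in s,
    or None if the behavior never occurs).
    """
    bouts = [(s, e) for s, e, b in events if b == behavior]
    frequency = len(bouts)                      # number of bouts
    duration = sum(e - s for s, e in bouts)     # total time in behavior
    latency = min((s for s, _ in bouts), default=None)  # time to first bout
    return frequency, duration, latency

# Hypothetical ethogram fragment, times in seconds from session start.
record = [(0.0, 2.5, "rearing"), (2.5, 6.0, "grooming"), (8.0, 9.5, "rearing")]
print(behavior_stats(record, "rearing"))  # (2, 4.0, 0.0)
```

Automating this bookkeeping is precisely what removes the observer-related variability the abstract describes: the arithmetic is deterministic once the bout boundaries are fixed.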