Background: Point-of-care diagnostic devices, such as lateral-flow assays, are becoming widely used by the public. However, efforts to ensure correct assay operation and result interpretation rely either on hardware that cannot be easily scaled or on image-processing approaches that require large training datasets, which in turn demand large numbers of tests and expert labeling with validated specimens for every new test kit format.
Methods: We developed a software architecture called AutoAdapt POC that integrates automated membrane extraction, self-supervised learning, and few-shot learning to automate the interpretation of POC diagnostic tests using smartphone cameras in a scalable manner.
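As a rough illustration of the few-shot adaptation step described above (not the authors' implementation), the sketch below classifies test-zone crops from an extracted membrane by comparing their embeddings to class prototypes built from a handful of labeled examples; the `embed` placeholder stands in for a self-supervised encoder and all names are assumptions.

```python
# Minimal sketch (not the AutoAdapt POC code): prototype-based few-shot
# classification of test-zone crops from an extracted membrane image.
import numpy as np

def embed(crop: np.ndarray) -> np.ndarray:
    """Placeholder for a self-supervised encoder; here just a normalized
    flatten so the sketch runs end to end."""
    v = crop.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def prototypes(support: dict) -> dict:
    """Average the embeddings of the few labeled examples per class."""
    return {label: np.mean([embed(c) for c in crops], axis=0)
            for label, crops in support.items()}

def classify(query: np.ndarray, protos: dict) -> str:
    """Assign the query crop to the nearest class prototype."""
    q = embed(query)
    return min(protos, key=lambda lbl: np.linalg.norm(q - protos[lbl]))

# Usage: a handful of labeled zone crops per outcome is enough to adapt.
rng = np.random.default_rng(0)
support = {"positive": [rng.random((32, 32)) for _ in range(5)],
           "negative": [rng.random((32, 32)) for _ in range(5)]}
print(classify(rng.random((32, 32)), prototypes(support)))
```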
Although a range of pharmacological interventions is available, it remains uncertain which treatment for osteoporosis is most effective. This network meta-analysis aimed to compare the efficacy and safety of different drugs in randomized controlled trials (RCTs) for the treatment of postmenopausal osteoporosis. PubMed, EMBASE, MEDLINE, Clinicaltrial.
IEEE Trans Pattern Anal Mach Intell, February 2022
Deep embedding learning plays a key role in learning discriminative feature representations, in which visually similar samples are pulled closer together and dissimilar samples are pushed apart in the low-dimensional embedding space. This paper studies the unsupervised embedding learning problem by learning such a representation without using any category labels. This task faces two primary challenges: mining reliable positive supervision from highly similar fine-grained classes, and generalizing to unseen testing categories.
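A minimal sketch of the general idea (not this paper's exact objective): an instance-discrimination-style contrastive loss in which two augmented views of the same image are pulled together and all other samples in the batch are pushed away.

```python
# Illustrative contrastive loss over L2-normalized embeddings of two views.
import numpy as np

def contrastive_loss(z1: np.ndarray, z2: np.ndarray, temp: float = 0.1) -> float:
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temp                 # (N, N) scaled cosine similarities
    # Row i should score highest against its own second view (column i).
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 64))
print(contrastive_loss(z + 0.05 * rng.normal(size=z.shape), z))
```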
IEEE Trans Pattern Anal Mach Intell, January 2021
We focus on grounding (i.e., localizing or linking) referring expressions in images.
IEEE Trans Image Process, April 2019
Image annotation aims to annotate a given image with a variable number of class labels corresponding to diverse visual concepts. In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation suitable for predicting a diverse set of visual concepts ranging from objects and scenes to abstract concepts, and 2) how to annotate an image with the optimal number of class labels. To address the first issue, we propose a novel multi-scale deep model for extracting rich and discriminative features capable of representing a wide range of visual concepts.
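For the second issue, a generic way to emit a variable number of labels per image is to keep every class whose score clears a calibrated threshold; the sketch below illustrates that strategy only (the paper's own mechanism may differ), and the class names are made up.

```python
# Variable-length annotation by score thresholding (illustrative only).
import numpy as np

def annotate(scores: np.ndarray, class_names: list,
             threshold: float = 0.5, max_labels: int = 10) -> list:
    """scores: per-class probabilities for one image, shape (C,)."""
    order = np.argsort(scores)[::-1][:max_labels]      # best classes first
    return [class_names[i] for i in order if scores[i] >= threshold]

classes = ["person", "beach", "sunset", "dog", "joy"]
print(annotate(np.array([0.92, 0.71, 0.40, 0.05, 0.66]), classes))
```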
To overcome the storage and computation barriers of gigantic-scale data sets, compact hashing has been studied extensively for approximate nearest neighbor search. Despite recent advances, critical design issues remain open: how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters.
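The paper formulates bit selection as a formal optimization; the sketch below only illustrates the intuition with a simple greedy rule, preferring bits that are individually balanced and decorrelated from bits already chosen.

```python
# Greedy hash bit selection from a candidate pool (illustrative heuristic).
import numpy as np

def select_bits(codes: np.ndarray, k: int) -> list:
    """codes: (N, B) matrix of candidate bits in {0, 1}; returns k column indices."""
    n, b = codes.shape
    signed = 2.0 * codes - 1.0                        # map to {-1, +1}
    balance = np.abs(signed.mean(axis=0))             # 0 = perfectly balanced
    chosen = []
    for _ in range(k):
        best, best_score = -1, np.inf
        for j in range(b):
            if j in chosen:
                continue
            corr = max((abs(np.corrcoef(signed[:, j], signed[:, c])[0, 1])
                        for c in chosen), default=0.0)
            score = balance[j] + corr                  # prefer balanced, novel bits
            if score < best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
pool = (rng.random((1000, 16)) > rng.random(16)).astype(int)
print(select_bits(pool, 4))
```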
IEEE Trans Pattern Anal Mach Intell, February 2018
In this paper, we study the challenging problem of categorizing videos according to high-level semantics, such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted to this problem in recent years, most existing works combine multiple video features with simple fusion strategies and neglect inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance.
IEEE Trans Pattern Anal Mach Intell, November 2015
Many binary code embedding schemes have been actively studied recently, since they provide efficient similarity search and compact data representations suitable for handling large-scale image databases. Existing binary code embedding techniques encode high-dimensional data using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, which maps more spatially coherent data points into a binary code than hyperplane-based hashing functions do.
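The core encoding idea can be sketched in a few lines: bit i of a code is 1 if and only if the point falls inside hypersphere i. In the sketch below the pivots are random data points and each radius is the median distance to its pivot; the paper optimizes both, so this is only an illustration of the encoding step.

```python
# Hypersphere-based binary encoding (pivot/radius fitting simplified).
import numpy as np

def fit_spheres(data: np.ndarray, n_bits: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    pivots = data[rng.choice(len(data), size=n_bits, replace=False)]
    dists = np.linalg.norm(data[:, None, :] - pivots[None, :, :], axis=2)
    radii = np.median(dists, axis=0)          # ~half the data inside each sphere
    return pivots, radii

def spherical_hash(x: np.ndarray, pivots: np.ndarray, radii: np.ndarray) -> np.ndarray:
    """Binary code for one point x: inside/outside test against every sphere."""
    return (np.linalg.norm(x - pivots, axis=1) <= radii).astype(np.uint8)

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 32))
pivots, radii = fit_spheres(data, n_bits=16)
print(spherical_hash(data[0], pivots, radii))
```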
Late fusion is one of the most effective approaches to enhance recognition accuracy by combining the prediction scores of multiple classifiers, each trained on a specific feature or model. Existing methods generally use a fixed fusion weight for a classifier over all samples, ignoring the fact that each classifier may perform better or worse on different subsets of samples. To address this issue, we propose a novel sample-specific late fusion (SSLF) method.
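A minimal sketch of the sample-specific idea (not the paper's learning formulation): fuse classifier scores with weights that vary per sample, here set heuristically from each classifier's confidence on that sample.

```python
# Per-sample weighted fusion of classifier scores (heuristic illustration).
import numpy as np

def sample_specific_fusion(scores: np.ndarray) -> np.ndarray:
    """scores: (n_classifiers, n_samples) prediction scores in [0, 1].
    Returns one fused score per sample."""
    confidence = np.abs(scores - 0.5)                  # distance from the decision boundary
    weights = confidence / (confidence.sum(axis=0, keepdims=True) + 1e-8)
    return (weights * scores).sum(axis=0)              # per-sample weighted average

scores = np.array([[0.9, 0.55, 0.2],    # classifier A
                   [0.6, 0.95, 0.4]])   # classifier B
print(sample_specific_fusion(scores))   # A dominates sample 0, B dominates sample 1
```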
Front Neural Circuits, October 2012
Neurons have complex axonal and dendritic morphologies that are the structural building blocks of neural circuits. The traditional approach of capturing these morphological structures through manual reconstruction is time-consuming and partly subjective, so it is important to develop automatic or semi-automatic methods for reconstructing neurons. Here we introduce a fast algorithm for tracking neural morphologies in 3D with simultaneous detection of branching processes.
IEEE Trans Image Process, June 2012
Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency.
We describe a closed-loop brain-computer interface that re-ranks an image database by iterating between user-generated 'interest' scores and computer-vision-generated visual similarity measures. The interest scores are based on decoding the electroencephalographic (EEG) correlates of target detection, attentional shifts and self-monitoring processes, which result from the user paying attention to target images interspersed in rapid serial visual presentation (RSVP) sequences. The highest scored images are passed to a semi-supervised computer vision system that reorganizes the image database accordingly, using a graph-based representation that captures visual similarity between images.
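A minimal sketch of one way such a graph-based re-ranking step can work (an illustration, not the system's exact algorithm): EEG-derived 'interest' scores seed a label-propagation pass over a visual-similarity graph, and the database is re-ranked by the propagated scores.

```python
# Score propagation over a visual-similarity graph, seeded by EEG scores.
import numpy as np

def rerank(similarity: np.ndarray, seed_scores: np.ndarray,
           alpha: float = 0.85, n_iters: int = 50) -> np.ndarray:
    """similarity: (N, N) symmetric visual similarity; seed_scores: EEG scores
    (0 where there is no EEG evidence).  Returns indices sorted by propagated score."""
    # Symmetrically normalize the graph, as in standard label propagation.
    d = similarity.sum(axis=1)
    s = similarity / np.sqrt(np.outer(d, d) + 1e-12)
    f = seed_scores.copy()
    for _ in range(n_iters):
        f = alpha * s @ f + (1 - alpha) * seed_scores
    return np.argsort(-f)

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 8))
sim = np.exp(-np.linalg.norm(feats[:, None] - feats[None, :], axis=2))
seeds = np.zeros(20); seeds[[3, 7]] = 1.0              # images flagged by EEG
print(rerank(sim, seeds)[:5])
```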
Annu Int Conf IEEE Eng Med Biol Soc, March 2011
Our group has been investigating the development of BCI systems for improving information delivery to a user, specifically systems for triaging image content based on what captures a user's attention. One of the systems we have developed uses single-trial EEG scores as noisy labels for a computer vision image retrieval system. In this paper we investigate how the noisy nature of the EEG-derived labels affects the resulting accuracy of the computer vision system.
IEEE Trans Pattern Anal Mach Intell, October 2009
The success of bilinear subspace learning heavily depends on reducing correlations among features along rows and columns of the data matrices. In this work, we study the problem of rearranging elements within a matrix in order to maximize these correlations so that information redundancy in matrix data can be more extensively removed by existing bilinear subspace learning algorithms. An efficient iterative algorithm is proposed to tackle this essentially integer programming problem.
Conventional biomarker discovery focuses mostly on the identification of single markers and thus often has limited success in disease diagnosis and prognosis. This study proposes a method to identify an optimized protein biomarker panel, based on MS studies, for predicting the risk of major adverse cardiac events (MACE) in patients. Since the simplicity and concision required for immunoassay development allow a prediction model with only a very small number of selected discriminative biomarkers, established optimization methods, such as the conventional genetic algorithm (GA), fail in this high-dimensional space.
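To illustrate the general flavor of GA-based panel selection under a small-panel constraint (not the paper's modified GA or its MS data), the sketch below evolves fixed-size feature masks, with a nearest-centroid training accuracy standing in for the real fitness function.

```python
# Toy genetic algorithm for selecting a small biomarker panel.
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Training accuracy of a nearest-centroid rule on the selected features."""
    Xs = X[:, mask.astype(bool)]
    mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - mu1, axis=1) < np.linalg.norm(Xs - mu0, axis=1)).astype(int)
    return float((pred == y).mean())

def ga_select(X: np.ndarray, y: np.ndarray, panel_size: int = 3,
              pop: int = 30, gens: int = 40) -> np.ndarray:
    n_feat = X.shape[1]
    def random_mask():
        m = np.zeros(n_feat, dtype=int)
        m[rng.choice(n_feat, size=panel_size, replace=False)] = 1
        return m
    population = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda m: fitness(m, X, y), reverse=True)
        parents = scored[: pop // 2]                     # keep the fitter half
        children = []
        for p in parents:
            child = p.copy()                             # mutation: swap one marker
            on, off = np.flatnonzero(child == 1), np.flatnonzero(child == 0)
            child[rng.choice(on)], child[rng.choice(off)] = 0, 1
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, X, y))

X = rng.normal(size=(200, 20)); y = (X[:, [2, 5, 11]].sum(axis=1) > 0).astype(int)
print(np.flatnonzero(ga_select(X, y)))                   # ideally recovers markers 2, 5, 11
```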
IEEE Trans Pattern Anal Mach Intell, November 2008
In this work, we systematically study the problem of event recognition in unconstrained news video sequences. We adopt the discriminative kernel-based method for which video clip similarity plays an important role. First, we represent a video clip as a bag of orderless descriptors extracted from all of the constituent frames and apply the earth mover's distance (EMD) to integrate similarities among frames from two clips.
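A minimal sketch of the bag-of-frames similarity idea: when two clips have the same number of frames and uniform frame weights, the earth mover's distance reduces to an optimal one-to-one frame matching, solved here with the Hungarian method; a kernel can then be built from that distance. This is an illustration of the concept, not the paper's exact pipeline.

```python
# EMD-style clip distance between two bags of frame descriptors.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def emd_uniform(frames_a: np.ndarray, frames_b: np.ndarray) -> float:
    """frames_*: (n_frames, dim) descriptors, one row per frame (equal counts)."""
    cost = cdist(frames_a, frames_b)                  # pairwise frame distances
    rows, cols = linear_sum_assignment(cost)          # cheapest frame matching
    return float(cost[rows, cols].mean())

def emd_kernel(frames_a: np.ndarray, frames_b: np.ndarray, gamma: float = 1.0) -> float:
    return float(np.exp(-gamma * emd_uniform(frames_a, frames_b)))

rng = np.random.default_rng(0)
clip1, clip2 = rng.normal(size=(30, 128)), rng.normal(size=(30, 128))
print(emd_kernel(clip1, clip2))
```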
With recent advances in fluorescence microscopy imaging techniques and in gene knockdown by RNA interference (RNAi), genome-scale high-content screening (HCS) has emerged as a powerful approach for systematically identifying all parts of complex biological processes. However, a critical barrier to this success is the lack of efficient and robust methods for automating RNAi image analysis and for quantitatively evaluating gene knockdown effects across huge volumes of HCS data. Facing these opportunities and challenges, we have begun investigating automatic methods toward the development of a fully automated RNAi-HCS system.
Genome-wide, cell-based screens using high-content screening (HCS) techniques and automated fluorescence microscopy generate thousands of high-content images that contain an enormous wealth of cell biological information. Such screens are key to the analysis of basic cell biological principles, such as control of cell cycle and cell morphology. However, these screens will ultimately only shed light on human disease mechanisms and potential cures if the analysis can keep up with the generation of data.