Estimating reliable geometric model parameters from data contaminated by severe outliers is a fundamental and important task in computer vision. This paper addresses the problem of sampling high-quality subsets and selecting model instances to estimate parameters from multi-structural data. To this end, we propose an effective method called Latent Semantic Consensus (LSC). The principle of LSC is to preserve the latent semantic consensus in both data points and model hypotheses. Specifically, LSC formulates the model fitting problem into two latent semantic spaces, based on data points and model hypotheses respectively. LSC then explores the distributions of points in these two latent semantic spaces to remove outliers, generate high-quality model hypotheses, and effectively estimate model instances. Owing to its deterministic fitting nature and efficiency, LSC provides consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting. Compared with several state-of-the-art fitting methods, LSC achieves significant gains in both accuracy and speed on synthetic data and real images.
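As a rough illustration of the kind of pipeline the abstract describes, the sketch below fits two line structures in 2D: it generates hypotheses from random minimal subsets, builds a soft preference matrix between points and hypotheses, embeds both into a low-dimensional latent space via truncated SVD, and flags outliers by their small latent norm. Everything here (the Gaussian preference kernel, the scale sigma, the number of structures k, the norm threshold) is an illustrative choice of ours, not the paper's actual LSC formulation.

```python
import numpy as np

def fit_line(sample):
    # Line through two points, as (a, b, c) with a*x + b*y + c = 0, unit-normalized.
    (x1, y1), (x2, y2) = sample
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    n = np.hypot(a, b) + 1e-12
    return np.array([a, b, c]) / n

def point_line_residual(points, line):
    # Orthogonal distance of every point to the line.
    return np.abs(points @ line[:2] + line[2])

rng = np.random.default_rng(0)
# Synthetic multi-structural data: two noisy line segments plus gross outliers.
t = rng.uniform(-1, 1, 100)
inliers1 = np.c_[t, 0.5 * t + 0.02 * rng.standard_normal(100)]
inliers2 = np.c_[t, -t + 1 + 0.02 * rng.standard_normal(100)]
outliers = rng.uniform(-1, 2, (60, 2))
X = np.vstack([inliers1, inliers2, outliers])

# 1) Hypothesis generation from random minimal subsets (2 points per line).
H = np.array([fit_line(X[rng.choice(len(X), 2, replace=False)]) for _ in range(300)])

# 2) Soft preference matrix: inlier score of each point under each hypothesis.
R = np.stack([point_line_residual(X, h) for h in H], axis=1)  # (n_points, n_hyps)
sigma = 0.05                                                  # assumed inlier scale
P = np.exp(-(R / sigma) ** 2)

# 3) Latent embedding of points and hypotheses via truncated SVD of P.
U, S, Vt = np.linalg.svd(P, full_matrices=False)
k = 2                                # assumed number of structures
pts_latent = U[:, :k] * S[:k]        # data points in the latent space
hyp_latent = Vt[:k, :].T * S[:k]     # model hypotheses in the latent space

# 4) Outliers have near-zero preference rows, hence small latent norm.
norms = np.linalg.norm(pts_latent, axis=1)
keep = norms > 0.3 * np.median(norms)
print(f"kept {keep.sum()} of {len(X)} points as likely inliers")
```

The surviving points (and, symmetrically, the hypotheses with large latent norm) could then be clustered per structure; that final selection step is where a method like LSC would differ from this naive sketch.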

Source: http://dx.doi.org/10.1109/TPAMI.2024.3376731

Publication Analysis

Top Keywords

latent semantic: 20
model fitting: 16
semantic consensus: 12
model hypotheses: 12
model: 10
geometric model: 8
model instances: 8
data points: 8
points model: 8
hypotheses lsc: 8

Similar Publications

Background: Meeting the information and decision-support needs required to embed a patient-centred strategy is challenging, as several haemodialysis vascular access strategies are possible, with significant differences in the short- and long-term outcomes of the potential treatment options. We aimed to explore and describe stakeholder perspectives on information needs when making decisions about vascular access (VA) for haemodialysis.

Methods: We performed thematic analysis of seven focus group discussions (six online, one in person), including transcripts, post-it phrases and text responses, with 14 patients and 12 vascular access professionals (four nephrologists, three surgeons and five nurses: vascular access nurse specialists and education and dialysis nurses).


Individuals with "agrammatic" receptive aphasia have long been known to rely on semantic plausibility rather than syntactic cues when interpreting sentences. In contrast to early interpretations of this pattern as indicative of a deficit in syntactic knowledge, a recent proposal views agrammatic comprehension as a case of "noisy-channel" language processing with an increased expectation of noise in the input relative to healthy adults. Here, we investigate the nature of the noise model in aphasia and whether it is adapted to the statistics of the environment.
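As a toy illustration of the noisy-channel account, Bayes' rule trades a plausibility prior off against a noise likelihood; under a higher expected noise rate, the implausible literal parse loses out to the plausible correction. The sentences and probabilities below are invented for demonstration and are not taken from the study.

```python
# Noisy-channel comprehension: P(intended | perceived) is proportional to
# P(intended) * P(perceived | intended). All numbers are illustrative.
def posterior(prior, likelihood):
    joint = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

prior = {"dog bit man": 0.9, "man bit dog": 0.1}   # semantic plausibility

# Likelihood of perceiving "man bit dog" given each intended message,
# under a low vs. high expectation of noise (e.g., swapped word order).
low_noise  = {"dog bit man": 0.01, "man bit dog": 0.99}
high_noise = {"dog bit man": 0.30, "man bit dog": 0.70}

print(posterior(prior, low_noise))   # syntax wins: "man bit dog" most probable
print(posterior(prior, high_noise))  # plausibility wins: "dog bit man" takes over
```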


In short-term ordered recall tasks, phonological similarity impedes item and order recall, while semantic similarity benefits item recall with a weak or null effect on order recall. Ishiguro and Saito recently suggested that these contradictory findings were due to an inadequate assessment of semantic similarity. They proposed a novel measure of semantic similarity based on the distance between items in a three-dimensional space composed of the semantic dimensions of valence, arousal, and dominance.
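For concreteness, a minimal sketch of such a measure: Euclidean distance between words in the three-dimensional valence-arousal-dominance (VAD) space. The coordinates below are made up for illustration and are not Ishiguro and Saito's actual norms.

```python
import math

# Illustrative VAD coordinates (valence, arousal, dominance), e.g. on 1-9 scales.
vad = {
    "happy":  (8.2, 6.5, 7.0),
    "glad":   (7.5, 5.4, 6.5),
    "afraid": (2.0, 6.9, 3.0),
}

def semantic_distance(w1, w2):
    # Euclidean distance in the 3D valence-arousal-dominance space.
    return math.dist(vad[w1], vad[w2])

print(semantic_distance("happy", "glad"))    # small: semantically similar
print(semantic_distance("happy", "afraid"))  # large: semantically dissimilar
```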


Multi-label zero-shot learning (ML-ZSL) strives to recognize all objects in an image, regardless of whether they are present in the training data. Recent methods incorporate an attention mechanism to locate labels in the image and generate class-specific semantic information. However, the attention mechanism built on visual features treats label embeddings equally in the prediction score, leading to severe semantic ambiguity.
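A minimal numpy sketch of the class-specific alternative this line of work points toward: each label embedding computes its own attention over the spatial visual features, so prediction scores are no longer tied to one shared attention map. The dimensions, the scaled dot-product form, and the scoring rule are illustrative assumptions, not a specific published model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
V = rng.standard_normal((49, 512))   # visual features, e.g. a 7x7 grid flattened
L = rng.standard_normal((20, 512))   # label (word) embeddings for 20 classes

# Class-specific attention: each label embedding queries the spatial features,
# so every class pools its own evidence instead of sharing one attention map.
attn = softmax(L @ V.T / np.sqrt(512), axis=1)   # (20, 49) attention per label
class_feats = attn @ V                           # (20, 512) class-specific features
scores = np.sum(class_feats * L, axis=1)         # per-class prediction scores
```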


Clustering short text is a difficult problem, owing to the low word co-occurrence between short text documents. This work shows that large language models (LLMs) can overcome the limitations of traditional clustering approaches by generating embeddings that capture the semantic nuances of short text. In this study, clusters are found in the embedding space using Gaussian mixture modelling.
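A minimal sketch of that pipeline, with random vectors standing in for the LLM embeddings (in practice these would come from an embedding model): fit a Gaussian mixture in the embedding space and read off hard and soft cluster assignments.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-ins for LLM sentence embeddings of short texts: two synthetic "topics".
emb = np.vstack([
    rng.normal(0.0, 0.3, (50, 64)),   # pretend topic A
    rng.normal(1.0, 0.3, (50, 64)),   # pretend topic B
])

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(emb)    # hard cluster assignments in embedding space
probs = gmm.predict_proba(emb)   # soft memberships, useful for ambiguous texts
print(np.bincount(labels))
```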

