In Few-Shot Learning (FSL), the objective is to correctly recognize samples from novel classes given only a few labeled samples per class. Existing FSL methods primarily learn transferable knowledge from base classes by maximizing the information between feature representations and their corresponding labels. However, this approach can suffer from "supervision collapse", in which the representations become biased towards the base classes. In this paper, we address this issue by preserving the intrinsic structure of the data and learning a model that generalizes to the novel classes. Following the InfoMax principle, our approach maximizes two types of mutual information (MI): between samples and their feature representations, and between feature representations and their class labels. This strikes a balance between discrimination (capturing class-specific information) and generalization (capturing characteristics common across classes) in the feature representations. To achieve this, we adopt a unified framework that perturbs the feature embedding space using two low-bias MI estimators: the first maximizes the MI between a pair of intra-class samples, while the second maximizes the MI between a sample and its augmented views. This framework effectively combines knowledge distillation between class-wise pairs with enlarged diversity in the feature representations. Extensive experiments on popular FSL benchmarks show that our approach performs comparably to state-of-the-art methods; for example, it achieves 69.53% accuracy on miniImageNet and 77.06% on CIFAR-FS for the 5-way 1-shot task.
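The abstract describes the two MI estimators only at a high level. As a minimal illustrative sketch, here is the InfoNCE bound, a standard low-bias lower-bound estimator of mutual information between paired embeddings; this is an assumed stand-in for exposition, not necessarily the paper's exact estimator. The same function covers both cases by changing what counts as a positive pair: two intra-class samples, or a sample and its augmented view.

```python
import numpy as np

def info_nce_lower_bound(z1, z2, temperature=0.1):
    """InfoNCE lower bound on MI between paired embeddings.

    z1, z2: (N, D) arrays of L2-normalized embeddings where
    (z1[i], z2[i]) is a positive pair (e.g. a sample and its
    augmented view, or two samples of the same class).
    """
    # Similarity logits between every cross-view pair.
    logits = z1 @ z2.T / temperature                      # (N, N)
    # Row-wise log-softmax; the diagonal entries are the positives.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nce_loss = -np.mean(np.diag(log_probs))
    # InfoNCE bound: I(z1; z2) >= log N - loss.
    return np.log(len(z1)) - nce_loss

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
a /= np.linalg.norm(a, axis=1, keepdims=True)
# Augmented views: small perturbation of each sample, re-normalized.
views = a + 0.05 * rng.normal(size=a.shape)
views /= np.linalg.norm(views, axis=1, keepdims=True)

mi_paired = info_nce_lower_bound(a, views)               # correlated pairs
mi_random = info_nce_lower_bound(a, np.roll(views, 3, axis=0))  # mismatched pairs
```

Correlated pairs yield a bound near log N, while mismatched pairs drive it towards zero or below, which is why minimizing the NCE loss increases the estimated MI between the views.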
DOI: http://dx.doi.org/10.1109/TIP.2023.3328475