In this paper, we propose an anycost network quantization method for efficient image super-resolution under variable resource budgets. Conventional quantization approaches learn discrete network parameters for deployment under a fixed complexity constraint, whereas image super-resolution networks are usually deployed on mobile devices whose resource budgets change frequently with battery level or computing chip. Exhaustively optimizing a quantized network for each complexity constraint therefore incurs unacceptable training cost. Instead, we construct a hyper-network whose parameters adapt efficiently to different resource budgets with negligible finetuning cost, so that image super-resolution networks can feasibly be deployed on diversified devices with variable resource budgets. Specifically, we dynamically search the optimal bitwidth for each patch in convolution according to the feature maps and the complexity constraint, aiming at the best efficiency-accuracy trade-off for image super-resolution given the resource budget. To obtain a hyper-network that can be efficiently adapted to different bitwidth settings, we actively sample patch-wise bitwidths during training and adaptively ensemble gradients from the hyper-network at different precisions for faster convergence and higher generalization ability. Compared with existing quantization methods, experimental results demonstrate that our method significantly reduces the cost of adapting models to new resource budgets while achieving comparable efficiency-accuracy trade-offs.
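To make the patch-wise bitwidth idea concrete, the following is a minimal, hypothetical sketch of allocating bitwidths to patches under an average-bitwidth budget. The function name, the greedy strategy, and the per-patch "importance" scores are illustrative assumptions, not the paper's actual algorithm (which searches bitwidths from feature maps inside the network):

```python
def select_bitwidths(importance, candidates=(2, 4, 8), avg_budget=4.0):
    """Greedily assign a bitwidth to each patch so that the mean
    bitwidth stays within avg_budget, giving higher precision to
    more important patches (e.g. texture-rich regions)."""
    n = len(importance)
    # Start every patch at the lowest candidate precision.
    bits = [min(candidates)] * n
    # Remaining bit budget beyond the all-low-precision baseline.
    budget = avg_budget * n - sum(bits)
    # Visit patches from most to least important and upgrade their
    # precision step by step while the budget allows it.
    order = sorted(range(n), key=lambda i: -importance[i])
    for i in order:
        for b in sorted(candidates):
            if b > bits[i] and (b - bits[i]) <= budget:
                budget -= b - bits[i]
                bits[i] = b
    return bits


# Four patches; the first (e.g. a highly textured region) is most important.
bits = select_bitwidths([0.9, 0.1, 0.5, 0.2], avg_budget=4.0)
# The most important patch gets the highest precision, and the mean
# bitwidth never exceeds the budget.
```

A real implementation would derive the importance signal from intermediate feature maps and train the hyper-network under many sampled bitwidth configurations, so that any such allocation can be served after only negligible finetuning.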
DOI: http://dx.doi.org/10.1109/TIP.2024.3368959