Visual and interactive machine learning (IML) systems are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains challenging to align interactions with visual marks to a user's intent for steering machine learning models. We explore using data and visual design probes to elicit users' desired interactions for steering ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.
DOI: http://dx.doi.org/10.1109/TVCG.2023.3322898