Scoring reading parameters: An inter-rater reliability study using the MNREAD chart.

AI Article Synopsis

  • The study aimed to assess how consistently different human raters evaluate the reading performance of visually impaired individuals using the MNREAD acuity chart, and to compare these ratings with results from computer-based algorithms.
  • The research involved measuring the reading performance of 101 low-vision individuals, where seven raters estimated maximum reading speed (MRS) and critical print size (CPS), while two algorithms calculated these metrics automatically.
  • Results showed excellent agreement for MRS among raters and algorithms, but lower and more variable agreement for CPS: less experienced raters were less reliable, and the two computer algorithms agreed with the human raters to different degrees.

Article Abstract

Purpose: First, to evaluate inter-rater reliability when human raters estimate the reading performance of visually impaired individuals using the MNREAD acuity chart. Second, to evaluate the agreement between computer-based scoring algorithms and to compare them with human ratings.

Methods: Reading performance was measured for 101 individuals with low vision, using the Portuguese version of the MNREAD test. Seven raters estimated the maximum reading speed (MRS) and critical print size (CPS) of each individual MNREAD curve. MRS and CPS were also calculated automatically for each curve using two different algorithms: the original standard deviation method (SDev) and non-linear mixed-effects (NLME) modeling. Intra-class correlation coefficients (ICC) were used to estimate absolute agreement between raters and/or algorithms.
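
The abstract does not spell out either algorithm, but the SDev idea can be sketched in a few lines of Python. This is a minimal, assumption-laden simplification (the function name, the placement of the 1.96-SD cutoff, and the use of log speeds are assumptions, not the authors' published code): it treats every print size whose log reading speed falls within 1.96 SD of the curve's mean as part of the plateau, then reports the plateau's mean speed as MRS and its smallest print size as CPS.

    import numpy as np

    def sdev_mrs_cps(print_sizes, speeds):
        # print_sizes: print sizes in logMAR; speeds: reading speed in words/min.
        sizes = np.asarray(print_sizes, dtype=float)
        wpm = np.asarray(speeds, dtype=float)
        read = wpm > 0                      # drop unread sentences before taking logs
        sizes, wpm = sizes[read], wpm[read]
        log_wpm = np.log10(wpm)
        # Hypothetical plateau rule: keep sizes whose log speed is no more
        # than 1.96 SD below the mean log speed of the whole curve.
        cutoff = log_wpm.mean() - 1.96 * log_wpm.std(ddof=1)
        plateau = log_wpm >= cutoff
        mrs = wpm[plateau].mean()           # maximum reading speed (MRS)
        cps = sizes[plateau].min()          # smallest print size on the plateau (CPS)
        return mrs, cps

Published variants of the SDev rule differ in which sentences enter the mean/SD and in exactly where the cutoff sits, so this sketch should be checked against the original description before serious use.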

Results: Absolute agreement between raters was 'excellent' for MRS (ICC = 0.97; 95% CI [0.96, 0.98]) and 'moderate' to 'good' for CPS (ICC = 0.77; 95% CI [0.69, 0.83]). For CPS, inter-rater reliability was poorer among less experienced raters (ICC = 0.70; 95% CI [0.57, 0.80]) when compared to experienced ones (ICC = 0.82; 95% CI [0.76, 0.88]). Absolute agreement between the two algorithms was 'excellent' for MRS (ICC = 0.96; 95% CI [0.91, 0.98]). For CPS, the best possible agreement was found for CPS defined as the print size sustaining 80% of MRS (ICC = 0.77; 95% CI [0.68, 0.84]). Absolute agreement between raters and automated methods was 'excellent' for MRS (ICC = 0.96; 95% CI [0.88, 0.98] for SDev; ICC = 0.97; 95% CI [0.95, 0.98] for NLME). For CPS, absolute agreement between raters and SDev ranged from 'poor' to 'good' (ICC = 0.66; 95% CI [0.30, 0.80]), while agreement between raters and NLME was 'good' (ICC = 0.83; 95% CI [0.76, 0.88]).
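
The abstract does not state which ICC form was used; assuming the common two-way random-effects, absolute-agreement, single-rater coefficient (Shrout and Fleiss ICC(2,1)), the statistic can be computed from an n-targets-by-k-raters matrix as follows. This is a minimal sketch under that assumption, not the authors' analysis code.

    import numpy as np

    def icc_2_1(ratings):
        # ratings: (n_targets, k_raters) array, e.g. 101 MNREAD curves x 7 raters.
        x = np.asarray(ratings, dtype=float)
        n, k = x.shape
        grand = x.mean()
        row_dev = x.mean(axis=1) - grand                # per-target deviations
        col_dev = x.mean(axis=0) - grand                # per-rater deviations
        ms_rows = k * np.sum(row_dev ** 2) / (n - 1)    # between-targets mean square
        ms_cols = n * np.sum(col_dev ** 2) / (k - 1)    # between-raters mean square
        ss_err = (np.sum((x - grand) ** 2)
                  - k * np.sum(row_dev ** 2)
                  - n * np.sum(col_dev ** 2))
        ms_err = ss_err / ((n - 1) * (k - 1))           # residual mean square
        # Shrout & Fleiss ICC(2,1): absolute agreement, single rater.
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

With complete data this matches what packages such as pingouin report as "ICC2"; the bracketed confidence intervals above require the corresponding F-distribution formulas and are omitted here.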

Conclusion: For MRS, inter-rater reliability is excellent, even considering the possibility of noisy and/or incomplete data collected in low-vision individuals. For CPS, inter-rater reliability is lower. This may be problematic, for instance in the context of multisite investigations or follow-up examinations. The NLME method showed better agreement with the raters than the SDev method for both reading parameters. Establishing consensus guidelines for dealing with ambiguous curves may help improve reliability. While the exact definition of CPS should be chosen on a case-by-case basis depending on the clinician's or researcher's motivations, the evidence suggests that estimating CPS as the smallest print size sustaining about 80% of MRS would increase inter-rater reliability.
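
The 80%-of-MRS definition recommended above is straightforward to operationalize. A minimal sketch under the same assumptions as before (hypothetical function name; print sizes in logMAR, so the numerically smallest size is the smallest print):

    import numpy as np

    def cps_at_80(print_sizes, speeds, mrs):
        # Smallest print size whose reading speed still reaches 80% of MRS.
        sizes = np.asarray(print_sizes, dtype=float)
        wpm = np.asarray(speeds, dtype=float)
        return sizes[wpm >= 0.8 * mrs].min()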

Full-text sources:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6555504
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0216775

Publication Analysis

Top Keywords

inter-rater reliability: 24
agreement raters: 24
absolute agreement: 20
print size: 12
'excellent' icc: 12
icc: 11
cps: 10
raters: 9
agreement: 9
reading parameters: 8
