Publications by authors named "J. Jessy Li"

Background: Primary progressive aphasia (PPA) is a language‐led dementia associated with underlying Alzheimer’s disease (AD) or frontotemporal lobar degeneration pathology. As part of the Alzheimer’s spectrum, logopenic (lv) PPA may be particularly difficult to distinguish from amnestic AD, due to overlapping clinical features. Analysis of linguistic and acoustic variables derived from connected speech has shown promise as a diagnostic tool for differentiating dementia subtypes.
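
The abstract does not list the study's concrete feature set. As an illustrative sketch only, the snippet below extracts two crude acoustic markers often used in connected-speech analysis, a pause proportion and MFCC summary statistics, using the librosa library; the file name, silence threshold, and feature choices are assumptions, not the study's protocol.

```python
# Illustrative sketch only: crude acoustic features from a connected-speech
# recording. The threshold, file name, and feature choices are assumptions,
# not the study's actual protocol.
import librosa

def acoustic_features(wav_path, top_db=30):
    y, sr = librosa.load(wav_path, sr=16000)
    # Non-silent intervals; the gaps between them approximate pauses.
    intervals = librosa.effects.split(y, top_db=top_db)
    voiced = sum(end - start for start, end in intervals)
    pause_proportion = 1.0 - voiced / len(y)
    # Summary statistics over MFCCs, a standard spectral representation.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return {
        "pause_proportion": float(pause_proportion),
        "duration_s": len(y) / sr,
        **{f"mfcc{i}_mean": float(m) for i, m in enumerate(mfcc.mean(axis=1))},
    }

features = acoustic_features("picture_description.wav")  # hypothetical file
```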

Large language models, particularly GPT-3, are able to produce high-quality summaries of general-domain news articles in few- and zero-shot settings. However, it is unclear whether such models are similarly capable in more specialized, high-stakes domains such as biomedicine. In this paper, we enlist domain experts (individuals with medical training) to evaluate summaries of biomedical articles generated by GPT-3, given zero supervision. We consider both single- and multi-document settings.
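
As a sketch of what "zero supervision" means in practice: the article text is sent to the model with only a natural-language instruction, no in-context examples and no fine-tuning. The snippet uses the current OpenAI Python SDK; the model name and prompt wording are placeholders (the paper's experiments used GPT-3).

```python
# Minimal zero-shot summarization sketch using the OpenAI Python SDK (v1+).
# The model name and prompt are illustrative placeholders; the paper's
# experiments used GPT-3 with no task-specific supervision.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": "Summarize the following biomedical article "
                       "for a clinician:\n\n" + article_text,
        }],
    )
    return response.choices[0].message.content
```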

Automated text simplification models aim to make input texts more readable. Such methods have the potential to make complex information accessible to a wider audience, e.g.
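
As one concrete way to operationalize "more readable", the sketch below scores a sentence pair with standard surface readability formulas via the textstat package; the metric choice and example sentences are illustrative assumptions, not the paper's evaluation.

```python
# Sketch: scoring readability before and after simplification with the
# textstat package. The metrics and sentences are assumptions chosen for
# illustration, not the evaluation used in the paper.
import textstat

original = ("Myocardial infarction results from occlusion of a coronary "
            "artery, causing ischemic necrosis of cardiac tissue.")
simplified = ("A heart attack happens when a blocked artery cuts off "
              "blood to the heart.")

for name, text in [("original", original), ("simplified", simplified)]:
    print(name,
          "| Flesch reading ease:", textstat.flesch_reading_ease(text),
          "| grade level:", textstat.flesch_kincaid_grade(text))
```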

We consider the problem of learning to simplify medical texts. This is important because most reliable, up-to-date information in biomedicine is dense with jargon and thus practically inaccessible to the lay audience. Furthermore, manual simplification does not scale to the rapidly growing body of biomedical literature, motivating the need for automated approaches.
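
The paper's model is not reproduced here; as a sketch, automated simplification is commonly framed as sequence-to-sequence generation. The snippet below uses the Hugging Face transformers pipeline with a hypothetical fine-tuned checkpoint (the model name is a placeholder, not a released artifact).

```python
# Sketch: medical text simplification framed as sequence-to-sequence
# generation with Hugging Face transformers. The checkpoint name is a
# hypothetical placeholder for a model fine-tuned on (technical, plain)
# sentence pairs; it is not the paper's released model.
from transformers import pipeline

simplifier = pipeline(
    "text2text-generation",
    model="your-org/bart-medical-simplifier",  # hypothetical checkpoint
)

technical = ("Patients exhibited postprandial hyperglycemia refractory "
             "to first-line pharmacotherapy.")
print(simplifier(technical, max_length=60)[0]["generated_text"])
```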

Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant.
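
To make task (1) concrete, the sketch below implements only the naive token-level majority-vote baseline over aligned annotator sequences; the paper's actual contribution, the Hidden Markov Model variant that models annotator noise, is not implemented here.

```python
# Sketch: token-level majority vote over multiple annotators' label
# sequences, the naive baseline for aggregation. The paper's method is an
# HMM variant that models annotator noise; that is not implemented here.
from collections import Counter

def majority_vote(annotations):
    """annotations: list of label sequences, one per annotator,
    all aligned to the same tokens."""
    assert len({len(seq) for seq in annotations}) == 1, "sequences must align"
    consensus = []
    for position_labels in zip(*annotations):
        label, _count = Counter(position_labels).most_common(1)[0]
        consensus.append(label)
    return consensus

# Three annotators labeling the same 5-token sentence (BIO tags).
crowd = [
    ["O", "B-PER", "I-PER", "O", "O"],
    ["O", "B-PER", "O",     "O", "O"],
    ["O", "B-PER", "I-PER", "O", "B-LOC"],
]
print(majority_vote(crowd))  # ['O', 'B-PER', 'I-PER', 'O', 'O']
```

Note that a per-token vote can produce inconsistent spans (e.g., an I- tag with no preceding B- tag), which is one motivation for sequence-aware aggregation such as the paper's HMM variant.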
