Publications by authors named "Tuan Manh Lai"

Pretrained language models (PLMs) have demonstrated strong performance on many natural language processing (NLP) tasks. Despite this success, PLMs are typically pretrained only on unstructured free text, without leveraging the structured knowledge bases that are readily available for many domains, especially scientific ones. As a result, they may not achieve satisfactory performance on knowledge-intensive tasks such as those in biomedical NLP.

Deep learning models have become the state of the art for many tasks, from text sentiment analysis to facial image recognition. However, understanding why certain models perform better than others, or how one model learns differently from another, is often difficult yet critical for increasing their effectiveness, improving prediction accuracy, and enabling fairness. Traditional metrics for comparing models' efficacy, such as accuracy, precision, and recall, provide a quantitative view of performance, but they hide the qualitative intricacies of why one model outperforms another.
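
For instance, the quantitative metrics named above can be computed directly from a model's predictions. Below is a minimal Python sketch, assuming scikit-learn is available; the label and prediction arrays are illustrative placeholders, not data from the papers listed here:

    # Illustrative sketch: comparing two hypothetical models with standard
    # quantitative metrics. All arrays below are placeholder data.
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]        # ground-truth labels
    model_a_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # predictions from model A
    model_b_pred = [1, 1, 1, 1, 0, 1, 0, 0]  # predictions from model B

    for name, y_pred in [("model A", model_a_pred), ("model B", model_b_pred)]:
        print(
            f"{name}: accuracy={accuracy_score(y_true, y_pred):.2f}, "
            f"precision={precision_score(y_true, y_pred):.2f}, "
            f"recall={recall_score(y_true, y_pred):.2f}"
        )

Two models can score similarly on such aggregates while disagreeing on very different subsets of examples; those per-example differences are exactly the qualitative intricacies that summary numbers conceal.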
