Disruption to language lateralisation has been proposed as a cause of developmental language impairments. In this study, we tested the idea that consistency of lateralisation across different language functions is associated with language ability. A large sample of adults with variable language abilities (N = 67 with a developmental disorder affecting language and N = 37 controls) was recruited. Lateralisation was measured using functional transcranial Doppler sonography (fTCD) for three language tasks that engage different language subprocesses (phonological decision, semantic decision and sentence generation). The whole sample was divided into those with consistent versus inconsistent lateralisation across the three tasks. Language ability (assessed with a battery of standardised tests) was compared between the consistent and inconsistent groups. The results did not show a significant effect of lateralisation consistency on language skills. However, of the 31 individuals showing inconsistent lateralisation, the vast majority (84%) were in the disorder group, with only five controls showing such a pattern, a proportion higher than would be expected by chance. The developmental disorder group also demonstrated weaker correlations between laterality indices across pairs of tasks. In summary, although the data did not support the hypothesis that inconsistent language lateralisation is a major cause of poor language skills, the results suggested that some subtypes of language disorder are associated with inefficient distribution of language functions between the hemispheres. Inconsistent lateralisation could be a causal factor in the aetiology of language disorder, or it may arise in some cases as a consequence of developmental disorder, possibly reflecting compensatory reorganisation.
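The counts in the abstract allow an illustrative check of the "higher than would be expected by chance" claim. The Python sketch below applies a Fisher's exact test to the 2 x 2 table implied by those counts (26 of 67 disorder-group participants versus 5 of 37 controls classified as inconsistently lateralised). The abstract does not state which test the authors actually used, so this is an assumption for illustration only, not the paper's reported analysis.

# Illustrative only: the abstract does not specify the authors' statistical test.
# Compare rates of inconsistent lateralisation between groups using the counts
# reported in the abstract (26/67 disorder vs. 5/37 controls).
from scipy.stats import fisher_exact

table = [[26, 67 - 26],   # disorder group: inconsistent, consistent
         [5, 37 - 5]]     # control group:  inconsistent, consistent
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")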

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7078955
DOI: http://dx.doi.org/10.1111/ejn.14623

Publication Analysis

Top Keywords

language (17); inconsistent lateralisation (16); language functions (12); developmental disorder (12); lateralisation (9); lateralisation language (8); language lateralisation (8); language ability (8); language skills (8); disorder group (8)

Similar Publications

Psychometric properties of the English and Hindi versions of the Brief Inventory of Thriving for use among Indian adolescents.

Sci Rep

December 2024

Faculty of Education, Centre for Wellbeing Science, The University of Melbourne, Level 2, 100 Leicester Street, Carlton, VIC, 3010, Australia.

The Brief Inventory of Thriving (BIT) provides a holistic measure of well-being, but it has only been validated in adults and does not have a Hindi version. The present study investigated the unidimensional structure, internal consistency, convergent/discriminant validity, and criterion validity of both the original English version of the BIT (BIT-E) and its Hindi-translated version (BIT-H) among adolescents in India. We also tested measurement invariance across the two language versions, gender, and academic disciplines.

Accurate classification of logos is a challenging task in image recognition due to variations in logo size, orientation, and background complexity. Deep learning models, such as VGG16, have demonstrated promising results in handling such tasks. However, their performance is highly dependent on optimal hyperparameter settings, whose fine-tuning is both labor-intensive and time-consuming.

With breakthroughs in Natural Language Processing and Artificial Intelligence (AI), the use of Large Language Models (LLMs) in academic research has increased tremendously. Models such as the Generative Pre-trained Transformer (GPT) are used by researchers for literature review, abstract screening, and manuscript drafting. However, these models also present the attendant challenge of producing ethically questionable scientific information.

Evaluating large language models for criterion-based grading from agreement to consistency.

NPJ Sci Learn

December 2024

Department of Psychology, Jeffrey Cheah School of Medicine and Health Sciences, Monash University Malaysia, Bandar Sunway, 475000, Malaysia.

This study evaluates the ability of large language models (LLMs) to deliver criterion-based grading and examines the impact of prompt engineering with detailed criteria on grading. Using well-established human benchmarks and quantitative analyses, we found that even free LLMs can achieve criterion-based grading, demonstrating a detailed understanding of the criteria and underscoring the importance of domain-specific understanding over model complexity. These findings highlight the potential of LLMs to deliver scalable educational feedback.
