Publications by authors named "Qinquan Gao"

Purpose: The aim of this study was to investigate the causal relationship between low-density lipoprotein cholesterol (LDL-C) and five cancers (breast, cervical, thyroid, prostate and colorectal) using the Mendelian Randomization (MR) method, with a view to revealing the potential role of LDL-C in the development of these cancers.

Methods: We used genetic variant and disease data from the Genome-Wide Association Study (GWAS) database to assess the causal relationship between LDL-C and each cancer using Mendelian randomization methods such as inverse variance weighting (IVW) and MR-Egger. Specifically, we selected proprotein convertase subtilisin/kexin type 9 (PCSK9) and 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR), genes associated with LDL-C levels, as instrumental variables, extracted the corresponding single nucleotide polymorphism (SNP) data, and analysed the associations of these SNPs with the five cancers.
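A minimal sketch of the inverse-variance-weighted estimator referred to above, assuming hypothetical per-SNP summary statistics (the toy arrays and the `ivw_estimate` helper are illustrative, not data or code from the study):

```python
import numpy as np

# Illustrative fixed-effect IVW Mendelian randomization estimate.
# beta_exp / beta_out are hypothetical per-SNP effects of the instruments
# (e.g. PCSK9/HMGCR variants) on LDL-C and on a cancer outcome;
# se_out are the standard errors of the outcome effects.
def ivw_estimate(beta_exp, beta_out, se_out):
    """Return the IVW causal estimate and its standard error."""
    beta_exp = np.asarray(beta_exp, dtype=float)
    beta_out = np.asarray(beta_out, dtype=float)
    w = 1.0 / np.asarray(se_out, dtype=float) ** 2        # inverse-variance weights
    est = np.sum(w * beta_exp * beta_out) / np.sum(w * beta_exp ** 2)
    se = np.sqrt(1.0 / np.sum(w * beta_exp ** 2))
    return est, se

# Toy numbers only, not results from the study.
est, se = ivw_estimate([0.12, 0.08, 0.15], [0.010, 0.006, 0.014], [0.02, 0.03, 0.02])
print(f"IVW estimate: {est:.3f} +/- {se:.3f}")
```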


Despite the potential benefits of data augmentation for mitigating data insufficiency, traditional augmentation methods primarily rely on prior intra-domain knowledge. Meanwhile, advanced generative adversarial networks (GANs) generate inter-domain samples of limited variety. As a result, these previous methods contribute little to describing the decision boundary for binary classification.


Pathological examination of nasopharyngeal carcinoma (NPC) is indispensable for diagnosis, guiding clinical treatment, and judging prognosis. Traditional, fully supervised NPC diagnosis algorithms require manual delineation of regions of interest on gigapixel whole slide images (WSIs), which is laborious and often biased. In this paper, we propose a weakly supervised framework based on the Tokens-to-Token Vision Transformer (WS-T2T-ViT) for accurate NPC classification with only a slide-level label.
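As a rough illustration of slide-level weak supervision (not the authors' WS-T2T-ViT architecture), the sketch below aggregates hypothetical patch embeddings with attention pooling so that only a slide-level label is needed for training; the `SlideAttentionPool` module and its dimensions are assumptions:

```python
import torch
import torch.nn as nn

# Patch embeddings (e.g. from a transformer backbone) are pooled with learned
# attention weights into one slide-level feature, so the loss only needs a
# slide-level label. Purely illustrative, not the paper's exact model.
class SlideAttentionPool(nn.Module):
    def __init__(self, embed_dim=384, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(embed_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, patch_tokens):                            # (num_patches, embed_dim)
        weights = torch.softmax(self.attn(patch_tokens), dim=0) # (num_patches, 1)
        slide_feat = (weights * patch_tokens).sum(dim=0)        # (embed_dim,)
        return self.head(slide_feat)                            # slide-level logits

tokens = torch.randn(500, 384)   # hypothetical patch embeddings of one WSI
logits = SlideAttentionPool()(tokens)
```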


The goal of blind image super-resolution (BISR) is to recover the corresponding high-resolution image from a given low-resolution image with unknown degradation. Prior research has primarily focused on utilizing the degradation kernel as prior knowledge to recover the high-frequency components of the image. However, it has overlooked the structural prior information within the image itself, resulting in unsatisfactory recovery of textures with strong self-similarity.
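For context, blind super-resolution methods typically assume a degradation model in which the low-resolution image is a blurred, downsampled, and noise-corrupted version of the high-resolution image; the sketch below illustrates that model with a placeholder kernel, scale factor, and noise level, none of which are taken from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

# Common BISR degradation assumption: LR = downsample(HR * k) + noise,
# where the kernel k is unknown at test time.
def degrade(hr, kernel, scale=4, noise_sigma=0.01):
    blurred = convolve(hr, kernel, mode="reflect")         # HR convolved with kernel
    lr = blurred[::scale, ::scale]                         # downsample by the scale factor
    return lr + noise_sigma * np.random.randn(*lr.shape)   # additive noise

hr = np.random.rand(128, 128)            # toy high-resolution image
kernel = np.ones((5, 5)) / 25.0          # placeholder blur kernel
lr = degrade(hr, kernel)                 # simulated low-resolution observation
```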


Medical image segmentation is a crucial and intricate step in medical image processing and analysis. With the advancement of artificial intelligence, deep learning techniques have been widely used in recent years for medical image segmentation. One such technique is the U-Net framework, based on U-shaped convolutional neural networks (CNNs), together with its variants.
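A compact sketch of the U-shaped encoder-decoder idea behind U-Net, with skip connections concatenated between corresponding levels; the `TinyUNet` module, its depth, and its channel sizes are illustrative rather than any published variant:

```python
import torch
import torch.nn as nn

# Two-level U-shaped encoder-decoder with skip connections (illustrative only).
def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.pool = nn.MaxPool2d(2)
        self.out = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

logits = TinyUNet()(torch.randn(1, 1, 64, 64))   # per-pixel class logits
```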


Nucleic acid testing is currently the gold standard for SARS-CoV-2 detection, while SARS-CoV-2 antigen-detection rapid diagnostic tests (RDTs) are an important adjunct. RDTs can be widely used as self-test tools in community or regional screening, and their results may need to be verified by healthcare authorities. However, manual verification of RDT results is time-consuming, and existing object detection algorithms usually suffer from high model complexity and computational cost, making them difficult to deploy.

Article Synopsis
  • A thyroid nodule is a lump in the thyroid gland that can indicate early thyroid cancer, and accurate segmentation of these nodules in ultrasound images is crucial for diagnosis and treatment.
  • The authors developed a new framework that includes a super-resolution reconstruction network to enhance image quality and an N-shape network for effective segmentation, utilizing advanced techniques like atrous spatial pyramid pooling and a parallel atrous convolution module.
  • Their method demonstrated superior performance on the UTNI-2021 dataset, achieving high metrics such as a Dice value of 91.9% and outperforming existing techniques in ultrasound image segmentation.
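The Dice value cited above is the standard overlap metric between a predicted mask and the ground truth; a minimal sketch of how it is computed on toy binary masks (not data from the paper):

```python
import numpy as np

# Dice similarity coefficient: values toward 1.0 (100%) indicate better overlap
# between the predicted and reference segmentation masks.
def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1   # toy prediction
gt = np.zeros((64, 64), dtype=np.uint8);   gt[12:42, 12:42] = 1     # toy ground truth
print(f"Dice = {dice(pred, gt):.3f}")
```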

The application of deep learning in the medical field has made continuous breakthroughs in recent years. Based on convolutional neural networks (CNNs), the U-Net framework has become the benchmark for medical image segmentation. However, this framework cannot fully learn global information and long-range semantic information.


Gastric cancer is the third most common cause of cancer-related death in the world. Human epidermal growth factor receptor 2 (HER2)-positive disease is an important subtype of gastric cancer, and HER2 status provides significant diagnostic information for gastric cancer pathologists. However, pathologists usually assign HER2 scores with a semi-quantitative assessment, repeatedly comparing hematoxylin and eosin (H&E) whole slide images (WSIs) with the corresponding HER2 immunohistochemical WSIs one by one under the microscope.


Automated thyroid nodule classification in ultrasound images is an important way to detect thyroid nodules and to make a more accurate diagnosis. In this paper, we propose a novel deep convolutional neural network (CNN) model, called n-ClsNet, for thyroid nodule classification. Our model consists of a multi-scale classification layer, multiple skip blocks, and a hybrid atrous convolution (HAC) block.
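As a rough sketch of the atrous (dilated) convolution idea behind a block like HAC, parallel convolutions with different dilation rates enlarge the receptive field at several scales and their outputs are fused; the rates and channel counts below are assumptions, not the exact n-ClsNet configuration:

```python
import torch
import torch.nn as nn

# Parallel dilated 3x3 convolutions with different rates, concatenated and
# fused by a 1x1 convolution. Illustrative only.
class HybridAtrousBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(self.fuse(feats))

y = HybridAtrousBlock()(torch.randn(1, 64, 56, 56))   # same spatial size as the input
```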


Alzheimer's disease (AD) is a progressive neurodegenerative disease, and mild cognitive impairment (MCI) is a transitional stage between normal control (NC) and AD. Multiclass classification of AD is a difficult task because neighboring groups share many similarities. Classification performance can be improved by using multimodal data, but the gain is limited when the multimodal data are fused inefficiently.


Combining multi-modality data for brain disease diagnosis, such as Alzheimer's disease (AD), commonly leads to better performance than using a single modality. However, it is still challenging to train a multi-modality model, since complete data covering all modalities are difficult to obtain in clinical practice. Generally speaking, it is difficult to obtain both magnetic resonance imaging (MRI) and positron emission tomography (PET) images for a single patient.


Identifying patients with mild cognitive impairment (MCI) who are at high risk of progressing to Alzheimer's disease (AD) is crucial for early treatment of AD. However, it is difficult to predict the cognitive states of patients. This study developed an extreme learning machine (ELM)-based grading method to efficiently fuse multimodal data and predict MCI-to-AD conversion.
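A minimal sketch of an extreme learning machine, in which random hidden-layer weights stay fixed and only the output weights are solved in closed form; the feature dimensions and toy data below are placeholders, not the study's multimodal features:

```python
import numpy as np

# ELM: random projection to a hidden layer, then least-squares output weights.
class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # fixed random hidden features
        self.beta = np.linalg.pinv(H) @ y       # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

X, y = np.random.rand(200, 30), np.random.randint(0, 2, 200).astype(float)
scores = ELM(n_hidden=64).fit(X, y).predict(X)  # toy data only
```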


Mild cognitive impairment (MCI) is the prodromal stage of Alzheimer's disease (AD). Identifying MCI subjects who are at high risk of converting to AD is crucial for effective treatment. In this study, a deep learning approach based on convolutional neural networks (CNNs) is designed to accurately predict MCI-to-AD conversion from magnetic resonance imaging (MRI) data.
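A compact sketch of a 3D CNN classifier for volumetric MRI, illustrating the general kind of architecture used for conversion prediction; the `Tiny3DCNN` layers and input size are assumptions, not the network described in the study:

```python
import torch
import torch.nn as nn

# Small 3D convolutional classifier: conv/pool feature extraction followed by
# global average pooling and a linear head (converter vs. non-converter).
class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1))
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                                  # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

logits = Tiny3DCNN()(torch.randn(2, 1, 32, 32, 32))        # toy volumes only
```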


The combination of external-beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) is a standard form of treatment for patients with locally advanced uterine cervical cancer. Personalized radiotherapy in cervical cancer requires efficient and accurate dose planning and assessment across these two types of treatment. To achieve such dose assessment, accurate mapping of the dose distribution from HDR-BT onto EBRT is extremely important.


Objective: Identifying mild cognitive impairment (MCI) subjects who will progress to Alzheimer's disease (AD) is not only crucial in clinical practice, but also has a significant potential to enrich clinical trials. The purpose of this study is to develop an effective biomarker for an accurate prediction of MCI-to-AD conversion from magnetic resonance images.

Methods: We propose a novel grading biomarker for the prediction of MCI-to-AD conversion.


An automated segmentation method is presented for multi-organ segmentation in abdominal CT images. Dictionary learning and sparse coding techniques are used in the proposed method to generate target-specific priors for segmentation. From a set of selected atlases, the method simultaneously learns dictionaries with reconstructive power and classifiers with discriminative ability.
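A brief sketch of the dictionary-learning and sparse-coding step using scikit-learn; the patch size, number of atoms, and sparsity level are placeholders, not the settings used in the paper:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

# Patches drawn from atlases would be used to learn a dictionary; new patches
# are then represented as sparse codes over its atoms. Toy data only.
patches = np.random.rand(1000, 5 * 5 * 5)          # hypothetical flattened CT patches

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
dictionary = dico.fit(patches).components_          # learned atoms, shape (64, 125)

new_patches = np.random.rand(10, 5 * 5 * 5)
codes = sparse_encode(new_patches, dictionary, algorithm="omp", n_nonzero_coefs=5)
reconstruction = codes @ dictionary                 # sparse approximation of the patches
```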


Machine learning techniques have been widely used to detect morphological abnormalities from structural brain magnetic resonance imaging data and to support the diagnosis of neurological diseases such as dementia. In this paper, we propose to use a multiple instance learning (MIL) method in an application for the detection of Alzheimer's disease (AD) and its prodromal stage mild cognitive impairment (MCI). In our work, local intensity patches are extracted as features.
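A minimal sketch of the standard multiple instance learning assumption, in which a bag of patches is scored by pooling instance-level scores; the `toy_scorer` stands in for a learned patch classifier and is purely illustrative:

```python
import numpy as np

# Standard MIL assumption: a bag (here, an image represented by its patches)
# is positive if at least one instance looks abnormal, so instance scores are
# max-pooled into a single bag score.
def score_bag(patch_features, instance_scorer):
    instance_scores = np.array([instance_scorer(p) for p in patch_features])
    return instance_scores.max()

rng = np.random.default_rng(0)
bag = rng.normal(size=(50, 128))                          # hypothetical patch features
toy_scorer = lambda p: 1.0 / (1.0 + np.exp(-p.mean()))    # stand-in patch classifier
bag_score = score_bag(bag, toy_scorer)                    # thresholded for the bag label
```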


Machine learning techniques have been widely used to support the diagnosis of neurological diseases such as dementia. Recent approaches utilize local intensity patterns within patches to derive voxelwise grading measures of disease. However, the relationships among these patches are usually ignored.


Prostate MRI segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g.


A fundamental challenge in the development of image-guided surgical systems is alignment of the preoperative model to the operative view of the patient. This is achieved by finding corresponding structures in the preoperative scans and on the live surgical scene. In robot-assisted laparoscopic prostatectomy (RALP), the most readily visible structure is the bone of the pelvic rim.
