Emotion Recognition of Online Education Learners by Convolutional Neural Networks.

Comput Intell Neurosci

School of Big Data, Fuzhou University of International Studies and Trade, Fuzhou 350202, Fujian, China.

Published: June 2022

Facial expression recognition models used in video communication currently suffer from weak generalization and complex network structures that require a large amount of computation. Firstly, the Inception architecture is adopted as a design philosophy and the Visual Geometry Group Network (VGGNet) model is improved: multiscale kernel convolutional layers are constructed to obtain more expressive features. Secondly, an attention mechanism is integrated into the multiscale feature fusion network to form a multi-attention Convolutional Neural Network (CNN) model, with newly designed spatial and multichannel attention modules that reduce the influence of redundant information and noise. Finally, experiments are carried out on the Fer2013 dataset and the Extended Cohn-Kanade Dataset (CK+) to verify the detection accuracy of the model. The results show that the DDU loss can be used for facial expression recognition in complex environments. Adding the attention modules improves the overall recognition accuracy of the network on both Fer2013 and CK+, with the channel attention module contributing more than the spatial attention module, and it makes the network pay greater attention to error-prone samples. The improved network model better extracts the key features of facial expressions, enhances feature discrimination, and improves the recognition accuracy of error-prone expressions; for facial expressions with larger movements, recognition accuracy exceeds 98%. Facial expressions are an important channel of human communication that online video greatly limits, and the proposed multiscale-feature-fusion CNN model addresses these limitations, benefiting future network-based information exchange.
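
As a rough illustration of the architecture described above, the following PyTorch sketch combines an Inception-style multiscale convolution block with channel and spatial attention modules. The layer widths, kernel sizes, module names, and attention formulation are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch (PyTorch): multiscale convolution + channel/spatial attention.
# Layer widths, kernel sizes, and the attention formulation are illustrative
# assumptions, not the architecture published in the paper.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Inception-style block: parallel 1x1 / 3x3 / 5x5 convolutions, concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel weighting."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class SpatialAttention(nn.Module):
    """Single-channel spatial mask computed from pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask

class MultiAttentionCNN(nn.Module):
    """Toy 7-class expression classifier combining the pieces above."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            MultiScaleBlock(1, 64), ChannelAttention(64), SpatialAttention(),
            nn.MaxPool2d(2),
            MultiScaleBlock(64, 128), ChannelAttention(128), SpatialAttention(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 48x48 grayscale images, as in Fer2013.
logits = MultiAttentionCNN()(torch.randn(4, 1, 48, 48))  # -> shape (4, 7)
```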

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9203173
DOI: http://dx.doi.org/10.1155/2022/4316812

Publication Analysis

Top Keywords

attention module (16), facial expression (12), expression recognition (12), recognition accuracy (12), network (9), convolutional neural (8), multiscale feature (8), feature fusion (8), cnn model (8), facial expressions (8)

Similar Publications

Object detection in motion management scenarios based on deep learning.

PLoS One

January 2025

School of Physical Education, Jinjiang College, Sichuan University, Chengdu, Sichuan Province, People's Republic of China.

In competitions and daily training, improving an athlete's performance usually requires analyzing the athlete's actions at specific moments, which in turn makes it especially important to quickly and accurately identify the categories and positions of athletes, sports equipment, field boundaries, and other targets in the sports scene. However, existing detection methods fail to achieve good results here; analysis shows that the main causes are the loss of temporal information, multiple targets, target overlap, and the coupling of the regression and classification tasks, which make it difficult for these network models to adapt to detection in this scenario. Based on this, we propose, for the first time, a supervised object detection method for motion management scenarios.

As combination therapy becomes more common in clinical applications, predicting adverse effects of combination medications is a challenging task. However, there are three limitations of the existing prediction models. First, they rely on a single view of the drug and cannot fully utilize multiview information, resulting in limited performance when capturing complex structures.

Introduction: Pests are important factors affecting the growth of cotton, and accurately detecting cotton pests under complex natural conditions such as low-light environments is a challenge. This paper proposes DCP-YOLOv7x, a cotton pest detection method for low-light environments based on YOLOv7x, to address the degraded image quality, difficult feature extraction, and low detection precision encountered when detecting cotton pests in low light.

Methods: The DCP-YOLOv7x method first enhances low-quality cotton pest images using FFDNet (Fast and Flexible Denoising Convolutional Neural Network) and the EnlightenGAN low-light image enhancement network.
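
As a rough illustration of the enhance-then-detect pipeline described here, the sketch below chains a denoiser and a low-light enhancer ahead of a detector. The denoiser, enhancer, and detector arguments are hypothetical placeholders for pretrained FFDNet, EnlightenGAN, and YOLOv7x models; their interfaces are assumed for illustration and are not the DCP-YOLOv7x implementation.

```python
# Hypothetical sketch of a denoise -> enhance -> detect pipeline.
# The denoiser, enhancer, and detector are placeholder modules standing in for
# pretrained FFDNet, EnlightenGAN, and YOLOv7x models; their interfaces here
# are assumptions for illustration only.
import torch
import torch.nn as nn

class LowLightDetectionPipeline(nn.Module):
    def __init__(self, denoiser: nn.Module, enhancer: nn.Module, detector: nn.Module):
        super().__init__()
        self.denoiser = denoiser    # e.g. an FFDNet-style denoising CNN
        self.enhancer = enhancer    # e.g. an EnlightenGAN-style generator
        self.detector = detector    # e.g. a YOLOv7x-style detector

    @torch.no_grad()
    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.denoiser(images)          # suppress sensor noise in low-light frames
        x = self.enhancer(x).clamp(0, 1)   # brighten and restore contrast
        return self.detector(x)            # predict pest boxes/classes on the enhanced image

# Usage with identity stand-ins, just to show the data flow:
pipeline = LowLightDetectionPipeline(nn.Identity(), nn.Identity(), nn.Identity())
out = pipeline(torch.rand(1, 3, 640, 640))  # one normalized RGB image
```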

Forest pest monitoring and early warning using UAV remote sensing and computer vision techniques.

Sci Rep

January 2025

College of Computer and Control Engineering, Northeast Forestry University, Haerbin, 150040, Heilongjiang, China.

Unmanned aerial vehicle (UAV) remote sensing has revolutionized forest pest monitoring and early warning systems. However, the susceptibility of UAV-based object detection models to adversarial attacks raises concerns about their reliability and robustness in real-world deployments. To address this challenge, we propose SC-RTDETR, a novel framework for secure and robust object detection in forest pest monitoring using UAV imagery.

Weakly supervised deep learning-based classification for histopathology of gliomas: a single center experience.

Sci Rep

January 2025

Department of Neurosurgery, West China Hospital, Sichuan University, 37 Guoxue Avenue, Chengdu, 610041, People's Republic of China.

Multiple artificial intelligence systems have been created to facilitate accurate and prompt histopathological diagnosis of tumors using hematoxylin-eosin-stained slides. We aimed to investigate whether weakly supervised deep learning can aid in glioma diagnosis. We analyzed 472 whole slide images (WSIs) from 226 patients in West China Hospital (WCH) and 1604 WSIs from 880 patients in The Cancer Genome Atlas (TCGA).
