RoFace: A robust face representation approach for accurate classification.

Heliyon

Department of Electrical and Telecommunication Engineering, National Advanced School of Engineering, Yaounde, Cameroon.

Published: February 2023

The recent growth of technological applications has made it necessary to replicate human visual abilities artificially, and the issues that demand attention when designing solutions grow in proportion to the number of applications. Facial classification for access control and video surveillance is one of these still-open applications, for which suitable models have been proposed to meet users' needs. Although successive efforts have produced powerful facial recognition models, limiting factors that degrade the quality of the results must still be considered, including low-resolution images, partial occlusion of faces, and vulnerability to adversarial attacks. This paper presents RoFace, a formal representation of the face, as a solution to these issues: it examines the aspect of the input image, whether the face is occluded, the pattern derived from the image, and the ability to withstand adversarial attacks. Experiments were conducted to assess the impact of these components on classification/recognition accuracy.
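The abstract does not spell out RoFace's internals, but the checks it lists (input image quality, occlusion presence, the pattern extracted from the image) can be pictured as a screening stage that runs before classification. The sketch below is a minimal illustration of that idea, assuming grayscale, pre-aligned face crops; the thresholds and the `occluded`, `extract_pattern`, and `classify` helpers are hypothetical placeholders, not the method proposed in the paper.

```python
import numpy as np

# Assumed thresholds; the paper does not publish these values.
MIN_SIDE = 64               # minimum acceptable face resolution (pixels)
OCCLUSION_FRACTION = 0.35   # fraction of near-uniform cells that flags occlusion

def resolution_ok(face: np.ndarray) -> bool:
    """Reject inputs that are too small to represent reliably."""
    h, w = face.shape[:2]
    return min(h, w) >= MIN_SIDE

def occluded(face: np.ndarray, grid: int = 4) -> bool:
    """Crude occlusion heuristic: count grid cells with almost no intensity
    variation (e.g. covered by a mask or a hand). Purely illustrative."""
    h, w = face.shape[:2]
    flat = 0
    for i in range(grid):
        for j in range(grid):
            cell = face[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            if cell.std() < 5.0:
                flat += 1
    return flat / grid ** 2 > OCCLUSION_FRACTION

def extract_pattern(face: np.ndarray) -> np.ndarray:
    """Stand-in for the representation step (the pattern derived from the image)."""
    return face.astype(np.float32).ravel() / 255.0

def classify(face: np.ndarray, prototypes: dict) -> str:
    """Screen the input, then assign the identity of the nearest stored pattern.
    Assumes all faces are already aligned and resized to a common shape."""
    if not resolution_ok(face):
        return "rejected: resolution too low"
    if occluded(face):
        return "rejected: occlusion detected"
    vec = extract_pattern(face)
    return min(prototypes, key=lambda name: np.linalg.norm(prototypes[name] - vec))
```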

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9900511
DOI: http://dx.doi.org/10.1016/j.heliyon.2023.e13053

Publication Analysis

Top Keywords
adversarial attacks (8); roface robust (4); robust face (4); face representation (4); representation approach (4); approach accurate (4); accurate classification (4); classification development (4); development technological (4); technological applications (4)

Similar Publications

Microgrid systems have evolved around renewable energies such as wind, solar, and hydrogen to make supplying loads located far from the main grid more flexible and controllable in both islanded and grid-connected modes. Although microgrids can achieve beneficial results in cost and energy scheduling when operating in grid-connected mode, such systems are vulnerable to malicious attacks from a cybersecurity standpoint. With this in mind, this paper explores a novel advanced attack model, the false transferred data injection (FTDI) attack, which aims to manipulatively alter the power flowing from the microgrid to the upstream grid in order to raise the voltage usability probability.
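The abstract does not give the FTDI formulation, but the broader family it belongs to, false data injection, is straightforward to illustrate: the attacker biases the measurements that the upstream grid trusts, so the reported power exchange no longer matches the true one. The snippet below is a generic sketch of that idea together with a naive residual check; the hourly profile, the injected bias, and the 3 kW threshold are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# True power (kW) flowing from the microgrid to the upstream grid over 24 hours.
true_export = 50 + 10 * np.sin(np.linspace(0, 2 * np.pi, 24))

# Honest measurements: true values plus small sensor noise.
measured = true_export + rng.normal(0, 0.5, size=true_export.shape)

# False data injection: the attacker adds a crafted bias to the transferred
# (reported) values during hours 10-15, inflating the apparent export.
attack_bias = np.zeros_like(measured)
attack_bias[10:16] = 15.0
reported = measured + attack_bias

# Naive bad-data check: flag hours where the reported value deviates from a
# forecast (here the true profile stands in for the forecast) by more than 3 kW.
residual = np.abs(reported - true_export)
flagged_hours = np.where(residual > 3.0)[0]
print("hours flagged as suspicious:", flagged_hours)
```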

Background: Large language model (LLM) artificial intelligence chatbots using generative language can offer smoking cessation information and advice. However, little is known about the reliability of the information provided to users.

Objective: This study aims to examine whether three ChatGPT chatbots (the World Health Organization's Sarah, BeFreeGPT, and BasicGPT) provide reliable information on how to quit smoking.

Ethical and security challenges in AI for forensic genetics: From bias to adversarial attacks.

Forensic Sci Int Genet

January 2025

Computer Science Department, University of Buenos Aires, Faculty of Exact and Natural Sciences, Buenos Aires, Argentina.

Forensic scientists play a crucial role in assigning probabilities to evidence based on competing hypotheses, which is fundamental in legal contexts where propositions are usually presented by the prosecution and the defense. The likelihood ratio (LR) is a well-established metric for quantifying the statistical weight of the evidence, facilitating the comparison of probabilities under these hypotheses. Developing accurate LR models is inherently complex, as it relies on cumulative scientific knowledge.
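For readers unfamiliar with the metric, the LR is the probability of the evidence under the prosecution hypothesis divided by its probability under the defense hypothesis, LR = P(E | Hp) / P(E | Hd). The toy computation below uses made-up numbers purely to show the arithmetic; real forensic LR models are considerably more involved.

```python
# Toy likelihood-ratio computation: LR = P(E | Hp) / P(E | Hd).
# Illustrative numbers only; real casework models account for population
# substructure, allele dropout, mixtures, and other complications.

p_evidence_given_prosecution = 1.0   # profile expected if the suspect is the source
p_evidence_given_defense = 1e-6      # random match probability for an unrelated person

likelihood_ratio = p_evidence_given_prosecution / p_evidence_given_defense
print(f"LR = {likelihood_ratio:.0f}")  # evidence is 1,000,000 times more probable under Hp
```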

Object detection in images is a fundamental component of many safety-critical systems, such as autonomous driving, video surveillance, and robotics. Adversarial patch attacks, which are easy to implement in the real world, effectively counteract object detection by state-of-the-art neural detectors, and this poses a serious danger in various fields of activity.
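The abstract does not describe a specific attack, but the general recipe behind digital adversarial patch attacks is well known: treat a small image region as trainable pixels and optimize them by gradient descent so the detector's score degrades. The PyTorch sketch below illustrates that recipe against a tiny randomly initialized stand-in network; the patch size, placement, learning rate, and the stand-in model are all assumptions, and real attacks target pretrained detectors and add physical-world constraints.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in "detector": a tiny randomly initialised network producing one
# objectness score per image; it only keeps the sketch self-contained.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)
model.eval()

image = torch.rand(1, 3, 64, 64)                       # placeholder input image
patch = torch.rand(1, 3, 16, 16, requires_grad=True)   # trainable adversarial patch
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(img, p, y=8, x=8):
    """Overlay the patch on a fixed image region via padding (fully differentiable)."""
    _, _, H, W = img.shape
    _, _, ph, pw = p.shape
    padded = F.pad(p.clamp(0, 1), (x, W - x - pw, y, H - y - ph))
    mask = F.pad(torch.ones_like(p), (x, W - x - pw, y, H - y - ph))
    return img * (1 - mask) + padded

# Optimise the patch so the "objectness" score is driven down,
# i.e. the stand-in detector is encouraged to miss the object.
for step in range(100):
    optimizer.zero_grad()
    loss = model(apply_patch(image, patch)).mean()
    loss.backward()
    optimizer.step()

print("final objectness score:", model(apply_patch(image, patch)).item())
```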

The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios.
