The rapid development of technological applications has made it necessary to replicate human visual abilities artificially, and the issues requiring particular attention grow in proportion to the number of applications. Face classification for access control and video surveillance is among the open-ended applications for which suitable models have been proposed to meet users' needs. Although successive efforts have produced powerful facial recognition models, limiting factors affecting the quality of the results must always be considered. These include low-resolution images, partial occlusion of faces, and defense against adversarial attacks. The appearance of the input image, the presence of facial occlusion in the image, the features derived from the image, and the ability to fend off adversarial attacks are all examined by RoFace, a formal representation of the face presented in this paper as a solution to these issues. Experiments have been conducted to assess the impact of these components on classification/recognition accuracy.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9900511
DOI: http://dx.doi.org/10.1016/j.heliyon.2023.e13053
Sci Rep
January 2025
Khuzestan Water & Power Authority (KWPA), Ahvaz, Iran.
Microgrid systems based on renewable energies, including wind, solar, and hydrogen, have evolved to make supplying loads far from the main grid more flexible and controllable through both islanded and grid-connected modes. Although microgrids can achieve beneficial cost and energy schedules when operating in grid-connected mode, such systems are vulnerable to malicious attacks from a cybersecurity standpoint. With this in mind, this paper explores a novel advanced attack model, the false transferred data injection (FTDI) attack, which aims to manipulatively alter the power flowing from the microgrid to the upstream grid in order to raise the voltage usability probability.
J Med Internet Res
January 2025
Department of Engineering Management and Systems Engineering, George Washington University, Washington, DC, United States.
Background: Large language model (LLM) artificial intelligence chatbots using generative language can offer smoking cessation information and advice. However, little is known about the reliability of the information provided to users.
Objective: This study aims to examine whether 3 ChatGPT chatbots-the World Health Organization's Sarah, BeFreeGPT, and BasicGPT-provide reliable information on how to quit smoking.
Forensic Sci Int Genet
January 2025
Computer Science Department, University of Buenos Aires, Faculty of Exact and Natural Sciences, Buenos Aires, Argentina.
Forensic scientists play a crucial role in assigning probabilities to evidence under competing hypotheses, which is fundamental in legal contexts where propositions are usually presented by the prosecution and defense. The likelihood ratio (LR) is a well-established metric for quantifying the statistical weight of evidence, facilitating the comparison of probabilities under these hypotheses. Developing accurate LR models is inherently complex, as it relies on cumulative scientific knowledge.
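The likelihood ratio mentioned above is defined as LR = P(E | Hp) / P(E | Hd), the probability of the evidence E under the prosecution hypothesis divided by its probability under the defense hypothesis. A minimal sketch of this computation is shown below; the Gaussian densities and the numbers are illustrative assumptions, not the models developed in the cited paper.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(evidence, p_given_hp, p_given_hd):
    """LR = P(E | Hp) / P(E | Hd); values above 1 support Hp, below 1 support Hd."""
    return p_given_hp(evidence) / p_given_hd(evidence)

# Hypothetical example: a continuous measurement modelled as Gaussian
# under each hypothesis (parameters chosen purely for illustration).
e = 0.8
lr = likelihood_ratio(
    e,
    lambda x: gaussian_pdf(x, mu=1.0, sigma=0.2),  # P(E | Hp): evidence expected near 1
    lambda x: gaussian_pdf(x, mu=0.0, sigma=0.5),  # P(E | Hd): evidence expected near 0
)
print(f"LR = {lr:.2f}")
```

Here the observation lies closer to the prosecution model, so the LR comes out well above 1; a real forensic model would replace the toy densities with validated, data-driven ones.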
J Imaging
January 2025
Science and Research Department, Moscow Technical University of Communications and Informatics, 111024 Moscow, Russia.
Object detection in images is a fundamental component of many safety-critical systems, such as autonomous driving, video surveillance, and robotics. Adversarial patch attacks, which are easy to implement in the real world, effectively counteract object detection by state-of-the-art neural-based detectors, posing a serious danger in various fields of activity.
J Imaging
January 2025
Department of Precision Instrument, Tsinghua University, Beijing 100084, China.
The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios.