Three pilot studies (N = 832) revealed that people held more positive attitudes toward targets wearing protective face masks. Therefore, we examined whether knowledge of this self-presentational benefit would increase people's intentions to wear face masks. Participants (N = 997) were randomly assigned to read a passage about the COVID-19 pandemic, the safety benefit of mask-wearing, the self-presentational benefit of mask-wearing, or a combination of the latter two. Although this manipulation failed, findings revealed that preexisting beliefs about masked targets being more likable were positively associated with mask-wearing intentions, particularly among participants less concerned with disease or more politically conservative.


Source: http://dx.doi.org/10.1080/00224545.2023.2216880


Similar Publications

Sterilization and Filter Performance of Nano- and Microfibrous Facemask Filters - Electrospinning and Restoration of Charges for Competitive Sustainable Alternatives.

Macromol Rapid Commun

December 2024

Empa, Swiss Federal Laboratories for Materials Science and Technology, Laboratory for Biomimetic Membranes and Textiles, St. Gallen, 9014, Switzerland.

Facemask materials have been under constant development to optimize filtration performance, wear comfort, and general resilience to chemical and mechanical stress. While single-use polypropylene meltblown membranes are the established go-to material for high-performing mask filters, they are neither sustainable nor particularly resistant to sterilization methods. Herein an in-depth analysis is provided of the sterilization efficiency, filtration efficiency, and breathing resistance of selected aerosol filters commonly implemented in facemasks, with a particular focus on the benefits of nanofibrous filters.


Evaluating Medical Image Segmentation Models Using Augmentation.

Tomography

December 2024

Clinic for Radiology and Nuclear Medicine, University Hospital, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany.

Background: Medical image segmentation is an essential step in both clinical and research applications, and automated segmentation models-such as TotalSegmentator-have become ubiquitous. However, robust methods for validating the accuracy of these models remain limited, and manual inspection is often necessary before the segmentation masks produced by these models can be used.

Methods: To address this gap, we have developed a novel validation framework for segmentation models, leveraging data augmentation to assess model consistency.


Towards Robust Supervised Pectoral Muscle Segmentation in Mammography Images.

J Imaging

December 2024

Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, Main Campus, Reno, NV 89557, USA.

Mammography images are the most commonly used tool for breast cancer screening. The presence of pectoral muscle in images for the mediolateral oblique view makes designing a robust automated breast cancer detection system more challenging. Most of the current methods for removing the pectoral muscle are based on traditional machine learning approaches.


This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities.


IngredSAM: Open-World Food Ingredient Segmentation via a Single Image Prompt.

J Imaging

November 2024

Architecture and Design College, Nanchang University, No. 999, Xuefu Avenue, Honggutan New District, Nanchang 330031, China.

Food semantic segmentation is of great significance in the field of computer vision and artificial intelligence, especially in the application of food image analysis. Due to the complexity and variety of food, it is difficult to effectively handle this task using supervised methods. Thus, we introduce IngredSAM, a novel approach for open-world food ingredient semantic segmentation, extending the capabilities of the Segment Anything Model (SAM).

