Because of the increasing ease of video capture, many millions of consumers create and upload large volumes of User-Generated Content (UGC) videos to social and streaming media sites over the Internet. UGC videos are commonly captured by naive users with limited skills and imperfect technique, and tend to be afflicted by mixtures of highly diverse in-capture distortions. These UGC videos are then often uploaded for sharing onto cloud servers, where they are further compressed for storage and transmission. Our paper tackles the highly practical problem of predicting the quality of compressed videos (perhaps during the process of compression, to help guide it), with only (possibly severely) distorted UGC videos as references. To address this problem, we have developed a novel Video Quality Assessment (VQA) framework that we call 1stepVQA (to distinguish it from the two-step methods that we discuss). 1stepVQA overcomes limitations of Full-Reference, Reduced-Reference and No-Reference VQA models by exploiting the statistical regularities of both natural and distorted videos. We also describe a new dedicated video database, which was created by applying a realistic VMAF-guided perceptual rate-distortion optimization (RDO) criterion to create realistically compressed versions of UGC source videos, which typically have pre-existing distortions. We show that 1stepVQA more accurately predicts the quality of compressed videos given imperfect reference videos, and outperforms other VQA models in this scenario.
DOI: http://dx.doi.org/10.1109/TIP.2021.3107213
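The abstract does not spell out which statistical regularities 1stepVQA models, but a standard way that natural scene statistics (NSS) based quality models expose such regularities is mean-subtracted contrast normalization (MSCN), as used in NIQE and BRISQUE. The sketch below illustrates that general technique under that assumption; the function name and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(frame, sigma=7/6, eps=1e-3):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a frame.

    For pristine natural content these coefficients are close to a
    unit-variance Gaussian; distortions (blur, blocking, noise) measurably
    alter their distribution, which is the kind of statistical regularity
    that NSS-based VQA models exploit.
    """
    frame = frame.astype(np.float64)
    mu = gaussian_filter(frame, sigma)                     # local mean
    var = gaussian_filter(frame * frame, sigma) - mu * mu  # local variance
    sigma_map = np.sqrt(np.abs(var))                       # local std (abs guards rounding)
    return (frame - mu) / (sigma_map + eps)

if __name__ == "__main__":
    # Synthetic stand-in frame; simple moments of the MSCN map can serve as
    # quality-aware features for a learned quality predictor.
    rng = np.random.default_rng(0)
    frame = rng.uniform(0, 255, size=(480, 640))
    mscn = mscn_coefficients(frame)
    kurtosis = ((mscn - mscn.mean()) ** 4).mean() / mscn.var() ** 2
    print(mscn.var(), kurtosis)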
Sci Data
October 2023
Biomedical Engineering Section, University of Reading, RG6 6DH, Reading, UK.
Embedding sensors into clothing is a promising way for people to wear multiple sensors easily, for applications such as long-term activity monitoring. To our knowledge, this is the first published dataset collected from sensors in loose clothing. Six Inertial Measurement Units (IMUs) were configured as a 'sensor string' and attached to casual trousers such that there were three sensors on each leg, near the waist, thigh, and ankle/lower shank.
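The placement scheme described above is a configuration that any code working with the dataset must encode; here is one minimal way to represent it in Python. The per-IMU channel count is an assumption (a typical IMU reports a 3-axis accelerometer plus a 3-axis gyroscope), not a detail from the dataset description.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ImuPlacement:
    """One node of the 'sensor string' attached to the trousers."""
    side: str          # "left" or "right" leg
    site: str          # "waist", "thigh", or "ankle" (lower shank)
    channels: int = 6  # assumption: 3-axis accelerometer + 3-axis gyroscope

# Six placements in total: three per leg, near the waist, thigh, and ankle.
SENSOR_STRING = [ImuPlacement(side, site)
                 for side, site in product(("left", "right"),
                                           ("waist", "thigh", "ankle"))]

if __name__ == "__main__":
    for imu in SENSOR_STRING:
        print(imu)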
Sci Rep
September 2023
College of Basic Medicine, Fourth Military Medical University, Xi'an, 710032, China.
Patients' narratives are being recorded increasingly often and spontaneously in short user-generated content (UGC) videos, which may affect the vlogger's health as well as the public's comprehension of the relevant health concerns. This paper addressed three research questions regarding the population characteristics of UGC video publishers, the narrative themes of the videos, and the emotional orientation of the commenters. This study aimed to deepen our understanding of COVID-19 patients' narrative intentions and emotional needs through the theoretical frameworks of the theory of planned behavior (TPB) and negative dominance theory (NDT).
IEEE Trans Image Process
July 2023
In recent years, User-Generated Content (UGC) has grown dramatically in video sharing applications. Service providers therefore need video quality assessment (VQA) to monitor and control users' Quality of Experience when watching UGC videos. However, most existing UGC VQA studies focus only on the visual distortions of videos, ignoring that perceptual quality also depends on the accompanying audio signals.
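The cited study's model is not described in this snippet, but a common baseline that captures the point about audio mattering is feature-level fusion: concatenate quality-aware visual and audio features and regress subjective scores from the joint vector. A minimal sketch with synthetic stand-in data follows; the feature dimensions, score scale, and regressor choice are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Stand-in data: 200 videos, 36 visual features, 20 audio features,
# and mean opinion scores (MOS) on an assumed 1-5 scale.
visual = rng.normal(size=(200, 36))
audio = rng.normal(size=(200, 20))
mos = rng.uniform(1.0, 5.0, size=200)

# Early (feature-level) fusion: concatenate modalities, then map features
# to MOS with an SVR, a common choice for quality-score regression.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(np.hstack([visual, audio]), mos)
print(model.predict(np.hstack([visual[:3], audio[:3]])))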
Sensors (Basel)
May 2023
Department of Human-Centered Artificial Intelligence, Sangmyung University, Seoul 03016, Republic of Korea.
In the era of user-generated content (UGC) and virtual interactions within the metaverse, empathic digital content has become increasingly important. This study aimed to quantify human empathy levels during exposure to digital media. To assess empathy, we analyzed brain-wave activity and eye movements in response to emotional videos.
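As context for the brain-wave analysis mentioned above, EEG studies of emotional response commonly summarize activity as spectral band power (for example the alpha band, roughly 8-13 Hz). A minimal sketch using Welch's method follows; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

def band_power(eeg, fs, low, high):
    """Average power of `eeg` in the [low, high] Hz band via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-second segments
    mask = (freqs >= low) & (freqs <= high)
    return trapezoid(psd[mask], freqs[mask])        # integrate PSD over band

if __name__ == "__main__":
    fs = 256                       # Hz; an assumed, typical EEG sampling rate
    t = np.arange(0, 30, 1 / fs)   # 30 s of synthetic single-channel signal
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
    print(band_power(eeg, fs, 8, 13))  # the 10 Hz tone dominates the alpha band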