Online social networks (OSNs) have rapidly become a prominent and widely used service, holding a wealth of personal and sensitive information with significant security and privacy implications. Hence, OSNs are also an important--and popular--subject for research. To perform research based on real-life evidence, however, researchers may need to access OSN data, such as texts and files uploaded by users and connections among users. This raises significant ethical problems. Currently, there are no clear ethical guidelines, and researchers may end up (unintentionally) performing ethically questionable research, sometimes even when more ethical research alternatives exist. For example, several studies have employed "fake identities" to collect data from OSNs, but fake identities may be used for attacks and are considered a security issue. Is it legitimate to use fake identities for studying OSNs or for collecting OSN data for research? We present a taxonomy of the ethical challenges facing researchers of OSNs and compare different approaches. We demonstrate how ethical considerations have been taken into account in previous studies that used fake identities. In addition, several possible approaches are offered to reduce or avoid ethical misconduct. We hope this work will stimulate the development and use of ethical practices and methods in research on online social networks.
DOI: http://dx.doi.org/10.1007/s11948-013-9473-0
Sensors (Basel), December 2024
Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea.
In this paper, we propose a Proof-of-Location (PoL)-based location verification scheme for mitigating Sybil attacks in vehicular ad hoc networks (VANETs). For this purpose, we employ a smart contract to store the location information of the vehicles. This smart contract is maintained by Road Side Units (RSUs) and acts as ground truth for verifying the position information of neighboring vehicles.
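To make the verification step concrete, here is a minimal Python sketch of the idea under stated assumptions: a plain in-memory registry stands in for the RSU-maintained smart contract, and the names (`LocationRegistry`, `record_claim`, `verify_neighbor`), the HMAC tagging, and the 50 m tolerance are illustrative choices, not details taken from the paper.

```python
# Hedged sketch of Proof-of-Location (PoL) verification against an
# RSU-maintained registry. In the actual scheme the registry is a
# blockchain smart contract; here a keyed in-memory store plays that
# role. All identifiers and thresholds are illustrative assumptions.
import hashlib
import hmac
import math
import time
from dataclasses import dataclass

@dataclass
class LocationClaim:
    vehicle_id: str
    x: float           # metres in a local RSU frame (assumed)
    y: float
    timestamp: float
    tag: bytes         # RSU-issued MAC over the claim

class LocationRegistry:
    """Stand-in for the smart contract that acts as ground truth."""

    def __init__(self, rsu_key: bytes):
        self._key = rsu_key
        self._claims: dict[str, LocationClaim] = {}

    def _mac(self, vehicle_id: str, x: float, y: float, ts: float) -> bytes:
        msg = f"{vehicle_id}|{x:.1f}|{y:.1f}|{ts:.3f}".encode()
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def record_claim(self, vehicle_id: str, x: float, y: float) -> None:
        """RSU attests a vehicle's position and stores the claim."""
        ts = time.time()
        tag = self._mac(vehicle_id, x, y, ts)
        self._claims[vehicle_id] = LocationClaim(vehicle_id, x, y, ts, tag)

    def verify_neighbor(self, vehicle_id: str, x: float, y: float,
                        tolerance_m: float = 50.0,
                        max_age_s: float = 5.0) -> bool:
        """Check a neighbour's advertised position against the registry;
        stale, unattested, or distant claims are rejected."""
        claim = self._claims.get(vehicle_id)
        if claim is None or time.time() - claim.timestamp > max_age_s:
            return False
        expected = self._mac(claim.vehicle_id, claim.x, claim.y,
                             claim.timestamp)
        if not hmac.compare_digest(claim.tag, expected):
            return False
        return math.hypot(claim.x - x, claim.y - y) <= tolerance_m
```

The point of the check is Sybil starvation: a fabricated identity that never passed an RSU cannot present a fresh, attested claim near its advertised position, so `verify_neighbor` rejects it.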
BMC Psychol, November 2024
Centre for Psychological Innovation and Research, Faculty of Psychology Universitas Padjadjaran, West Java, 45363, Indonesia.
The proliferation of fake news on social media platforms has become a significant concern, influencing public opinion, political decisions, and societal trust. While much research has focused on the technological and algorithmic factors behind the spread of misinformation, less attention has been given to the psychological drivers that contribute to the creation and dissemination of fake news. Cognitive biases, emotional appeals, and social identity motivations are believed to play a crucial role in shaping user behaviour on social media, yet there is limited systematic understanding of how these psychological factors intersect with online information sharing.
PeerJ Comput Sci, June 2024
Department of Computer Engineering, Gazi University, Ankara, Türkiye.
Images and videos containing fake faces are the most common type of digital manipulation. Such content can lead to negative consequences by spreading false information. The use of machine learning algorithms to produce fake face images has made it challenging to distinguish between genuine and fake content.
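As an illustration of the detection side of this problem, the sketch below is a small binary real/fake face classifier in PyTorch. The architecture, the 64x64 input size, and the name `FakeFaceNet` are assumptions made for illustration, not the model used in the cited work.

```python
# Minimal sketch of a real-vs-fake face classifier (assumed design).
import torch
import torch.nn as nn

class FakeFaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)  # one logit: fake?

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = FakeFaceNet()
faces = torch.randn(8, 3, 64, 64)     # stand-in for preprocessed crops
p_fake = torch.sigmoid(model(faces))  # train with nn.BCEWithLogitsLoss
```

In practice such detectors are trained on paired corpora of genuine photographs and generator outputs, and the hard part is generalising to manipulation methods unseen during training.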
Commun Biol, June 2024
Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland.
Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we test the neurocognitive sensitivity of 25 participants in accepting or rejecting person identities recreated in audio deepfakes. We generate high-quality voice identity clones from natural speakers using advanced deepfake technologies.
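The automatic counterpart to this human judgement is speaker verification over voice embeddings, sketched below. The cosine decision rule and the 0.75 threshold are assumptions for illustration; the study itself probes human listeners, not such a verifier.

```python
# Hedged sketch: accept or reject a claimed voice identity by comparing
# speaker embeddings. How embeddings are produced is left abstract; a
# deepfake clone "succeeds" exactly when it crosses the threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def accept_identity(probe_emb: np.ndarray,
                    enrolled_emb: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Accept the claim if the probe is close to the enrolled speaker."""
    return cosine_similarity(probe_emb, enrolled_emb) >= threshold
```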
Sensors (Basel), April 2024
Institute of Safety and Security Research, University of Applied Sciences Bonn-Rhein-Sieg, Grantham-Allee 20, 53757 Sankt Augustin, Germany.
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for private, financial, and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which determines their robustness against fake or altered biometric features. Artifacts such as photos, artificial fingers, face masks, and fake iris contact lenses are a general security threat to all biometric modalities.
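The sketch below shows where a PAD gate typically sits relative to the identity comparison, in the spirit of the ISO/IEC 30107 PAD framework. The two-stage layout, score names, and thresholds are illustrative assumptions rather than the system evaluated in the article.

```python
# Hedged sketch of a PAD-gated biometric decision (assumed layout).
from dataclasses import dataclass

@dataclass
class BiometricSample:
    liveness_score: float  # PAD subsystem output, higher = more "live"
    match_score: float     # comparison subsystem output

def authenticate(sample: BiometricSample,
                 pad_threshold: float = 0.5,
                 match_threshold: float = 0.8) -> bool:
    """Reject presentation attacks (photos, artificial fingers, masks,
    fake iris lenses) before the identity comparison is trusted."""
    if sample.liveness_score < pad_threshold:
        return False  # classified as a presentation attack
    return sample.match_score >= match_threshold
```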