Publications by authors named "Luciano Floridi"

Machine unlearning (MU) is often analyzed in terms of how it can facilitate the "right to be forgotten." In this commentary, we show that MU can support the OECD's five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice.


Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West.


This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: (1) network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; (2) post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities; (3) social inclusion, expressed in the aspects of citizen participation and inclusion, and inequality and discrimination; and (4) sustainability, with a specific focus on the environment as an element to protect but also as a strategic element for the future. Given the persisting disagreements around the definition of a smart city, the article identifies in these four dimensions a more stable reference framework within which ethical concerns can be clustered and discussed.


Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA-such as the feasibility and effectiveness of different auditing procedures-have yet to be substantiated by empirical research.


Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States' (US) AI strategies and considers (i) the visions of a 'Good AI Society' advanced in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims, and (iii) the consequences that these differing visions of a 'Good AI Society' have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.


The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes.


As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks.


In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems.


Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges.


The fact that Internet companies may record our personal data and track our online behavior for commercial or political purposes has heightened concerns about online privacy. This has also led to the development of search engines that promise no tracking and privacy. Search engines also play a major role in spreading low-quality health information, such as that found on anti-vaccine websites.

  • The article reviews literature on the ethics of AI in healthcare to summarize ongoing debates and identify research gaps, focusing on ethical risks that policymakers and developers must address.
  • Ethical issues identified include epistemic concerns (related to evidence), normative concerns (unfair outcomes), and traceability concerns, categorized at levels ranging from the individual to the societal.
  • The authors stress the importance of addressing these ethical considerations promptly to maintain public trust in AI's benefits for healthcare, warning against potential negative consequences if action is delayed.

Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI? In answering these questions, we show that four ethical concerns-related to paternalism, autonomy, freedom of speech, and pluralism-are partly responsible for the lack of intervention.


The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies.

  • The article is the first thematic review focusing on the ethical issues surrounding digital well-being, which examines how digital technologies affect what constitutes a good life for individuals.
  • It reviews existing literature to map out current debates and highlight key social areas impacted by digital technologies, including healthcare, education, governance, and media.
  • The review emphasizes three central themes—positive computing, personalized human-computer interaction, and autonomy/self-determination—as essential for future research and discussions in the ethics of digital well-being.

The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741-742, 1960).
