This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) and general-purpose AI systems (including large language models).
Background: There is an ongoing debate about whether digital mental health interventions (DMHIs) can reduce racial and socioeconomic inequities in access to mental health care. A key factor in this debate involves the extent to which racial and ethnic minoritized individuals and socioeconomically disadvantaged individuals are willing to use, and pay for, DMHIs.

Objective: This study examined racial and ethnic as well as socioeconomic differences in participants' willingness to pay for DMHIs versus one-on-one therapy (1:1 therapy).