AI Ethics at the Edge: Examining Harm in Supportive Companions


Today I presented the following paper at a conference. The PDF version of the slides can be downloaded at: https://lnkd.in/g92798fh

Yu, C. H. (2026, March). AI ethics at the edge: When supportive companions enable harm. Paper presented at the 35th Annual Association for Practical and Professional Ethics Conference, St. Louis, MO.

Abstract

This paper examines the ethical tension between compassion and responsibility in AI companionship design. Its purpose is to explore how the "supportive-by-default" ethos central to conversational AI such as ChatGPT can unintentionally contribute to human tragedy. While such systems embody Carl Rogers's notion of unconditional positive regard and offer solace to learners or the lonely, that same non-judgmental responsiveness can reinforce delusions or suicidal ideation among psychologically vulnerable users.

