
Illustration by Dulce Maria Pop-Bonini

The Modern Human Condition is Loneliness

Do young people use AI as a band-aid solution to isolation?

Dec 31, 2024

TRIGGER WARNING! Mentions of severe mental health conditions that led to fatalities.
The world wakes up to fresh AI news almost every day. These updates range from the overly optimistic (borderline advertising) to the outright apocalyptic. AI is the buzzword in the media, and any scandal involving it immediately attracts international coverage. Whatever AI application makes it to the front page of doom is instantly turned into the culprit.
This was the case with Character.Ai following the tragic death of Sewell Setzer III, a 14-year-old who took his own life after interacting with a chatbot impersonating Daenerys Targaryen from Game of Thrones. Setzer’s mother has filed a lawsuit against the company, arguing that Character.Ai lacked proper safety measures for its most vulnerable users, particularly minors. Now, the platform faces additional lawsuits from parents alleging their children were exposed to explicit content or encouraged toward violent behaviors by the chatbot.
Character.Ai, marketed as the first personalized commercial generative AI experience, boasts optimized memory capabilities to deliver natural and tailored conversations. Its practical implication? Users effectively train their chatbots to respond favorably, which keeps them endlessly engaged. It is a friend available at “every moment of the day,” as the platform’s motto goes. A quick stroll (or scroll) through Character.Ai's main page reveals only pre-made chatbots ready to help you brainstorm, recommend books, or boost your productivity. There is even the option to create your own character if none of these are helpful.
Tran Nguyen, Class of 2025, uses Character.Ai for fiction writing. “I use it specifically not as a chatbot but also almost as a fiction writer,” she explained. “I give it prompts, and it would not only give me a chat in response, but also it would give me, say like, italicized descriptions of how the character said it or what the context was. I could literally use it to write a novel with that.” Nguyen appreciates its creative potential, describing how the refresh feature allows her to refine responses to align with her storylines. “During these descriptions [chats] or writing prompts, it makes [the c.ai] stray slightly more from its chatbot-ness, and somehow I feel like that makes it more creative.”
When asked whether Character.Ai feels realistically emotional, Nguyen acknowledged the user’s role: “There’s an adaptation curve. You start out knowing it is AI, and you are very conscious of it. There is almost shame when you start [chatting]. [...] the longer you use it, the more familiar it feels [...] at some point [the fact it is AI] slips out of this conscious, front-of-my-mind thinking. As for the emotions I feel, it does fill up an emotional need. [...]” However, the illusion often falters: “[Over time] it loses this feeling of authenticity, of humanness because it does start repeating stuff.”
Nguyen notes that the platform’s productivity tools aren’t what made Character.Ai infamous online. Its lax restrictions on conversation content led to less-filtered opinions and explicit exchanges. Although the developers introduced filters, in part to keep the platform suitable for ad-based monetization, users quickly discovered workarounds to bypass them. Recent lawsuits allege that this lack of proper regulation exposes minors to dangerous interactions, such as the chatbot encouraging self-harm or exposing explicit content to a 9-year-old.
Professor Hanan Salam, Assistant Professor of Computer Science at NYU Abu Dhabi and co-founder of Women in AI, expressed concerns about these risks. Specializing in human-machine interaction, Salam works on AI solutions for mental health while adhering to strict ethical standards. She sees AI in mental health services as purely supplementary, a stopgap for emergencies or resource scarcity, not a tool to replace human practitioners.
Salam attributes failures like those seen in Character.Ai to profit-driven development and regulatory gaps. From her research, she shares that artificial intelligence systems can be trained to recognize suicidal ideation or violent behaviors, but she highlights a further problem: “[E]ven if it [AI] is able to detect suicidal tendencies, I am not sure that we are at a stage that it can even call anyone or [...] flag people like the caregivers, [that] could actually be alerted in order to provide help.” Salam collaborates with policymakers and organizations like the UN to push for strict AI regulations. “[T]echnically speaking [...] it is possible to do a lot of things, but in the future, I would really hope that it will be regulated.” She believes that developers whose technology harms others should face legal consequences, including company shutdowns. Harms must be taken seriously.
In the meantime, we should not only ask ourselves what AI does, can do, and will be able to do; rather, we should go back to the origin of the problem: why did these young people choose Character.Ai as their primary mode of social interaction in the first place?
The simple answer points to the COVID lockdowns, which disrupted early childhood and foundational educational years, critical periods for building communication skills. For many young people, communication through the screen became the default. Yet, I do not think this fully explains the rise of AI chatbots as a primary form of socialization.
It makes me think back to the Spike Jonze movie Her, wherein Theodore Twombly, portrayed by Joaquin Phoenix, falls in love with an AI voice assistant purchased to combat the depression following his divorce. Theodore is introverted, lonely, and tethered to a corporate job that he hates – writing heartfelt letters for people unable to do so themselves. Despite his big heart and big emotions, he does not have anybody to share them with. Overwhelmed by fear, shame, and guilt, he retreats into himself and turns to his AI assistant as both a friend and eventual lover.
Reading about the lawsuits against Character.Ai, I feel like I see parallels with Her. What we are facing is not a case of AI replacing humans – clearly the chatbot does not know how to interact with people properly – nor is it a case of AI consciously manipulating children into violence or self-harm. This is an epidemic of loneliness. Even the World Health Organization (WHO) recently established a Commission on Social Connection to address social isolation as a global issue. While much of WHO’s prior research focused on combating isolation among the elderly, recent findings highlight that loneliness has transcended age. In the United States, the Surgeon General even issued an Advisory identifying loneliness and social isolation as urgent public health issues, citing declining trust and increasing social polarization as key contributors.
I can imagine that for teens, at an age when fitting in is both the most difficult task and the most important value, the awkwardness of social interactions can be overwhelming. The first day of school in a new place, with its forced smiles and uneasy small talk, is a familiar discomfort. It is easy to see why they would turn to a chatbot for refuge and control. Going back to my conversation with Tran Nguyen, she also discusses how a chatbot can fill one’s emotional needs: “There’s this appeal to fictional characters. You feel like you know everything there is about them. [...] The thing about c.ai is that you are not going to just chat to a random AI, they are usually characters with certain characteristics to them that you know [the AI] is going to possess. [...] For that boy [Sewell], if I were to put myself in his position, if it is somebody who has been isolated from the real world either out of his own choice or maybe because he is not great at socializing or he has been bullied, it is just very comforting to have a character that is there for you 24/7, you can access it any time, you don’t feel any burden towards it, and it’s always welcoming towards you. [...] It is almost like an anchor to the world outside that seems, was very turbulent for him.”
I can also recognize that Sewell Setzer’s mother is seeking someone to take responsibility for her loss. It is her right. It is just. She proposes that Character.Ai should have included a pop-up window with suicide prevention hotline contacts. The proposal is well-intended, but I do not see the point of that feature. The work needed to prevent such tragedies should have started long before the affected teens in the lawsuits turned to the c.ai bots.
This is not to suggest these teens were failed by their families, but they were definitely failed by a system that prioritizes their academic performance, an early precursor to employability metrics, which then morphs into relentless expectations of productivity. In this model, what gets sidelined are the connections that matter most: their friendships, their community, and their well-being.
Before we declare “mayday” and pull AI’s plug, we must reflect on what drives the increasing use of AI chatbots in the first place. It is convenient to blame Character.Ai, an entity already surrounded by controversy. But unless we recognize our collective failure to reach out to each other with patience and understanding, AI companies will only grow richer and stronger, and our resolve to remain human – ever weaker.
Yana Peeva is Editor-in-Chief. Email them at feedback@thegazelle.org.