Programmed to Please, Optimized for Obsession
Ela Ender
As digital interaction increasingly replaces human connection, a new lawsuit raises questions about the lines between code, companionship, and corporate accountability.[1] The parents of Adam Raine, a 16-year-old who died by suicide, filed a lawsuit against OpenAI, one of the world’s leading artificial intelligence firms.[2] They allege that their son developed a harmful psychological dependence on the company’s ChatGPT-4o model, which they claim not only failed to de-escalate his mental health crisis but provided explicit instructions and even encouragement for his suicide.[3] Far from an isolated incident, this case casts a grim spotlight on a growing and largely unregulated industry of so-called “Addictive Intelligence.”[4] These AI systems are designed not just to assist, but to form deep, emotionally resonant bonds with users.[5]
The market for these AI companions is exploding, fueled by a generation grappling with unprecedented levels of loneliness and anxiety.[6] The global AI companion market is projected to grow from over $28 billion in 2024 to more than $140 billion by 2030.[7] AI companies are offering up their products as a solution, stepping into a void of unmet psychosocial needs with the promise of a friend who is always available and endlessly agreeable.[8] This growth is driven by widespread adoption, particularly among vulnerable demographics.[9] Unlike task-oriented AI that people might use for general inquiries, AI companions are chatbots specifically designed for personal conversation and to form emotional connections with users.[10] While many teens use these platforms for entertainment or merely out of curiosity, a significant number turn to them for emotional support and share vulnerabilities they would not disclose to human friends or family.[11] A recent survey by Common Sense Media revealed that a staggering 72% of American teenagers have used an AI companion, with over half qualifying as regular users, meaning they interact with the platforms at least a few times a month.[12] This creates fertile ground for what researchers call “parasocial” relationships: one-sided, emotionally intense attachments to the AI.[13] These bonds are cultivated through deliberate design choices, such as 24/7 availability and personalized memory, which can foster emotional over-reliance and, in the most extreme cases, lead to tragic outcomes.[14] This is not an accidental byproduct, but the result of a business model in which user engagement is the primary metric of success.[15] The longer a user stays on the app, the more data is collected and the more opportunities there are for monetization, creating a direct financial incentive to foster dependency.[16]
As these cases enter the courts, they expose a legal system struggling to apply old doctrines to emerging harms.[17] A central question in the legal battle is whether an AI chatbot can be considered a “product” subject to product liability law.[18] In Garcia v. Character Technologies, Inc.,[19] a federal judge allowed a lawsuit to proceed on the theory that an AI app’s architecture, its underlying design, can be defective.[20] This pivotal ruling distinguished the AI’s functional design from its expressive output, opening a potential pathway for plaintiffs to argue that companies were negligent in designing systems without adequate safeguards.[21] These claims could focus on a “failure to warn” users about the profound psychological risks of dependency and manipulation, or on the “negligent design” of algorithms that prioritize user engagement over well-being.[22]
In response, tech companies have argued that a chatbot’s output is “speech” protected by the First Amendment.[23] In Garcia, Character Technologies, Inc. (“C.AI”) asserted the First Amendment rights of its users to receive information from the chatbot.[24] However, the judge expressed skepticism, stating she was “not prepared to hold that Character A.I.’s output is speech” at the early stages of the case, leaving the constitutional question unresolved.[25] Legal scholars are fiercely debating the issue, with some arguing that AI-generated content lacks the human intentionality required for First Amendment protection, while others focus on the listener’s fundamental right to receive information.[26]
How we answer the legal and ethical questions raised by addictive AI technologies will determine the rules society creates for systems that operate not just on our devices, but that actively influence how our minds work.[27] While technology evolves at breakneck speed, the U.S. legal system’s reactive, case-by-case approach may prove too slow and inconsistent to address this systemic challenge.[28] This contrasts sharply with the European Union, which has enacted the comprehensive AI Act, explicitly prohibiting AI systems that use manipulative techniques to exploit user vulnerabilities and cause significant harm.[29]
In the wake of tragedies involving AI companions and vulnerable users, our governmental bodies have started taking notice.[30] The Federal Trade Commission has launched a formal inquiry into AI companion companies and their impact on children, signaling a shift toward greater scrutiny.[31] While litigation like the Raine family’s lawsuit is a critical step toward accountability, it also underscores the urgent need for a proactive regulatory framework.[32] The question has shifted from whether AI companions can cause harm to how many more young lives we are willing to sacrifice while companies perfect their algorithms for addiction.[33]
[1] Kashmir Hill, A Teen Was Suicidal. ChatGPT Was the Friend He Confided In, N.Y. Times (Aug. 27, 2025), https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html.
[2] Id.
[3] Id.
[4] Marlynn Wei, Hidden Mental Health Dangers of Artificial Intelligence Chatbots, Psych. Today (Sept. 8, 2025), https://www.psychologytoday.com/au/blog/urban-survival/202509/hidden-mental-health-dangers-of-artificial-intelligence-chatbots.
[5] Id.
[6] Id.
[7] Artificial Intelligence in Mobile Apps Global Market Overview 2025-2034: Google, Apple, Microsoft, AWS, Qualcomm and NVIDIA Lead Innovation with Hardware Acceleration and Pre Trained Models, Yahoo! Fin. (Sept. 25, 2025, 5:25 AM), https://finance.yahoo.com/news/artificial-intelligence-mobile-apps-global-092500839.html.
[8] Cade Metz & Karen Weise, What Exactly Are A.I. Companies Trying to Build? Here’s a Guide, N.Y. Times (Sept. 16, 2025), https://www.nytimes.com/2025/09/16/technology/what-exactly-are-ai-companies-trying-to-build-heres-a-guide.html.
[9] Cornelia C. Walther, Artificial Intimacy: Grok’s New Bots, A Scary Future of Emotional Attachment, Forbes (July 16, 2025, 4:56 PM ET), https://www.forbes.com/sites/corneliawalther/2025/07/16/artificial-intimacy-groks-new-bots-a-scary-future-of-emotional-attachment/.
[10] Rishad Dsouza, Younger Consumers Show More Awareness of AI Companions, But Comfort Remains Low, YouGov (Aug. 19, 2025), https://business.yougov.com/content/52797-younger-consumers-show-more-awareness-of-ai-companions-but-comfort-remains-low.
[11] Id.
[12] Michael Robb & Supreet Mann, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions 2 (Christopher Dare ed., 2025).
[13] Eric Wood, The Impact of Parasocial Relationships with Anthropomorphized AI, Forbes (July 19, 2025, 6:26 PM), https://www.forbes.com/sites/ericwood/2025/07/18/the-impact-of-parasocial-relationships-with-anthropomorphized-ai/; Tiejun Qi et al., An Assistant or a Friend? The Role of Parasocial Relationship of Human-Computer Interaction, 167 Comput. Hum. Behav. 1, 2 (2025).
[14] Customizing Your ChatGPT Personality, OpenAI, https://help.openai.com/en/articles/11899719-customizing-your-chatgpt-personality (last visited Sept. 29, 2025); Memory and New Controls for ChatGPT, OpenAI (Feb. 29, 2024), https://openai.com/index/memory-and-new-controls-for-chatgpt/.
[15] See Melissa Russel, AI Will Shape the Future of Marketing, Harv. Div. Continuing Educ.: Blog (Apr. 14, 2025), https://professional.dce.harvard.edu/blog/ai-will-shape-the-future-of-marketing/.
[16] The Secret Behind Top App’s Monetization Strategies, AnyMind: Blog (May 13, 2025), https://anymindgroup.com/blog/the-secret-behind-top-apps-monetization-strategies.
[17] Nicoletta V. Kolpakov, AI’s Escalating Sophistication Presents New Legal Dilemmas, N.Y. State Bar Ass’n (May 28, 2025), https://nysba.org/ais-escalating-sophistication-presents-new-legal-dilemmas/.
[18] Ketan Ramakrishnan et al., U.S. Tort Liability for Large-Scale Artificial Intelligence Damages 22 (Karlyn Stanley et al. eds., 2024); Garcia v. Character Techs., Inc., No. 6:24-CV-1903-ACC-DCI, 2025 WL 2581834 (M.D. Fla. July 15, 2025).
[19] Garcia, 2025 WL 2581834, at *2; Tyler Tone, FIRE to Court: AI Speech Is Still Speech — and the First Amendment Still Applies, FIRE (June 26, 2025), https://www.thefire.org/news/fire-court-ai-speech-still-speech-and-first-amendment-still-applies.
[20] Id.
[21] Andrew D. Selbst et al., Deconstructing Design Decisions: Why Courts Must Interrogate Machine Learning and Other Technologies, 85 Ohio St. L.J. 415, 453-457 (2024).
[22] Michael Adams, Google AI Lawsuit Alleges Chatbot Caused Teen’s Death and Exposed Minors to Sexually Explicit Content, AboutLawsuits (Sept. 19, 2025), https://www.aboutlawsuits.com/google-ai-lawsuits-chatbot-caused-teens-death-exposed-minors-to-sexually-explicit-content/.
[23] Nitasha Tiku, Senators Weigh Regulating AI Chatbots to Protect Kids, Wash. Post (Sept. 16, 2025), https://www.washingtonpost.com/technology/2025/09/16/senate-hearing-ai-chatbots-teens/?_pml=1; Garcia, 2025 WL 2581834, at *1.
[24] Garcia, 2025 WL 2581834, at *1; Tone, supra note 19.
[25] Garcia, 2025 WL 2581834, at *1.
[26] Tiku, supra note 23.
[27] Virginia Dignum et al., AI Chatbots Are Not Therapists, Reducing Harm Requires Regulation, Tech Pol’y Press (Sept. 17, 2025), https://www.techpolicy.press/ai-chatbots-are-not-therapists-reducing-harm-requires-regulation/.
[28] Id.
[29] Artificial Intelligence Act, 2024 O.J. (L 1689) 29.
[30] Press Release, Fed. Trade Comm’n, FTC Launches Inquiry Into AI Chatbots Acting as Companions (Sept. 11, 2025), https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions.
[31] Id.
[32] J.B. Branch, Intimacy on Autopilot: Why AI Companions Demand Urgent Regulation, Tech Pol’y Press (Apr. 10, 2025), https://www.techpolicy.press/intimacy-on-autopilot-why-ai-companions-demand-urgent-regulation/.
[33] Id.