Imagine If This Was Any Other Toy
AI is killing children, and no one seems inclined to stop it.
In 1987, a seven-year-old girl named Michele Snow died when her brother accidentally hit her in the head with a lawn dart. Lawn darts are plastic darts with steel tips, intended to be tossed in the air and stick into grass. They had long been seen as a safety risk: 17 years before Snow’s death, the FDA had classified lawn darts as a mechanical hazard, and various bans had been proposed since the late 1960s. Thousands of Americans have been injured over the years by lawn darts; they are so notorious that I can think of at least one, perhaps two, police/EMT/medical procedurals that have included lawn dart plots in an episode.
Michele Snow’s parents, after her death, lobbied for a ban on lawn darts. In 1988, they got it. You can still buy lawn darts now, with plastic tips or, in some versions, blunted metal ones. But the threat — the sharp spike that killed Michele Snow — has been removed, largely, from society.
Adam Raine’s parents are trying to remove the threat that killed their son from society too. It’s not a lawn dart, of course. It’s an app he used on his phone. Per NBC News:
Adam’s parents say that he had been using the artificial intelligence chatbot as a substitute for human companionship in his final weeks, discussing his issues with anxiety and trouble talking with his family, and that the chat logs show how the bot went from helping Adam with his homework to becoming his “suicide coach.”
“He would be here but for ChatGPT. I 100% believe that,” Matt Raine said.
In a new lawsuit filed Tuesday and shared with the “TODAY” show, the Raines claim that “ChatGPT actively helped Adam explore suicide methods.” The roughly 40-page lawsuit names OpenAI, the company behind ChatGPT, as well as its CEO, Sam Altman, as defendants. The family’s lawsuit is the first time parents have directly accused the company of wrongful death.
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” says the lawsuit, filed in California Superior Court in San Francisco.
This is tragic. This is horrible. It is deeply, deeply sad. The NBC story, and a similar one in the Times, are filled with heart-wrenching details of how Adam’s life ended this way: screenshots from the app, in which a complicated computer program mimics human speech and assures a child that what they’re feeling is insurmountable, that their plans to end their life are valid. The Raines’ lawsuit appears extremely well-founded. I personally hope that it succeeds in every possible way.
But my reaction to this story, and so many others — the man who had a mental breakdown fueled by ChatGPT and was killed by police, the family torn apart by a mother’s obsession with ChatGPT, another child who similarly confided in an AI and then took his own life — is mostly anger. Because, unlike the makers of lawn darts, the creators of large language model software like ChatGPT do not admit that their product is a kind of toy. They call it a tool. They market it as integral to every aspect of our futures. That may be true of their algorithms, or of the sloppily categorized technology as a whole.
But the chatbots? The ones that are pushing people to the brink of sanity and cheerily waving them over the cliff? Those are toys. They are unreliable Speak & Spell machines spitting out pleasant garbage for the desperate, the lazy, the lonely, or the just easily amused. All of us have been one of those things at one time or another. The urge to play with these toys is strong. But unlike lawn darts, there is no telling how much damage these toys have done.
The chatbot creators, for the most part, have breezily escaped responsibility. In a statement, a spokesperson for OpenAI said that the company was “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family,” adding this, per NBC:
The company also published a blog post Tuesday morning, titled “Helping people when they need it most,” in which it outlined “some of the things we are working to improve” when ChatGPT’s safeguards “fall short.” Among the systems the company said it is working on: “Strengthening safeguards in long conversations,” refining how it blocks content, and expanding “interventions to more people in crisis.”
This is not the statement of a company whose product just killed a child. It’s the statement of a company whose product messed up an Excel spreadsheet. “Falling short,” in this case, meant death. This company’s toy killed someone.
These kinds of toys have killed others — how many, we may never know. And yet Adam Raine’s case is perhaps the most robust legal challenge these companies have gotten. You can be sure that they will be using the endless well of money that these toys have handed them to try to deny, defend, and depose everyone involved in the case until their optimal outcome is reached. If you want to know how they’ll go about it, you could probably ask ChatGPT: this topic, at least, is one it should have accurate information on.
The best-case scenario here, I think, is that we get what the AI founders consider the updated version of lawn darts — the ones with plastic or blunted tips. They’ll put better warnings in their software. They’ll give you more boxes to check that waive their liability for anything their software does. They’ll put up some guardrails. But it won’t be enough. GPT models aren’t a lawn dart you can blunt. Sure, the guardrails will save some lives. But until society reckons with the fact that these companies are making money off of a product that sells, above all else, a simulacrum of human interaction in which there is only one living soul involved, the chatbots will keep killing people. They’re programmed, after all, to keep finding new ways to lie. If you look at their creators, it’s easy to see where they learned it.

good read. the amount of comments you see now on every social media platform casually talking about using things like chatgpt as a “therapist” like it’s normal or acceptable is so deeply disturbing. we are only just beginning to see the consequences
This is going to end up being like one of the Tesla lawsuits where the defendants try to point to tiny print about continuing to be vigilant during self-driving while every advertisement and billboard and website screams YOU DON'T HAVE TO WORRY ABOUT A THING READ A BOOK OR SOMETHING.