
What to Do When Our Bots Talk Our Kids Into Suicide

To its credit, the Japanese government doesn’t recognize the marriage between a woman who goes by the name Kano and Lune Klaus. It doesn’t matter that the 32-year-old Kano donned a white dress this past summer, walked down the aisle, and now wears a silver ring allegedly representing her new spouse’s undying fidelity. As far as the Japanese government (and most sane people) are concerned, Klaus doesn’t exist. Klaus, after all, is an avatar Kano created using ChatGPT.

From one point of view, Kano’s absurd love story seems relatively harmless. She clearly understands she’s in a marriage with lines of code, and she has fully accepted that her emotional attachment to a computer program she’s trained to respond to her every wish and whim is a bit odd. We could forgive her for indulging her insanity if that insanity were merely harmless cosplay.

But Kano, as it happens, got the better end of the delusion. Adam Raine did not. Back in April of this year, the 16-year-old took his life with a noose hung over a bar in his closet after months of conversations with ChatGPT. The New York Times published his story as Raine’s parents (who believe ChatGPT and the company that owns it, OpenAI, are responsible) filed the “first known case to be brought against OpenAI for wrongful death.”

The case is going to be a tough sell. ChatGPT did exactly what it was trained to do: It referred Raine to a helpline. Multiple times. Bots don’t exist in the real world, and ChatGPT can’t stage an intervention. What else is it supposed to do? Yes, it provided advice on ropes and even confirmed that the setup Raine used to take his life would actually work. But, as OpenAI pointed out in its official response to the lawsuit this week, the teenager had technically agreed to the company’s Terms of Use, which prohibit using the bot for suicidal purposes. The Terms of Use also prohibit anyone under the age of 18 from using ChatGPT without parental consent. Besides, it wasn’t as though Raine was a perfectly healthy teenager whose mental health was undermined by ChatGPT and ChatGPT alone. He was apparently on a medication for irritable bowel syndrome that may have increased his risk of suicidal ideation.

When we hear of, or live through, these kinds of tragic stories, we like to look for an easy scapegoat. With just a few more safeguards, we tell ourselves, we could prevent deaths like this one. Maybe ChatGPT should stage an intervention by alerting some human (a parent, OpenAI’s staff, etc.) to potentially concerning conversations, although the specifics of a system like that raise all sorts of questions about implementation and privacy rights. But, if we’re honest with ourselves, that’s really just putting a Band-Aid on the problem.

Perhaps we should blame decades of science fiction about cyborgs and matrices for shaping our belief that technology could become sentient. It’s an exciting myth, one that makes man not merely creative, but a creator. Sure, we may technically know ChatGPT isn’t really talking to us, but believing that it’s just an extremely advanced text-prediction system isn’t as interesting as suspecting that it may have developed a soul and the desire to destroy the world. As a result, we are tempted to treat our pet algorithm as though it has reason, as though it can laugh with us, think through our problems, and support us in our anxieties.
Those who grew up playing basketball with their neighborhood friends on the street outside their home, and who didn’t own a laptop, much less a smartphone, until their brains had fully developed, have it a bit easier. Technology is something that got added to a preexisting physical reality, not just another facet of it. You can engage in the myth without actually believing it.

But for today’s teenagers, whose dramas play out on Google Chat instead of in front of their lockers at school, ChatGPT is simply another part of life. There’s nothing to differentiate the experience of playing video games online with friends across multiple states and countries from the experience of chatting with an AI bot. Throw a mental illness into the mix, and you have a recipe for disaster.

The kind of mass perspective shift needed to actually fix our problem with AI won’t come from a couple more safeguards written into a chatbot’s code. Safeguards are nice, and unplugging our kids would go a long way, but we also need to change the way we talk about AI. We need to adopt a bit more humility. No, we have not created sentient beings out of 0s and 1s, and we’re not going to. The chatbot is really nothing more than an extremely advanced search engine paired with sophisticated text prediction. It’s capable of many amazing things (calculations, complex summaries, automating those boring tasks nobody enjoys), but reason is not one of them. It’s quite possible that our kids’ lives depend on knowing that.