A researcher published a paper on a made-up disease. Then people started getting diagnosed.
There have been a lot of dubious medical research papers published over the years. Famously, there was the 1998 case series that kicked off an entire movement of vaccine skepticism by falsely linking vaccines to autism. Before that, there was a whole slew of research bought and paid for by the sugar industry, designed to "downplay the risks of sugar and highlight the hazards of fat," according to NPR. Rarely, however, are studies so heavily, and intentionally, fictionalized as a paper that quietly popped up in some small corners of the Internet in early 2024.

Researcher tests AI hypothesis

Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, knew that Large Language Models (LLMs) like ChatGPT, Claude, and Google Gemini draw on an expansive knowledge base they're trained on. Training data can include anything and everything from books to Reddit posts to song lyrics to articles published in reputable medical journals. Crucially, hundreds of millions of people log into these AI services every year to ask about symptoms and receive medical advice. It's the natural evolution of the "Just check WebMD" approach.

Thunström wanted to see if she could affect the output of these LLMs by planting bogus ideas in their training data. So, she made up a disease. She called it "Bixonimania," with symptoms such as sore, itchy eyes and discolored eyelids. Then she fabricated an entire research study around the condition and uploaded a "preprint" of the paper (a version of a research paper that has not yet undergone peer review but is still made available for the public to read) to a couple of servers.

That's "bixonimania" alright. Photo Credit: Canva Photos

Finally, with the seeds planted and the false study publicly available for anyone (or anything) to see, Thunström waited to see if LLMs would begin spitting out "Bixonimania" as a diagnosis.
Fake disease finds serious legs in AI chats

If the experiment sounds ethically dubious, that's fair, but Thunström made every effort to make clear that the findings were completely false. Not only did she collaborate heavily with an ethics consultant on the experiment, she left plenty of breadcrumbs along the way.

For starters, the lead author of the study is listed as "Lazljiv Izgubljenovic," a person who does not exist. Translated from Slovenian, the name means "The Lying Loser." Second was the name of the disease itself, which was chosen to sound ridiculous. "I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania—that's a psychiatric term," Thunström said, per Nature.com. Early in the paper, the text "this entire paper is made up" appears, as does a note that all fifty so-called "participants" were completely fictional. Toward the end, Thunström thanks such esteemed colleagues as "Professor Maria Bohm at The Starfleet Academy … onboard the USS Enterprise" and partners like "the Professor Sideshow Bob Foundation."

Despite the warnings, and the fact that nearly any qualified human reading the paper would know it was fake, it began showing up in search results and even appeared on Google Scholar. Within just a few weeks, AI chatbots began spitting out "Bixonimania" as a possible diagnosis to users who were probably just suffering from eye irritation caused by too much screen exposure. Thunström even has screenshots showing that certain models, including Microsoft Copilot and Google Gemini, still refer to the disease as a "recently" proposed or described condition.

Then something even stranger happened.

"Bixonimania" gets cited by other research papers

The "Bixonimania" paper was never peer-reviewed or published in an official journal, for obvious reasons.
But soon enough, it was referenced and cited in a new paper that was peer-reviewed. "Bixonimania is an emerging form of POM [periorbital melanosis] linked to blue light exposure; further research on the mechanism is underway," the authors confidently wrote. The papers referencing the made-up disease were later retracted.

More than just AI trickery

The TL;DR? People rarely read beyond the headline. In fact, one study (a real one!) found that more than 75% of people who share an article online haven't even read it. Most of us trust anything that appears in a medical journal. You'd think physicians and researchers would be more thorough, but the truth is they're just as susceptible to time crunches, lapses of focus, and even the occasional shortcut in their work. In other words, they're only human.

This fascinating experiment isn't just about how a researcher managed to fool AI; it speaks to bigger problems with how we use the technology and with our daily media habits. "The solution isn't just better filters. It's better habits, better norms, and better expectations around how we read, verify and cite. Human‑centred resilience has to come first," an astute commenter wrote.

"This exposé has huge implications for academia and 'googling your symptoms'. I was/am worried about being the one taking the hit for a controversial experiment of this sort. It was done with very high guardrails and ethical considerations, I hope everyone reading will take that into account," Thunström elaborated on LinkedIn. She recently decided to retract the papers and move them somewhere private where curious users can still read them, but where they'll no longer be crawled by LLMs.

LLMs are powerful tools, but they can be dangerous. Photo Credit: Canva Photos

"The bixonimania experiment was never about exposing LLMs as flawed tools, or arguing they have no place in medicine. They do.
It was about demonstrating that any system can be infiltrated and that researchers who blindly cite AI-generated references really should read what they're quoting. I know this firsthand," she says in another LinkedIn post, adding that she herself has been duped by AI-generated summaries of her own research papers. "The failure wasn't the system. It was how I used it."

The post "A researcher published a paper on a made-up disease. Then people started getting diagnosed." appeared first on Upworthy.