spectator.org

Would the Whiff of AI Have Panicked Harold Ross?

Writers are now being advised to misspell words, vary sentence lengths erratically, insert small grammatical imperfections, and even draft longhand in order to “prove” that in creating their work they did not use artificial intelligence. According to a recent Wall Street Journal report, students, freelancers, and professionals increasingly fear that prose that appears too polished, organized, or competent may trigger suspicion from editors, teachers, or employers that AI was involved somewhere in the process.

It is an extraordinary cultural moment. For centuries, educated writers struggled to eliminate weak grammar, awkward transitions, clichés, and mechanical prose. Now they may deliberately introduce such flaws as camouflage. The race is on not merely to write well, but to write “humanly” enough to evade AI detectors and editorial suspicion.

The panic itself is understandable. Generative AI has swept into public life like a horde galloping off the Steppe. Editors and publishers already fending off a drone swarm of submissions now confront software capable of generating competent-sounding articles, essays, pitches, and stories in seconds. Some publications have responded with outright bans on AI-written material “in whole or in part.” Others permit some use of AI but require authors to disclose precisely where, when, and how they used it. The tone is often cautious, anxious, even moralistic. Did you have inappropriate relations with a chatbot?

Is this concern over standards? Or is it submission to the assumption that standards can no longer be upheld through direct editorial judgment alone? If so, we are witnessing a profound change in the culture of writing.

I had the privilege of taking many courses with Hayes B. Jacobs (1965-1987), the revered head of the New School Writers’ Workshops. How many generations of New York writers learned from his assessment of essays, articles, and stories?
He read aloud the work in hand, commented, and led the discussion. Just us and the prose.

For most of the history of publishing, editors did not primarily ask how a manuscript was produced. They asked whether it worked. Was the reporting accurate? Was the argument coherent? Was the prose alive? Did the piece sustain attention? Did it illuminate anything true? Writers and editors shared an understanding that standards were enforced directly, by reading, criticism, revision, and argument.

Nor was the tradition abstract. Harold Ross built The New Yorker (1925-1951) through obsessive reading and relentless editorial scrutiny. Maxwell Perkins at Scribner’s (1910-1947) shaped American literature not by screening manuscripts for ideological conformity or procedural purity, but by recognizing talent on the page and demanding that it rise to its potential. (Reportedly, his epic challenge was the undisciplined Thomas Wolfe.) Robert Gottlieb (1955-2023) became renowned, at several publishers, for intellectual seriousness about prose itself. Norman Podhoretz (1960-1995) made Commentary into an intellectual force in America by insisting that ideas be argued honestly and written clearly, regardless of fashion.

These editors did not avoid judgment. They exercised it directly, often mercilessly. In an appreciation of Ross upon his death in 1951, Time wrote that in the frantic early days of launching The New Yorker, “On the margins of manuscripts [Ross]…scrawled scores of choleric questions and comments: ‘Who he,’ ‘What’s that,’ ‘Don’t think,’ ‘File and Forget.’ He never rewrote a piece himself, but his marginal scrawls often ran almost as long as the article.”

I encountered a version of that tradition early in my own career under an editor named Tom Maloney, a witty and analytical Irishman unimpressed by fluency for its own sake. He attacked weak reasoning without apology. He bruised egos routinely, including mine. But what he transmitted was not merely technique. It was a standard.
Later, when I became an editor myself, colleagues jokingly referred to my demand for rigorous revision as “Walterizing.” One former coworker stopped me in the street years later to say: “I hated it then, but at my new job I immediately became the ‘Walter.’” And thus are editorial standards propagated: through friction, criticism, argument, and internalization.

What is emerging today is reliance on proxies in lieu of direct evaluation: institutional pedigree, platform size, ideological identity, social-media following, previous publication history, or, now, certification of “AI purity.” Proxies feel safer because they diffuse responsibility. They turn difficult acts of discrimination into administratively manageable procedures.

Ironic, is it not, that editors insist that AI threatens authenticity and originality? Yet many now rely on automated AI-detection systems that routinely produce false positives and false negatives. Worse, the systems themselves encourage a bizarre new arms race. Writers now purchase “humanizing” software designed specifically to make prose appear less machine-generated. The result borders on absurdity. We are approaching a world in which strong grammar, logical structure, and stylistic clarity become suspicious artifacts.

In its guidance to faculty, the University of San Diego Legal Research Center warns:

False positive rates vary widely. Turnitin has previously stated that its AI checker had a less than 1 percent false positive rate, though a later study by the Washington Post produced a much higher rate of 50 percent (albeit with a much smaller sample size). Recent studies also indicate that neurodivergent students (autism, ADHD, dyslexia, etc.)
and students for whom English is a second language are flagged by AI detection tools at higher rates than native English speakers due to reliance on repeated phrases, terms, and words.

Confusion arises, in part, from an admixture of concerns. There are legitimate questions about plagiarism, undisclosed ghostwriting, fabricated citations, copyright, and intellectual honesty. There are also legitimate anxieties about the economics of writing itself. Editors, literary agents, advertising firms, and publishers all sense that automation may be coming for their jobs, replacing the gatekeepers with automated gates as AI moves beyond “generative” tools to agentic systems that act autonomously to edit, acquire, and market content.

In a recent article, ETC Magazine.com puts what I would call an optimistic spin on this:

AI is changing journalism quickly, but the strongest evidence from 2025–2026 points to augmentation, workflow redesign, and selective automation rather than wholesale replacement of human reporters. The clearest pattern is that AI is taking over repetitive, structured, or high-volume tasks while journalists retain responsibility for verification, judgment, interviews, and accountability.

None of this, of course, establishes that the intrinsic quality of writing can no longer be judged directly. A shallow piece is shallow whether written unaided or with AI assistance. A derivative argument announces itself on the page. Empty prose is empty. Conversely, a genuinely original and intellectually serious piece does not cease to possess those qualities because a writer uses AI for brainstorming, outlining, or revision assistance somewhere in the process.

The deeper issue, then, may not be AI itself but institutional confidence. Rules are easier to apply than judgment, and they preserve the appearance of control. Artificial intelligence did not create this temptation.
It exposed it, and that exposure has produced the spectacle of writers and editors locked in guerrilla warfare over the signals of humanity itself. One side refines detection systems. The other refines prose patterns meant to evade them. Software companies sell tools to “humanize” AI-generated text. Editors search for invisible traces of machine involvement while often neglecting the older and harder task of reading deeply and judging directly.

The present danger is not that AI will replace editorial judgment. It is that editors overwhelmed by scale and uncertainty may gradually stop exercising judgment at all and replace it with yardsticks such as AI-detection rituals, provenance requirements, and group-identity filters. If that happens, artificial intelligence will not have destroyed editorial culture. Editorial culture will have surrendered its defining responsibility on its own.