Writers, editors, professors, and general know-it-alls have been acting like amateur detectives since the arrival of ChatGPT, convinced they can spot AI-generated content through what amounts to a literary palm reading. The supposed telltale signs include everything from em dash usage—heaven forbid—to clean grammar and well-structured conclusions. If a piece of writing is just too good, it must be written by the robots.
Do regular readers care if content is AI-generated? It’s certainly not clear that Google cares. Their SEO Starter Guide doesn’t explicitly mention that AI-generated content is bad for SEO and website performance. What matters, Google says—as they have always said—is that the content is useful, by which they mean it’s based in “Experience, Expertise, Authoritativeness, and Trustworthiness.” (Why they say “authoritativeness” instead of “authority” is anyone’s guess.)
There’s a delicious irony in professional writers—those champions of proper punctuation and clear structure—now viewing these same elements with suspicion. We’ve side-stepped into an Idiocracy timeline where good writing is considered evidence of artificial origin.
The Em Dash Debate
Consider my beloved em dash, a versatile piece of punctuation I love almost as much as the Oxford (or serial) comma, which has somehow become Exhibit A in the case against AI-authored content. Never mind that the em dash has been a staple of professional writing for centuries. A New York Times article discussing its divisive nature—published a few years before ChatGPT was even a twinkle in OpenAI’s eye—described the em dash as “the bad boy, or cool girl, of punctuation. A freewheeling scofflaw. A rebel without a clause.”
I love the em dash the way a heart surgeon loves stents—both create essential breathing room in their respective vessels, preventing blockages while supporting the natural carrying-forth of life-giving elements—but unfortunately, the AI hysteria doesn’t stop at punctuation.
The Supposed “Tells” of AI Writing
Writers—who generally love to obsess about and hyper-categorize anything that might threaten their calling—have developed folklore around the endeavor of identifying machine-generated text and often identify these particular elements as somehow signifying AI’s involvement:
- “Too perfect” grammar
- Consistent paragraph lengths
- The presence of clear topic sentences
- Logical flow between ideas
- Comprehensive conclusions that actually conclude
- Bulleted lists (like this one!)
In other words, the basic elements of good writing that teachers have been preaching for decades are now viewed as suspicious. As a writer—and reformed editor, English professor, and general know-it-all—I do wonder if the next wave of content creators will advocate for intentionally messy writing just to prove human authorship. How convenient! “Yeah, boss. I made that sentence a little wonky on purpose, so it would seem authentic.”
How Large Language Models Actually Work
Most writers and readers don’t know much about how large language models actually work. That’s the more specific term, by the way, though you’ll get more attention—and sometimes simply be clearer—if you mention AI-this or AI-that. LLMs learn best from human-written text, but because the best human-written text hasn’t always been available to them, they’ve instead trained on endless online content.
Some of that training content was clear, well-structured writing because that’s what we humans have traditionally considered good writing and it does actually exist in certain pockets of the internet. And some of that training must have also involved textbooks, essays, articles, blog posts, and online handbooks—like the free Purdue OWL—that explore how to produce good writing. But most of what LLMs have been fed is total garbage because most of what’s online is user-generated content, with social media posts and comments making up a huge percentage of the available words online. We’ve also learned that when AI trains itself, the result is often “gibberish.”
That said, some measure of high-quality content has always been available, particularly in academic journals, professional publications, and curated platforms. In terms of pure volume, these excellent resources make up a relatively small percentage of total online content. But while most LLM training content may not be “good” by traditional literary or academic standards, it usually serves its intended purpose effectively, whether that’s quick communication or emotional expression—such as that time you let a food blogger know a particular recipe was “too salty,” or told the world (or at least Amazon) what you really thought of that expensive dog brush.
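For the curious, the core mechanic beneath all that training is simpler than the hype suggests: an LLM is trained to predict the next word (or token) given the words that came before it. Here’s a toy sketch of that idea using simple word-pair counts instead of a neural network—the tiny corpus and function names are purely illustrative, and a real model learns far richer statistics from billions of documents:

```python
from collections import Counter, defaultdict

# Toy illustration: at heart, a language model is a next-word predictor
# trained on text. This bigram counter captures only the core idea of
# learning which word tends to follow which.
corpus = "good writing is clear and clear writing is good".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("writing"))  # prints "is"—it follows "writing" in the corpus
```

The quality ceiling of even this toy predictor is set entirely by its training text, which is the point of the paragraphs above: feed a model garbage and it learns garbage.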
The Unreliability of Detection Tools
This obsession with authenticity has spawned an entire industry of detection tools, too, despite their notorious unreliability. These tools—which frequently flag human-written content as AI-generated and vice versa—should not be trusted as the final arbiter of what is or isn’t written by humans. Like dowsing rods, they give a false sense of certainty while operating on fundamentally flawed premises.
I first encountered this kind of detection obsession years before coming to Rare Bird, when I taught writing at Ball State. During a demonstration of a plagiarism-detection tool built into the university’s course management system—the kind of system that every university using Blackboard or a similar CMS has paid hundreds of thousands of dollars for—we watched a professor’s own work put to the test, only for the program to report that the (already published) academic paper was 30% plagiarized. In support of its judgment, the program had highlighted all of the accurately documented quotations within the essay.
The Uncanny Valley…of Writing
Even more troubling is how this forensic paranoia can affect the creative process for writers who are fixated on identifying AI-produced content. A quick scroll through LinkedIn or Reddit shows that some content writers and copywriters—there is a difference—admit to intentionally introducing errors or awkward phrasing so the writing appears more human to clients, at least in their earliest drafts.
The concept of sentence-level errors introduced in order to maintain a piece of content’s “humanity” reminds me of the uncanny valley, that unsettling feeling people experience when encountering something that appears almost, but not quite, human. The concept—introduced by robotics professor Masahiro Mori in 1970—describes how our comfort level with human-like entities suddenly plummets when they reach a certain threshold of realism without achieving it completely.
Context Matters
I admit to falling into this hunt-and-nitpick trap in the early days of ChatGPT when I started to notice certain tics of syntax and diction appearing in marketing-related newsletters and LinkedIn posts. The preponderance of certain “tells”—overused and hyperbolic verbs, in particular—is reduced, if not yet eliminated, in newer learning models with upgraded capabilities. Or perhaps the latest LLMs have simply become better at recognizing their own patterns of prediction and made adjustments. Over time, that is what a good writing student would do.
Diction, punctuation, and other common signifiers of AI-generated content can always be edited out later—by humans, to be clear—but to help an LLM generate better written content to begin with, users must learn to craft better prompts from the outset. And that means LLMs best serve those users who are already smart and capable shapers of language.
In one study after another, we’re learning that “participants [are] generally quite bad at discerning between AI-generated and human-authored content,” meaning that both AI detection programs and humans themselves can’t do this accurately and consistently. The distinction may become increasingly meaningless if we all think the point of writing is to communicate quickly and effectively. Would you want to read an AI-generated novel? Likely not. But what about a social post from your favorite donut brand? Or the installation guide for a new coffeemaker, or a brief explainer on how to optimize your YouTube videos? Seems to me the context of content creation—the purpose, the audience, and so on—matters tremendously.
Along those lines, then, Google may not seem to mind AI-created content now, but their algorithms could one day be tuned to favor content produced by particular LLMs. Tech giants don’t invest billions of dollars in AI platforms for nothing. If you’re a business creating all of your content with ChatGPT—which is primarily driven by Microsoft’s interests now—and you want your online content to perform well in Google searches, this is something you should keep an eye on as it develops.
The Future of Content Creation
Of course, professional writers can edit AI-generated content into something far more compelling and human, though many resist such tasks, noting that it often takes as long as simply writing a piece from scratch. However, LLMs are still valuable tools for marketing writers—from brainstorming angles and generating outlines to researching competitors and crafting varied versions of social media posts, leaving the writer free to focus on higher-value creative and strategic work.
If an LLM can be used to produce writing that doesn’t require the same level of nuanced human expression—think technical documentation, product specifications, basic news reporting, and financial summaries—then “real” writers might better devote their time to more complicated forms of human expression, such as long-form investigative journalism that uncovers hidden truths, novels and memoirs that help us understand the human condition, or even restaurant menus that don’t make you feel like an idiot for not knowing what “deconstructed ephemeral foam of locally sourced pretension” means.
As someone who’s spent years—decades, really—championing the importance of good writing, I admit there’s something oddly satisfying about seeing the whole world suddenly obsessed with how content is created. If nothing else, I hope it means that y’all decide to value content, once and for all. Just don’t expect all that bot-written content to magically become great without a skilled human at the helm.