How To Spot AI-Generated Content: 16 Subtle Signs

In the dynamic world of marketing, the ability to discern between human and AI-generated content has become a cornerstone of digital literacy. Not only does it enhance our understanding of the content we consume, but it also allows us to navigate the complexities of the ever-evolving digital landscape. 

If that intro had you raising an AI-suspecting eyebrow, congrats – you’ve already developed a natural radar for AI-generated content. If not, don’t stress. Below, you’ll find all the details you need to calibrate your AI radar. Once you’ve reached the end of the article, circle back and see how many AI-generated red flags you can spot in the intro. 

Why Should We Care If Something Was Written By AI?

Fair question. 

To borrow a word chatbots (and marketers) love to overuse, AI has revolutionized content creation, making it faster, more efficient, and more accessible than ever before. As an editor, I’ve seen it transform people’s writing for the better, helping them eliminate grammatical errors and tidy up their prose. However, I’ve also seen it suck all the life out of people’s work, making articles feel like they could have been written by anyone (or no one at all). 

The issue is that AI, in its current iteration, lacks the human touch – the idiosyncrasies, personal anecdotes, and unique voices that draw you into a piece of writing and conjure images in your mind. AI chatbots are to writing what social media filters are to faces, giving a technically perfect sheen but in a way that makes everything they touch feel uniform and flat. So, in learning how to recognize AI-generated content, you simultaneously learn to identify content that’s technically fine but about as thrilling as a wet crouton. 

Here, I’d like to quickly note that I’m talking about content generated by free or low-cost tools like ChatGPT. I’m also talking about content that has either gone unedited or minimally edited. If you’re using a more sophisticated chatbot trained on specific content guidelines, these red flags may not apply. Likewise, if you use free or low-cost AI tools to generate ideas or first drafts, you’re probably already editing these red flags out of the final product. However, some of them may be slipping by unnoticed, so it’s still a good idea to know the words and phrases that are heavily overused in AI-generated content.

With the ‘why’ out of the way, let’s dive into the ‘how’ with 16 dead giveaways that something was written by a chatbot.

Was It Written By AI? Red Flags of AI-Generated Content

While they may be sophisticated and capable of perfect grammar, AI tools like ChatGPT often fall into predictable patterns that serve as red flags. Here are some common ones to look out for:

Eras, Ages, Landscapes, and Tapestries 

If an article starts with “In an era when…”, “In the digital age…”, “In the modern business landscape…”, or any other reference to eras, realms, ages, or landscapes, that’s your first red flag. This is particularly common with corporate content. Collect bonus red flag points if the text also refers to a tapestry. If it’s a rich or intricate tapestry, you’re knee-deep in AI-generated prose. 

Not Only… But Also 

AI has a tendency to overindulge in the “not only… but also” construction, particularly when it reaches the end of a paragraph. While skilled human writers employ this correlative conjunction judiciously, AI isn’t so discerning. Not only does it overuse the construction, but it also reliably fails to introduce anything unexpected or surprising with its “but also.”

As just demonstrated, humans use this correlative construction too (if sarcastically). Also, the convention of introducing something surprising after the “but also” isn’t a strict rule. However, the structure can lead to long-winded sentences, so it’s generally best practice to reserve it for times when you have a connected but unexpected “but also” to add.

An Overabundance of Semicolons

In the 14+ years I’ve been working as an editor, I’ve never encountered a human writer who uses semicolons as frequently as AI does. One of the most common constructions I see is “X isn’t just a Y; it’s a Z.” E.g., “Growthocracy isn’t just a website; it’s a revolution in the making.”

There’s nothing technically wrong with this. The grammar is fine, and the construction works. However, the semicolon use gives a slight indication that AI may have been involved, especially if there are other red flags in the content.

The other issue is that the “X isn’t just a Y” phrase has long since been sucked dry by the marketing industry. Smartphones aren’t just phones; they’re gateways to infinite knowledge. Shoes aren’t just shoes; they’re innovation in motion. Nothing is what it is anymore – it has to be more. Brands proudly declare, “We’re more than just an X; we’re a Y.” AI has heard of the revolution, and it has an army of semicolons ready to join in.

Pointless Verbs

AI often includes unnecessary verbs at the start of sentences. For example, it might say, “Utilizing technologies such as blockchain can enhance transparency.” However, “Technologies such as blockchain can enhance transparency” works just fine.

Corporate Speak

Speaking of “utilizing technologies such as blockchain”, AI tends to use corporate terms like “utilizing” and “leveraging” instead of simpler words like “using.” If it feels like your content is wearing the syntactical version of a cheap suit, it was probably stitched together by AI. 

Blockchain References 

Still on the topic of “utilizing technologies such as blockchain”, AI seems to be algorithmically aroused by blockchain. If blockchain is mentioned where it doesn’t quite fit, that’s a fairly good clue that you’re reading AI-generated content.

Perfect Replicas

If a plagiarism check reveals any exact replicas of published sentences, that’s a major clue that the text was written by AI. Here, I’m not talking about simple constructions like “Read on for all the details.” I’m talking about sentences so unique that it’s highly improbable two individuals just happened to construct them independently.

If a few replica sentences are highlighted, you’re in dangerous territory whether the content was AI-generated or not. The full impact of AI on academic integrity and workplace ethics is yet to be determined. However, it remains the case that plagiarized content can damage your credibility and destroy your SEO efforts. So, from a purely practical and self-serving standpoint, it pays to weed it out.

Everything Is Dynamic

If there’s one word AI loves more than “revolutionizing”, it’s “dynamic”, especially “the dynamic world of…” AI falls back on this phrase often, especially when setting the scene for a topic in the introduction. E.g. “In the dynamic world of business, effective financial forecasting is the cornerstone of strategic planning and long-term success.”

Everything Has Cornerstones

Speaking of cornerstones, that’s another word AI uses excessively. 

There’s Lots of Nestling Going On

If something doesn’t have a cornerstone, it’ll most certainly be nestled into something else. Human writers use this word a lot too, but if you see it pop up multiple times in an article, that’s your sign that it may be AI-generated.

Please Sir, My Complexities, They Must Be Navigated

AI also seems to think humans spend a lot of time “navigating the complexities” of various activities. It’s not entirely wrong. Life is a multifaceted experience, and its environments – both physical and figurative – do require navigation. But AI certainly favors that particular phraseology.

E.g. “Navigating the complex landscape of tax laws is a critical aspect of running a successful business.”

Lashings of Lifeblood

This one is as common as “cornerstone” but far more unsettling. Some examples:

“Cash flow is the lifeblood of any business.”

“In the ever-evolving landscape of technology, data has become the lifeblood of our digital world.”

Lack of Actual Lifeblood

Though it loves talking about lifeblood, AI-generated content generally lacks color, uniqueness, and life. It can recreate stories that have already been shared online. It can even remix them into copy-pasted fictional tales if you ask it to. However, it tends to be more fact-focused. If stories or examples are featured, they usually lack the nuances and personal quirks that come with the real human touch.

Lack of Conversational Constructions and Sentence Fragments

I’ve worked with some of the writers in my team for around five years now. In that time, I’ve become so familiar with their individual writing styles that I can usually pick the author of a piece without needing to check. A small example: One of my writers uses an endless list construction when she wants to convey a sense of overwhelm. E.g. “Waking the kids up, prepping brekky, packing school lunches, waking the kids up again, last-minute homework help, school dropoff – a mother’s list of morning tasks can often feel endless.” 

It’s not totally unique to her, but I do see her use it a lot. More to the point, I’ve never seen AI come up with a construction like this because it’s not technically correct. There should be an “and” before the final item in the list. Leaving it off creates a feeling that the list is endless, but AI doesn’t understand this. Since it doesn’t understand, it sticks to grammatically correct constructions. So far, I haven’t seen any chatbots play with sentence fragments or other idiosyncratic ways to twist the rules of grammar. 

If a piece feels like it could have been written by anyone with a Grammarly account, that’s your giveaway that it’s the product of AI or a writer who didn’t really have their heart in the project. 

Signposted Conclusions

One thing high schoolers and chatbots have in common is that they’ve been taught to write “In conclusion…” when wrapping up a piece of writing. So, if you spot “In conclusion” or “In summary” in anything other than a high school essay, it was probably written by AI. Of course, statistically speaking, that high school essay was probably written by AI too. 

Lazy Phrases and Unnecessary Analogies

When introducing a topic, AI often picks worn-out phrases like “When it comes to…” It also introduces ideas with “Who knows?” far more often than any human writer I’ve encountered. And it loves concluding its final paragraphs with “Remember…”

In marketing copy, if you see “X is all about Y,” you’re either looking at AI content, lazy human writing, or a rough first draft. And in any content at all, if unnecessary analogies are used to describe straightforward concepts, that analogy was probably dreamed up by AI. 

Here’s an example that brings together a few red flags in one glorious concluding remark: “Remember, safeguarding your data isn’t a one-time event, but a journey. It’s like tending a garden; it requires regular care and attention.”

These red flags will likely evolve along with the development of AI, some dropping away as others emerge. It’s also worth noting that they’re not inherently bad or wrong. The issue is that, whether written by humans or algorithms, their overuse can lead to monotonous content that fails to engage.
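If you’d like to turn your new AI radar into something more systematic, the red flags above lend themselves to a crude automated check. Here’s a quick illustrative sketch in Python – the phrase list and the simple counting approach are my own assumptions for demonstration, not a rigorous detector, and a high count only suggests (never proves) AI involvement:

```python
import re

# Red-flag phrases drawn from the article. This list is illustrative,
# not exhaustive, and matching is deliberately naive.
RED_FLAGS = [
    "in an era",
    "in the digital age",
    "in the dynamic world of",
    "ever-evolving landscape",
    "not only",
    "but also",
    "cornerstone",
    "nestled",
    "navigating the complexities",
    "lifeblood",
    "utilizing",
    "leveraging",
    "in conclusion",
    "when it comes to",
]

def count_red_flags(text: str) -> dict:
    """Return each red-flag phrase found in the text, with its count."""
    lowered = text.lower()
    hits = {}
    for phrase in RED_FLAGS:
        count = len(re.findall(re.escape(phrase), lowered))
        if count:
            hits[phrase] = count
    return hits

sample = (
    "In the dynamic world of marketing, data is the lifeblood of strategy. "
    "Not only does it inform decisions, but also builds trust."
)
print(count_red_flags(sample))
```

A human editor reading in context will always outperform a phrase counter, but a script like this can flag drafts worth a closer look.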

Escaping the AI Echo Chamber

AI can be a powerful tool when applied appropriately. Where humans go wrong is when we use it to churn out bland, repetitive content. Instead of bringing each other valuable information or genuinely fresh takes, we just echo back our own worst writing habits and saturate the internet in clichés.

Since AI only had existing human writing to draw from, our laziest phrases and constructions are exactly what stuck with it. This makes its content bland and predictable, but it also makes AI a valuable tool for weeding out our weaknesses. If the telltale signs in the list above surface with dull regularity, you’re looking at content that, regardless of its origin, won’t inspire your audience.

By breaking away from the mundane and infusing some personality into your writing, you can create a distinctive voice that sets you apart from the vast sea of AI and human-generated mediocrity on the internet. After all, in the ever-evolving landscape of technology, adaptability is the cornerstone of progress and the lifeblood of our digital world, am I right?

Now that you know how to identify AI content, scroll back up to the intro of this article and see how many red flags you can spot. Then scroll back down here because your logical next step is to make sure you can spot logical fallacies. Our guide to logical fallacies is ready and waiting to help you brush up your critical thinking skills.
