None of the big enterprise research sources like Gartner, Forrester, or HBR could pass an AI plagiarism check today.
The ruleset for basic AI detection today includes simple telltale signs such as:
1. Specific language constructs – frequently starting sentences with “In summary” or “In this episode, …”
2. Heavy use of em dashes (not available on standard keyboards)
3. Certain words like “unleash”, “elevate”, “empower”, “delve”, “leverage”, “synergy”, “insights”, “esteemed”, “foster”, “nurture” that people rarely use in day-to-day conversation
4. Liberal use of emojis (often added to improve readability online)
Most online content-publishing tools that process text will fire a big red flag immediately after catching two or three of these instances.
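To make the idea concrete, here is a minimal sketch of such a naive detector, assuming the four signal categories above and a two-signal threshold. All names, word lists, and thresholds are hypothetical illustrations; real detection tools are proprietary and far more sophisticated.

```python
import re

# Hypothetical signal lists, taken from the four categories described above.
OPENER_PHRASES = ["in summary", "in this episode"]
BUZZWORDS = {"unleash", "elevate", "empower", "delve", "leverage",
             "synergy", "insights", "esteemed", "foster", "nurture"}

def count_ai_signals(text: str) -> int:
    """Count how many of the four telltale categories appear in the text."""
    lower = text.lower()
    signals = 0
    # 1. Stock opener phrases at the start of the text or a sentence.
    if any(lower.startswith(p) or f". {p}" in lower for p in OPENER_PHRASES):
        signals += 1
    # 2. Em dashes, which rarely appear in casually typed text.
    if "\u2014" in text:
        signals += 1
    # 3. Characteristic buzzwords.
    words = set(re.findall(r"[a-z]+", lower))
    if words & BUZZWORDS:
        signals += 1
    # 4. Emojis (a rough Unicode range covering common emoji blocks).
    if re.search("[\U0001F300-\U0001FAFF]", text):
        signals += 1
    return signals

def looks_ai_generated(text: str, threshold: int = 2) -> bool:
    """Mimic tools that flag text once two or three signals co-occur."""
    return count_ai_signals(text) >= threshold
```

A detector this crude is exactly why the false positives described next are inevitable: any human who writes in standard corporate register trips the same signals.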
All of these signals, and then some, are broadly used by the leading corporate organizations we rely on heavily in the enterprise. I’m a paid HBR subscriber, I rely on Gartner reports and Forrester analyst studies, and I follow their events as well.
We know that GPTs are trained on real data – often library collections of professional research studies and textbooks, among other sources. But this isn’t some ancient language we abandoned centuries ago.
It’s the “Thrive Through Volatility” in the Forrester header.
Or “In times of volatility, we believe that business can be a force for good in society,” from HBR’s latest webinar, featuring Amazon CEO Andy Jassy.
Or “Beyond Productivity” and “Leverage Trends”, used in the two upcoming Gartner webinars.
It’s not just individual words or terms, either. Millions of corporate executives who spend their days offline, working with their executive teams, visiting clients, and attending professional events, aren’t exposed to the vast volumes of AI content out there.
This narrative still works extremely well in enterprise.
And that makes it so much harder to tell AI-generated content from real research in critical environments, including VP-and-above roles at Fortune 1000 companies.

