I recently came across an article about how Deloitte used AI to help write a $290,000 report for the Australian government, and how that report ended up containing fabricated quotes and non-existent academic references due to AI hallucinations.
There’s nothing inherently wrong with using AI in professional settings (I’m in fact fascinated by how capable large language models have become). But certain applications of AI demand more due diligence than others. When AI starts influencing official reports, public policy, or journalism, small errors can scale into big consequences.
This got me thinking: how do we trust what we read?
That question led me to start exploring ways to detect AI-generated content. What began as curiosity turned into something more serious when I realized how easily these tools can blur the line between what’s human and what’s synthetic. From news articles to essays, reviews, and even entire websites, AI-generated writing has become so fluent that most people can’t tell the difference anymore.
From Innovation to Authenticity
Over the past few years, the AI community has been focused on pushing boundaries — larger models, better reasoning, faster responses. But the next frontier isn’t just capability. It’s authenticity.
In education, students can now generate entire essays with a single prompt. In journalism, misinformation can be written convincingly by machines and shared across platforms within seconds. Even in business, fake reviews and testimonials can distort public perception.
The issue isn’t that AI is advancing — it’s that we lack the infrastructure to verify what’s real.
This realization is what led me to build AuthenAI, a prototype platform that detects AI-generated content. It’s still early, but the goal is clear: to help restore trust in digital communication. By identifying linguistic and statistical patterns unique to AI-generated text, we can create a small but meaningful layer of transparency in a world that increasingly depends on synthetic information.
Why This Problem Matters Nationally
The challenge of AI-generated misinformation isn’t just technical — it’s societal. Democracies depend on informed citizens. Educational institutions rely on honest learning. Businesses depend on authentic feedback and trust.
If we don’t build tools to verify authenticity, we risk losing confidence in the very systems that hold our society together. That’s why AI content detection isn’t a niche topic anymore — it’s a matter of national importance.
The Deloitte incident is a clear example. When a government-commissioned report can include AI-invented citations and a fabricated court quote, it’s no longer a question of productivity — it’s a question of integrity. Even trusted institutions can inadvertently introduce misinformation when they don’t have systems in place to verify AI-assisted content.
This isn’t an isolated event. It’s a warning. Authenticity verification is no longer optional; it’s critical.
The Technical Challenge Behind Detection
From a technical standpoint, detecting AI-generated text is not straightforward: many models write in styles that mimic human variability, and some users paraphrase model outputs specifically to evade detection.
A few promising approaches have been proposed, from token-level probability analysis to watermarking at the model-training stage, but none of them is perfect yet. Both ideas are sketched below.
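To make the token-probability idea concrete, here is a minimal sketch of perplexity-based scoring. It assumes the Hugging Face transformers library and uses the small open GPT-2 model as a stand-in scorer; the threshold is illustrative, not calibrated, and this is not AuthenAI’s actual pipeline.

```python
# A minimal sketch of perplexity-based detection. Assumptions: the Hugging
# Face "transformers" library is installed and GPT-2 serves as the scorer.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the scoring model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean
        # cross-entropy loss over all predicted tokens.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Heuristic: model-generated text tends to sit in high-probability regions
# of a language model, so it usually scores *lower* perplexity than human
# writing. The threshold below is purely illustrative.
ppl = perplexity("The quick brown fox jumps over the lazy dog.")
print(f"perplexity = {ppl:.1f} -> {'likely AI' if ppl < 30 else 'likely human'}")
```

The intuition: a generator samples tokens it considers likely, so its own output looks unusually predictable to a similar model. Paraphrasing, as noted above, is exactly what weakens this signal.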
In building AuthenAI, I learned that detection needs to evolve at the speed of the generation models themselves. Each new model release shifts the linguistic landscape a little further: new patterns, new fingerprints, new challenges.
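One such fingerprint is watermarking. As a toy illustration of the detection side of a “green-list” scheme (in the spirit of Kirchenbauer et al., 2023), the sketch below splits a vocabulary pseudo-randomly at each step and measures how often a text lands on the “green” half. Whitespace tokens and a hash stand in for a real model vocabulary here; real schemes operate on model token ids and use a proper statistical test.

```python
# Toy sketch of green-list watermark *detection*. Assumption: the generator
# biased its sampling toward "green" tokens chosen by the same hash rule.
import hashlib

GREEN_FRACTION = 0.5  # share of the vocabulary treated as "green" per step

def is_green(prev_token: str, token: str) -> bool:
    # Hash the (previous, current) token pair so the green list is
    # re-drawn pseudo-randomly at every position.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < int(256 * GREEN_FRACTION)

def green_rate(text: str) -> float:
    # Unwatermarked text should hover near GREEN_FRACTION by chance;
    # watermarked text scores well above it.
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

print(green_rate("a short sentence scored against the toy green list"))
```

Unlike perplexity, this signal exists only if the generator cooperates, which is why watermarking has to happen at training or sampling time.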
It’s a moving target, but one worth pursuing. That’s what makes this work exciting: the goal is not a single detector, but a sustainable framework for authenticity in an AI-dominated world.
Building Toward a Trustworthy Future
As a software engineer, I believe our responsibility extends beyond writing code that is correct and efficient. We are building systems that people will come to depend on, systems that shape what they learn, believe, and share with each other.
For me, AI detection is not an end in itself; it is one piece of a much larger puzzle: building trust infrastructure for the digital world.
It is the kind of work that reminds me why I became an engineer in the first place: to work on problems that matter to businesses and to people.
In the end, I believe every generative AI will need its antithesis: some way to verify, validate, and provide accountability. Whether that takes the form of watermarking, provenance tracking, or new detection algorithms, those tools will come to be seen as just as essential as encryption is today.
Keeping Truth Intact
AI-generated content is here to stay. And it should! The potential of generative AI is enormous. But innovation without truthfulness will ultimately erode the very trust that makes progress possible.
That is why I am devoting part of my work to this space: researching, experimenting, and contributing to what I believe will be one of the most important technical problems of this decade, keeping truth “truthful” in this new world of artificial intelligence.
© 2025 Dovran Charyyev | Founder, AuthenAI | Software Engineer at AWS