
The world is already exhausted from trying to figure out what’s real and what isn’t.
Summary
Call it a post-truth world or Reality 2.0: we’re in an era in which we can no longer tell real from fake.
One sunny day, there I was, being lowered into the deep blue sea in a cage to look some sharks in the eye, just as in the movie Jaws, which I remember watching with my eyes closed and my ears plugged with my fingers. The sharks circled my cage, bumping it, rocking it wildly, battering it so they could get at me. One came close to severing a metal bar and breaking in. My heart jumped out of my chest, and I screamed, prompting a visit from an alarmed neighbour.
The sharks around me were only virtual, circling inside my new Samsung virtual reality (VR) headset. But I’m not exaggerating when I say it was a genuinely traumatic experience. I no longer step into the real sea.
Reality mistrust
Swimming with sharks in 2015 was a simulated experience I sought out. Today, unreal content seeks you. You can no longer believe your eyes or ears, and the technology will probably come for the rest of your senses soon. Some studies already suggest that AI-generated text has overtaken new human-written content online, and some forecasts claim that by 2026, AI content will make up 90% of the internet. It isn’t just what you read, but videos, images, news stories, art, and everything in between. It’s all easily faked and put to work by humans with an agenda. No industry is immune.
The other day, I was relaxing with some streaming music when a beautiful voice singing a beautiful song found me. Through my Soul was very much the kind of music I love: bluesy soul from the 50s. I have many hours of playlists of what’s called ‘Whiskey Blues’, and I wondered how I had missed this artist, Enlly Blue. To my amazement, I found she had several albums and singles out. I decided to find out more about Enlly.
What I found shocked me. The woman with the warm, velvety voice wasn’t human. She was an ‘AI project’ with an algorithm where a heart should be. She, like others of her kind, is the work of a Vietnamese individual who makes money from the project. I felt betrayed, cheated, and saddened. I had created a playlist of her songs alongside those of other artists, but now the lyrics rang hollow. I stopped listening.
Enlly Blue is just one drop in the ocean of AI-generated content.
Malicious motive
On any random day, you will now bump into many items of fake content. It could be a phone scam, since voice cloning has become completely democratized. Or it could be a fake video with a payload of misinformation.
While I was on YouTube this week, a video surfaced in my feed purporting to show the arrest of a San Diego federal judge, Keisha Langford. It showed a seemingly racist police officer harassing and arresting a black woman who was doing nothing more than loading her own shopping into her own car. I watched for quite a while before I came to my senses and realized that no officer could be so reckless as to be shown a judge’s government ID and proceed with the arrest anyway. I looked for Keisha Langford online. She didn’t exist. The video was just a bit of outrage engineering.
The French coup
Deepfakes, of course, can do very real and serious damage, both individual and societal. On the personal front, they can take the form of pornographic or other content used to bully and blackmail. On the wider societal front, they can sway elections, governments, and politics.
In December 2025, a hyper-realistic video showed a woman journalist in front of the Eiffel Tower announcing that a colonel had seized power and that President Emmanuel Macron had been ousted. The video amassed over 13 million views and went so viral that the head of an African country called Mr Macron to ask if everything was alright. The coup d’état turned out to be the work of a teenager from Burkina Faso who had tinkered with Sora2 for a few hours. He just wanted to go viral.
The world is already exhausted from trying to figure out what’s real and what isn’t. If it’s so bad now, what happens in 2026?
One school of thought is that we will stop asking: “Is this real?” How many times a day can one do that, after all? We will instead ask, “Does this matter to me?”
Some people see a long-overdue backlash coming against fake content, with switching off from technology becoming ever more sought after. Personally, I rather doubt it. Another school of thought says we will look for a made-by-humans mark on everything, for proof that what we’re seeing has a real heartbeat behind it.
There’s something truly absurd about humankind inventing a technology so smart that it can imitate us to perfection, only to have to invent more technology to catch that technology out and expose it. In 2026, let’s see if we can reclaim the shared reality we used to know.
The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option of simply staying away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help users actually put it to good use in everyday life.
Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.