SINGAPORE – “I’ve been wanting to see you for a long time,” reads a recent post on social media platform Threads, accompanied by an image of a woman gazing into the camera from the back seat of a car. “Where are you from? I’ll come to you.”
“I’m in Singapore, hbu?” one user replies, using the abbreviation for “how about you” and seemingly unaware that the woman is not real.
Identifying herself as Nicolette Smith, this woman is the face of a new breed of digital deception flooding social media.
The account follows a repetitive pattern, posting endless variations of the same posed photos, always paired with a suggestive caption, often in broken English.
One post abandons all subtlety: “Are you single, old man? I am looking for true love.”
The response is staggering.
Hundreds comment on each post, many of whom appear genuinely convinced. “I’m a retired single widower,” writes one user.
Another says: “I was married for 45 years when my wife passed away.”
Many comment with their locations, everywhere from Singapore to Brisbane to Bucharest.
This account began posting in August and has since amassed over 16,000 followers, but it represents only the surface.
“Pig-butchering scams”, often found on social media and dating apps, get victims to send money over time.
PHOTO: THREADS
Meta, the company behind Threads and Facebook, did not respond to a request for comment about this and other examples of inauthentic content.
Dozens of other accounts – many of them also going by Nicolette Smith – exist on the platform. Though many have negligible followings, others achieve similar levels of engagement with the same strategy while posing as both women and men.
The phenomenon is also a sign of how engagement-maximising algorithms and new generative tools intertwine to deliver inauthentic material to users at scale.
Screenshot of a post from the Complaint Singapore Facebook page.
PHOTO: COMPLAINT SINGAPORE/FACEBOOK
On Complaint Singapore, a 260,000-strong Facebook group, a user reposts a claim that an 11-week-old baby girl died after receiving 20 vaccines, accompanied by what appears to be an AI-generated image.
The majority of commenters express disbelief at the blatant falsehood, but a minority embrace it.
“That’s why I am firmly refusing vaccine for my daughter since newborn,” writes one user. “But no vax no school in sg.”
The post originates from a Facebook page named Health and Happiness, which churns out a blend of AI-generated and stolen images and videos to over 470,000 followers. The page’s profile claims it is based in the United States.
Screenshot of a post from Health and Happiness’s Facebook page.
PHOTO: HEALTH AND HAPPINESS/FACEBOOK
One post claims that sleeping on your left side reduces the likelihood of heartburn. Another post makes a misleading claim that a 13-year-old has become the “first in the world” to be cured of terminal brain cancer – a story apparently sourced from reporting by newswire AFP, but sensationalised beyond recognition.
Screenshot of a post from Health and Happiness’s Facebook page.
PHOTO: HEALTH AND HAPPINESS/FACEBOOK
These are just a few of many examples of inauthentic content that proliferate in Singapore’s online spaces.
TikTok account @selelehsg racks up tens of thousands of views with AI-generated videos portraying dramatic scenarios in Singapore.
One video of an argument between a wet market stallholder and an old man drew over 1.5 million views. Another, depicting an argument between train commuters, garnered over 239,000 views.
Despite the caption, many of the over 1.5 million TikTok users who viewed this video did not realise it was AI-generated.
PHOTO: SELELEH/TIKTOK
Despite captions mentioning AI, many commenters do not realise the inauthentic nature of these videos.
Other video creators take different approaches, often with less disclosure.
A video posted in a Singapore Facebook group features AI-voiced narration and a fictitious account of how Indian special forces took more than nine hours to respond to a terror attack in Mumbai, stitching together dozens of unrelated clips in the process.
The post draws “laughing” reactions and comments from netizens seemingly unaware of the inaccuracies.
The source is TikTok account @syinshyqer, which produces larger-than-life accounts of current affairs – many made up or exaggerated beyond recognition – by stitching unrelated clips into a narrative with text-to-speech voice-overs.
Its video of the Mumbai terror attack has drawn over 700,000 views since it was posted in October.
A TikTok video stitching together unrelated clips to create a false narrative about a bungled response to a terror attack in India.
PHOTO: EMRYS MORGAN LEBLANC/FACEBOOK
While much of this content might seem aimless and incoherent, it follows a broader economic logic.
When such content targets lonely individuals, it often serves romance scams. One manifestation is the “pig-butchering scam”, in which the scammer builds trust with the victim over time before extracting money through cryptocurrency transfers or fraudulent investments.
Researchers at the University of Texas at Austin tracked over US$75 billion (S$97.6 billion) in cryptocurrency flowing from over 4,000 victims into accounts largely based in South-east Asia between January 2020 and February 2024.
Such monetisation extends beyond direct fraud, as platforms also contribute to the misinformation economy.
In select regions, not including Singapore, TikTok pays creators as part of its Creator Rewards Program based on certain criteria, including level of engagement.
Products sold through its e-commerce arm, TikTok Shop, are also sometimes hawked by AI-generated videos, which earn a commission on each sale.
Other platforms, such as Facebook and YouTube, also reward video engagement with payment.
This has led to a proliferation of individuals – many of them based in developing countries – using tools like ChatGPT to produce such content at scale as a business model, according to reports by media outlets 404 Media, NPR and New York Magazine.
The payouts can be worth more than the annual wages of some careers in these locales. One content creator based in the Philippines told NPR in August that he had made US$9,000 in a month using AI-generated videos.
Screenshots of AI-generated videos on TikTok.
PHOTO: TIKTOK
While inauthentic material online is not always AI-generated, the rise of low-effort, AI-enabled content has been termed by online commentators as “AI slop”.
Over 10 million TikTok users tuned in to a video of an American pastor preaching with zeal: “Billionaires are the only minorities we should be scared of. They got the power to destroy this country.”
Another 52 million watched a video of a man on a street who rescues a baby falling out of a building above him. “The right man, at the right place doing the right thing. God in control,” reads one comment in response to the video.
Not all of these viewers realised that the videos were made using Sora, OpenAI’s new video generation tool, which it began rolling out in September.
Some telltale signs include unusual cropping or blurring to hide the Sora watermark, or choppy editing to mask other indicators of AI.
To be sure, such AI-enabled misinformation predates Sora.
In April, following the issuance of the Writ of Election, a surge of AI-generated videos relating to the General Election was detected on TikTok. These ranged from videos with manipulated visuals of candidates to videos containing genuine footage of candidates coupled with AI-generated elements like an avatar or text-to-speech voice-over.
AI-generated images created with freely available tools built on Stable Diffusion.
PHOTO: ST FILE
Companies such as OpenAI, Microsoft and Adobe have introduced counter-measures, like invisible metadata attached to content generated by AI to indicate its provenance.
But such labelling has been inconsistent in flagging AI-generated content.
A Washington Post investigation published on Oct 22 found that when videos generated with Sora were uploaded onto eight major social media platforms, only one of them (YouTube) disclosed that it was AI-generated. That disclosure was hidden from view inside a description attached to the clip.
This is without considering the many ways such metadata can be stripped or circumvented, or the ways that online material can be inauthentic without the use of AI-generated visuals.
Part of the difficulty may stem from the complexities in discerning the fine line between inauthenticity and innocuous editing.
In 2024, Meta’s “made with AI” label was changed into “AI info” after criticism from photographers who said it had mislabelled their Photoshop-edited material.
Graphics editing software Adobe Photoshop, like many other tools used to make harmless edits to content, applies similar metadata to its output.
But the problem does not lie exclusively with the rise of new image manipulation or generative AI technologies. Such misinformation has long been intertwined with social media.
YouTube provides an illustrative example. The platform has long been criticised for hosting content farms that produce massive volumes of misinformation for profit – packaged as colourful but implausible cooking hacks, or pseudoscientific health tips that tap into the platform’s engagement-maximising recommendation algorithm.
More recently, Meta internally projected that around 10 per cent of its overall 2024 revenue (or US$16 billion) came from running advertisements for scams and banned goods, according to a Reuters report published on Nov 6. The company’s internal research from May also estimated that Meta’s platforms were involved in a third of all successful scams in the US.
Without resolving this innate tension of social media – to maximise engagement at scale, often at the expense of accuracy and well-being – inauthentic material is likely to remain a feature, not a bug, of today’s online spaces.
For now, it has spawned another popular sub-genre: videos by creators debunking viral misinformation.
In response to the AI-generated pastor clip, American TikTok creator Jeremy Carrasco, who covers AI in media, points out in his own video that a cursory glance at the account’s profile would have revealed its inauthentic origins.
“That basic research didn’t stop many big influencers from reposting this,” he says.