As CZI’s founders bet billions on AI to cure disease, David Sinclair warns that deepfakes are already spreading digital decay.
Artificial intelligence is rapidly redrawing the boundaries of science – and, it seems, of identity. Last week, Harvard geneticist and longevity researcher Dr David Sinclair found himself at the center of a controversy that felt more like science fiction than science communication.
The blurred line between innovation and imitation
Posting on X, Sinclair warned: “My body & voice have been replicated without permission to make money. If you don’t want to live in a world where you can’t tell real health info from fake, like or retweet this.” The following day, he added another caution: “Beware: An epidemic of fake ads & AI-generated videos claiming endorsement & providing bogus health & medical information. These criminals are putting people’s health & safety at risk.”
The warnings referred to a YouTube channel called Time to Thrive, which uses a digital likeness and synthesized voice of Sinclair to deliver videos “inspired by” his public work. The creators insist the material is fan-made and educational, but the result – a deepfaked version of one of the world’s most recognizable longevity scientists – blurs the line between homage and hoax.
Longevity.Technology: AI is simultaneously the field’s greatest accelerator and its most destabilizing force. Used wisely, it can illuminate the biology of aging at a speed and scale previously unimaginable – mapping molecular cascades, predicting therapeutic targets and compressing decades of benchwork into days of computation. Used recklessly, it can just as swiftly corrode public trust, fabricating scientists as easily as cells and feeding the very misinformation that longevity science has fought to escape. The Sinclair deepfakes are not merely an ethical irritant; they are a warning shot across the bow of a discipline that trades in credibility as its most valuable currency. Longevity cannot afford to become a hall of mirrors where avatars peddle pseudoscience while real researchers are drowned out by their synthetic echoes. The technology that could extend our healthspan must not shorten our attention span for truth – because if AI is to help us live longer, it must first learn to tell the truth faster.
A counterfeit crisis
The incident touches a raw nerve for the longevity community, which already wrestles with the public’s confusion over supplements, biohacking and science-backed interventions. Sinclair’s genuine research into sirtuins and NAD+ metabolism has inspired a wave of consumer interest in anti-aging products; the arrival of a believable AI version of him – one capable of “speaking” new endorsements he never made – weaponizes that enthusiasm.
Sinclair told Longevity.Technology: “Misuse of my name and image to sell products is nothing new – it’s a constant battle for my team – but AI impersonation of a scientist is far more dangerous, risking public health and eroding truth itself.” His concern is not misplaced. In a digital landscape where synthetic speech and realistic avatars can be generated with minimal skill or cost, it becomes alarmingly easy to manufacture authority – and far harder for the public to distinguish evidence from imitation.
While regulators have begun to address deepfake pornography and political misinformation, there is little guidance specific to scientific impersonation. Yet in health and longevity, where reputations are often tied to consumer behavior and investment, the damage can be both reputational and real.
Biohub’s billion-dollar pivot
If one half of AI’s relationship with longevity looks troubling, the other appears transformative. On the same day Sinclair’s warnings were being shared across social media, husband-and-wife team Mark Zuckerberg and Priscilla Chan were announcing a rather different application of artificial intelligence – one that aims to cure, prevent or manage all diseases.
Posting on Facebook, Meta cofounder and CEO Zuckerberg said: “Ten years ago, we launched CZI, and we’re really proud of what we’ve built, especially the Biohub, where we believe that we’ve had the greatest impact.”
American pediatrician and philanthropist Chan continued: “When we started, our goal was to help scientists cure or prevent all diseases this century. With advances in AI, we now believe this may be possible much sooner.”
The couple, who are cofounders of the Chan Zuckerberg Initiative (CZI), which has already invested more than $4 billion in scientific research, explained that CZI will now channel its efforts into expanding Biohub – their network of research centers in San Francisco, New York and Chicago – as a new kind of hybrid organization “combining frontier AI and frontier biology.”
Zuckerberg explained in the same video: “We’re bringing together leading AI researchers, scientists, massive compute clusters, and the largest human cell data sets to create virtual cells and virtual immune systems to help advance science.” The ambition is to model human biology computationally, allowing researchers to simulate disease, immunity and cellular repair before running in vivo experiments.
The initiative will remain open source, with CZI pledging to make its data sets and models freely available to scientists worldwide. In an era when proprietary algorithms often hide behind paywalls, this openness may prove one of its most consequential decisions.
Between vision and vigilance
The symmetry is striking: in one corner of the digital landscape, AI impersonates a scientist to sell supplements; in another, AI is being trained to model biology and accelerate discovery. The same underlying technology – neural networks that learn, mimic and predict – is capable of deception or discovery depending on the hands that wield it.
It is, in many ways, the story of our era: Sinclair’s synthetic self contrasted with Zuckerberg’s synthetic cells – one eroding credibility, the other expanding possibility. The tension between the two reveals both the fragility and the promise of progress in the longevity space.
Zuckerberg and Chan’s project is undeniably ambitious; the notion that AI could compress the timeline for curing disease from a century to a decade sits somewhere between hope and hubris. Yet, unlike the deepfaked avatars populating social feeds, CZI’s work is rooted in verifiable data, institutional partnerships and transparency – the very ingredients longevity research depends upon.
For a sector driven by both science and public imagination, maintaining that distinction is vital. If the public begins to doubt the authenticity of the voices promoting genuine research, the credibility of the entire longevity ecosystem could erode faster than the telomeres it seeks to protect.
Truth as a biomarker
The longevity field has always been about more than years gained; it’s about integrity of process as much as persistence of life. AI will inevitably play a leading role in deciphering aging, from protein-folding predictions to personalized therapeutics, but it will also test our collective ability to separate signal from noise. The future of healthy lifespan may depend not only on how fast we can teach machines to learn, but on how carefully we teach them to tell the truth.