An app called “Wizz” has been making headlines lately for connecting minors with sexual predators. Many have described this app as a “Tinder for kids.” It’s the same iconic swipe right-swipe left functionality, and the same purpose of meeting up with strangers — only this time, targeted at both teens and adults.
What’s the result of this app design? A 12-year-old girl meeting up with a supposed 14-year-old boy that Wizz connected her with … only to discover the “boy” was an adult male, who sexually assaulted her.
An 8th grader being sexually abused by a 27-year-old man, then finding out she was only one of several underage girls he had groomed through Wizz.
An 11-year-old girl being sexually assaulted by a U.S. Marine she met on Wizz.
All this in the last year alone. And there are many more cases.
As reports of Wizz facilitating child sexual abuse continue to pile up, something must change with the app itself and more broadly when it comes to online child safety.
Just a few years back, the National Center on Sexual Exploitation urged app stores to remove Wizz on account of the rampant sexual exploitation occurring on the platform. Within 36 hours, both Google Play and the Apple App Store agreed. Later on, Wizz was reinstated, with what appeared to be a number of new safety tools.
As time went by, however, it became abundantly clear that Wizz was not as safe as it seemed. If the continued reports of sexual exploitation weren’t convincing enough, the New York Post reported on what happened when the company’s safety tools were directly pressure tested.
Although Wizz claims to have robust age verification, a 52-year-old man said he was able to create an account as a 15-year-old. How? Because even though the age verification tech flagged this man’s profile for review, he said that Wizz moderators went ahead and approved it within minutes.
This is even worse than not having any age verification to begin with.
Wizz made claims of safety by boasting about tools like age verification — but behind the scenes, they actually directly overrode the concerns flagged by these tools.
When the National Center on Sexual Exploitation shared this story, Wizz reached out to us, saying the New York Post’s claims were false and that the 52-year-old’s account was never approved. So, what did we do? We tested the app ourselves. And our 28-year-old adult employee was able to easily create an account as a 16-year-old.
Wizz is currently enduring a fresh wave of bad press, and it has once again made changes on paper, such as increasing its lower age limit from 13 to 16 years. Yet the efficacy of its age verification still falls short.
Honestly, we are sick of this. We are sick of tech companies utterly neglecting safety until they’re threatened with serious consequences — and then announcing flashy, new tools as a publicity stunt, while having zero intention to implement those tools properly.
And it isn’t only Wizz. We’ve seen this with countless tech companies over the years.
Meta whistleblower Cayce Savage describes the pattern of behavior perfectly. At a congressional hearing on Sept. 9, Savage explained: “Meta has failed to prioritize child safety until they are scrutinized by outside regulators. Then, they scramble to develop features they know are insufficient and largely unused, and advertise this as proof of their responsibility.”
Further proving Savage’s point, recent research found that two-thirds of Meta’s safety tools for minor accounts were ineffective, with only 17 percent working as described by Meta.
The message is clear: Big Tech cannot be trusted to self-regulate. We can’t let them mark their own homework when it comes to safety. The only way to incentivize these companies to change is for Congress to pass legislation that holds them accountable if their products are found to be unsafe.
One particularly promising solution pending before Congress is the Kids Online Safety Act, a bipartisan bill that establishes a Duty of Care for online platforms likely to be accessed by children. What this means is that tech companies must take “reasonable care” to design those platforms with child safety in mind.
It’s a pretty basic concept. Before Subaru releases a new car, it must ensure the car has functioning brakes, airbags and various other effective safety mechanisms. If it doesn’t, and someone is injured as a result, the company can be held liable.
It should be the same for Big Tech. Currently, tech is the only industry that does not have to worry about liability if its products are designed in unsafe ways.
Importantly, this bill does not impose liability for the mere fact that a child is harmed while using an online platform. It only establishes liability if the tech company has not taken reasonable care to prevent such harms while designing their product.
The Kids Online Safety Act has been carefully revised over the years after consultation with several minority and special interest groups, to ensure that it will not be misused for censorship. It has specific language in place to block any application that would violate free speech. It clearly lays out what harms are covered under the bill, to avoid any overly broad interpretation.
In short, this is an incredibly well thought-out bill that would advance child online safety in the narrowest way possible, anticipating and resolving all the common concerns people have when it comes to Internet regulation. This is a real solution that both Republicans and Democrats can get behind.
Congress has sat on this bill for three years, while apps exploit children and teens and get away with it. This is a solution long overdue.
Haley McNamara is executive director and chief strategy officer, and Lily Moric is communications and campaigns specialist for National Center on Sexual Exploitation, the leading national non-profit organization exposing the links between all forms of sexual exploitation such as child sexual abuse, prostitution, sex trafficking and the public health harms of pornography.
Copyright 2025 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.