Federal agents use a facial recognition app on a person who was detained and later released on Tuesday, Jan. 27, 2026, in Minneapolis. (AP Photo/Adam Gray)
From your online browsing habits to traffic cameras on your commute, data about you is everywhere. And with AI, companies can gather, store and share detailed information about you faster than ever.
Guests
Beryl Lipton, senior investigative researcher at the Electronic Frontier Foundation.
**Jason Koebler**, co-founder of 404 Media.
Also Featured
Pedro Chavez, immigration lawyer.
The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show. You can listen to the full, unedited broadcast here:
Transcript
**Part I**
MEGHNA CHAKRABARTI: Who’s the last person who took your picture? Do you think it was a friend or a family member, or maybe it was a selfie? If you’re in your car right now, the most recent picture of you was probably taken by a traffic camera on your commute. If you walked to a subway station or a bus stop, count how many doorbell cameras or closed-circuit security cameras you pass on your way home.
You might be thinking: Hey, nobody’s looking for me. So who really cares if the cameras are always rolling? If you happen to live in one of many U.S. cities where AI-enabled cameras have been deployed, it doesn’t really matter if someone is watching, because something already is.
BENN JORDAN: A few weeks ago, using a commercial search engine, I very easily found the administration interfaces for dozens of Flock Safety cameras. I shared this information with 404 Media, and with Jon Gaines’ help, that number quickly grew to nearly 70. None of the data or video footage was encrypted. There was no username or password required. These were all completely public facing for the world to see, and some of them still are.
CHAKRABARTI: That’s Benn Jordan, a YouTuber and researcher, who, as he said, partnered with the online news outlet 404 Media to expose major vulnerabilities in Flock Safety cameras. Now, Flock Safety sells cameras, drones, and even more complex surveillance systems to law enforcement, schools and other civic organizations.

Jordan says they found that at least 60 of Flock’s AI-enabled Condor cameras around the country had been left completely open to the internet, exposing live video feeds and administrative controls that allowed literally anyone to view footage, download a month’s worth of recordings, and even access system settings and diagnostics.

The Condor cameras use AI to identify humans and follow their movements, and the footage is at a high enough definition that you can see cracks in the sidewalk, a dent in a car, even the details of pedestrians’ outfits. Jordan calls it “Netflix for stalkers.” In just a few minutes, we’re going to talk to Jason Koebler, a co-founder of 404 Media who investigated the Flock camera leak.

But first, as we love to do here, I want to go back in time a bit to understand the path this country took to get to the point where AI is watching you.
GEORGE W. BUSH: We’re dealing with terrorists who operate by highly sophisticated methods and technologies, some of which were not even available when our existing laws were written.
*The bill before me takes account of the new realities and dangers posed by modern terrorists.*
CHAKRABARTI: That’s then President George W. Bush, just one month after the terrorist attacks of 9/11. So this is from October of 2001 at the signing of the Patriot Act.
BUSH: Surveillance of communications is another essential tool to pursue and stop terrorists.
Existing law was written in the era of rotary telephones. This new law that I sign today will allow surveillance of all communications used by terrorists, including emails, the internet, and cell phones. As of today, we’ll be able to better meet the technological challenges posed by this proliferation of communications technology.
*Investigations are often slowed by limits on the reach of federal search warrants.*
CHAKRABARTI: The Patriot Act ushered in a completely new era of domestic security. It expanded the federal government’s authority to conduct surveillance and other investigative efforts related to the War on Terror. It gave the government the legal ability to search and watch and record more people with fewer restrictions, and minority communities around the country particularly faced more scrutiny.

They were consistently watched more closely by federal agents. And while most of the provisions of the original bill have expired, the Patriot Act set the stage for American surveillance as we know it today: the continuing expansion of law enforcement’s ability to surveil people. Now, however, we’re at a new stage yet again.
Because AI is making that surveillance broader, deeper, and much, much faster, without updated legal frameworks for appropriate oversight of who uses this new technology and how. So that’s what we’re going to talk about today, and we’re going to start with Beryl Lipton. She’s a senior investigative researcher at the Electronic Frontier Foundation, a nonprofit focused on protecting digital rights.
Beryl, welcome to On Point.
BERYL LIPTON: Thanks so much. I appreciate having the time to inform others.
CHAKRABARTI: So tell me a little bit more about the important historical context that has actually set the stage for this very rapid adoption of AI in surveillance equipment.
LIPTON: I think what President Bush said there in that clip that you played is echoed in the state that we are currently in, which is that a lot of the laws that would otherwise protect us were created before the time that we’re in now. And so the technology has really outpaced the legal protections that individuals in this country have. The Patriot Act really did help to set the tone for how, on the state level and federally, laws are or aren’t applied around the collection of individual data from a lot of different sources.

And we’ve seen in the consumer space that the internet and digital interfaces are now mediating so much of our experience, in a way that wasn’t anticipated in analog times.
CHAKRABARTI: Okay, so I’m about to ask you a question which I promise I’ll come back to again and again, because it is the question when we talk about striking the right balance between keeping the country secure and digital privacy. AI is extraordinarily good at doing things extremely fast and at scale, right? And then, depending on what you ask it to do, it pulls data or conclusions about what it’s seeing at speed and at scale. I already know a lot of people out there are just going to ask the question:

So what? If it helps us stay safer, and you’re not a terrorist or someone caught on camera committing a crime, why wouldn’t we actually welcome this?
LIPTON: There are lots of reasons why an individual might want to exercise their own privacy, and I think we can look at just the human experience.
It’s incredibly important for us to be able to curate our own experiences and the sorts of things that we share with any individual, including the government. And I would like to push back against the idea that this is actually going to make us safer in all cases. There’s good reason why, within our legal frameworks and the justice system that we are supposed to have, there are certain pieces of information that are supposed to be admissible and accessible by the government or by other actors, and other pieces of information that they shouldn’t have access to, and that they have to make strong legal cases in order to get.

And what we have now is a system, again facilitated by the consumer space, that allows law enforcement or the government to get access to information that they would otherwise never have been able to access at all.
CHAKRABARTI: Can you clarify that or tell me more about that?
Because I was thinking: after 9/11, there was the immediate ramp-up that we saw in security efforts across the country, and then of course that was redoubled with the passage of the Patriot Act. It was physically palpable that there was a greater security apparatus in the country.
Obviously, airports are the most relatable aspect of that. So from the start, there was quite a bit of concern about government overreach, because you couldn’t help but see it happen. With AI, it feels different, right? Take those Condor cameras, for example.

They just look like regular cameras, but their capability is actually quite invisible to the average person. So it seems to me that would possibly make it harder to really fathom what it is about you that’s out there, or what these companies are watching. And maybe that’s producing less public concern than we saw even after the Patriot Act.
LIPTON: Absolutely. It’s not as visible. Something I go back to pretty regularly is that you would understand why you would have a privacy concern about a stranger following you around the grocery store or on your day-to-day errands, taking notes about what it is that you’re doing and storing those notes somewhere else.

That’s essentially what is happening through the technological apparatus that we engage in, and that we have to engage in at this point in our society, because so many things, from our medical records to our shopping habits to our communications, go through these digital systems.
So I think what we’re seeing too is that it used to be that perhaps you would see somebody on the street and you would hold that in your memory. But what these systems facilitate is the ability to store that information, possibly indefinitely, until somebody else decides that they would like to access it or search it.
And it’s very searchable at this point, and it makes connections between the disparate pieces of data that we leave in our wake that later on can be used to identify us or to create entire profiles of us, just like we see again in the consumer space when we see advertisements and other sorts of targeted products pushed our way.
CHAKRABARTI: Okay. So it’s akin to corporate surveillance essentially.
LIPTON: Yes. And in a lot of ways, there’s a massive overlap between government surveillance and corporate surveillance. There are rules around how law enforcement can get access to information about any particular individual. But in the corporate space, because we consent to cookies in our web browsers, because we have downloaded mobile apps, there’s this idea that we as individuals have consented to the sharing of particular pieces of information about us, and that information is available on the commercial market.
And law enforcement and the government have taken advantage of this and have said, if any random corporation can have access to this sort of information with a credit card, why aren’t we able to buy that information and use that for our public safety purposes?
There’s a massive overlap between government surveillance and corporate surveillance.
Beryl Lipton
CHAKRABARTI: And then cross-reference it with the data that they’re gathering through their own surveillance systems?
LIPTON: Absolutely. Because there are so many cameras and microphones that are being put into our public spaces at this point.
CHAKRABARTI: And none of this requires a warrant.
LIPTON: It does not require a warrant for the most part.
CHAKRABARTI: So basically, just like corporations, we’re at an age with AI where law enforcement is able, if they wanted to, to construct a very detailed profile of just about anyone.
LIPTON: Absolutely. A few years ago, the Electronic Frontier Foundation conducted an investigation into the use of data brokers by police agencies.

And in the emails that we received through public records requests, there was this sort of lip service paid to the idea that some of this information would be anonymized. Which is a false concept, because it’s very easy to de-anonymize information. But what they said in some of those emails was, if we’re any good at our jobs, we’re going to be able to re-identify these people easily anyway.
**Part II**
CHAKRABARTI: Let’s take a moment to listen to some of the promotional materials used by companies that are creating these new AI enhanced surveillance technologies.
So in this clip, now you’re going to hear a mix from several companies. You’ll hear Flock Safety promoting a license plate reader equipped drone. So the drone has a license plate reader on it that can track people from the sky. You’ll hear NEC with a new tool that can use images of tattoos to identify you. And Axon, which makes police body cameras.
Their ad demonstrates a product that it says allows officers to ask AI in real time whether something that they wish to do is within their policy.
(AD PLAYS)
CHAKRABARTI: So once again, a few examples there of promotional materials from these technology companies that are using AI to enhance surveillance equipment.
Let me bring Jason Koebler into the conversation now. He’s co-founder of 404 Media, and he wrote an article recently titled “Flock Exposed Its AI-Powered Cameras to the Internet. We Tracked Ourselves.” Jason, welcome to On Point.
JASON KOEBLER: Hey, thank you so much for having me.
CHAKRABARTI: So you tracked yourself. What does that mean?
KOEBLER: I did. So I worked with the YouTuber, Benn Jordan, who you mentioned in the intro. And we learned that about 60 of these Flock Condor cameras were streaming directly to the internet with no password required. I learned that there were two of them exposed in Bakersfield, California, which is a couple hours from me.
I spent an afternoon watching people walk by outside of this Macy’s mall. And I watched these cameras zoom in on people, like people wearing hoodies. The camera would automatically track them as they were walking through the parking lot. And I thought, I could just go up there and watch myself.
Which I thought was a visceral way of demonstrating the issue. And so I drove up there, drove to the corner that one of these was on. And I walked by, and I was watching myself in real time. I was able to record the camera recording myself, which was a surreal experience, but it’s something –
CHAKRABARTI: You turn and wave at the camera.
KOEBLER: I do turn and wave at the camera. And it was weird because it did feel very visceral, but at the same time as Beryl and you both mentioned, this is happening to us passively dozens of times a day, hundreds of times a day depending on where you live and what you’re doing.
CHAKRABARTI: And in the case of these Condor cameras, because at the time they were just streaming to the open internet, you write that your colleagues hundreds of miles away were also watching that feed at the same time.
KOEBLER: Yeah, they were watching, and again, I was watching myself on this camera that had a pretty wide view, but it did zoom in.
But one that was really alarming: there’s a bike path in suburban Atlanta that had three of these on it, and I was able to watch a rollerblader go from one camera to another camera. And the camera was so detailed that, when he stopped underneath one of the cameras, I was able to see what he was doing on his phone.
I watched a woman walk her dog; the dog stopped to go to the bathroom, and it was zoomed in on them. I probably could have easily used facial recognition technology to identify exactly who they were. So these are really high-resolution cameras, and they have been networked together, which I think is important.

They’re talking to each other. You can track people from one camera to another. And they’re streaming to a police command center, or in this case, directly to the internet.
CHAKRABARTI: Okay. I want to talk to you a little bit more about some of the, let’s say, operational decisions that go into how law enforcement runs these cameras.

But tell me just a little bit more: these are not designed to track vehicles, right? These Condor cameras are designed to track people. So how did it happen that 60 of them were just dumping what they were seeing onto the open internet?
KOEBLER: Yeah, Flock Safety, the company that makes these, says it was essentially a misconfiguration: they were testing the cell network that these cameras normally use to stream somewhere more secure, and a password was not required for whatever reason. So this is not supposed to happen. We have seen time and time again that surveillance companies deploy technologies without the correct security safeguards, and things like this happen from time to time.
But I think that in this case, it was an example of just being able to see the types of footage that police can have in real time.
CHAKRABARTI: Okay. One more quick question about that: I presume this leak has essentially been stopped now? I hope.
KOEBLER: Yeah, they closed it before we published the article. Otherwise, we wouldn’t have published. Because people could go and find them and some of them were at playgrounds, some of them were at malls, parking lots, street fairs, things like that. And so we wanted to be thoughtful about when we published.
CHAKRABARTI: Yeah. Point taken. Okay. So again, back to the fact that these are designed to track people. And you said the camera can follow a rollerblader, for example. In a complex scenario, when there are multiple people, how does it decide who to follow?
KOEBLER: We actually don’t know.
And that’s one of the concerning things. I watched a few training videos that Flock has given to police about this; they post them on the website for customers who are interested. These cameras can be controlled manually, so someone in a command center can pan, tilt and zoom them. That’s why they’re called PTZ cameras.
And so you can change where it’s pointed, you can zoom in, and you can move them up and down. But they have only recently introduced this AI feature where the camera automatically detects people who are walking by and zooms in on them. And presumably it’s using some sort of algorithm to determine who the most active person in any given scene is, say, whether someone is running versus walking.
But that’s just speculation on my part. I think that is one of the issues here: these technologies are being deployed, but it is not always clear exactly how they’re working, what their algorithms are telling them to do, and how police are using them.
CHAKRABARTI: Yeah. Beryl, let me bring you back in here. Because of course, the counterargument to what Jason just said, the one that would come from law enforcement, is: we don’t want people to know who we select to track, because then the bad guys would have an easy workaround; they would wear different things, you know what I’m saying? We don’t want people to know how the technology works, to prevent potential criminals from exploiting that knowledge.
LIPTON: I think we have a due process and a justice system that is meant to safeguard us against law enforcement being able to just track any person that happens to catch their whim.
And so I understand that is going to be a part of the public safety argument, but when so many people are being captured by these sorts of systems, when there is widespread access to some of these systems, that is totally inappropriate. It’s important that law enforcement practice some transparency around the appropriate uses.
And I think something that is really important is that, because oftentimes there aren’t clear guidelines within police departments, or legal justifications required at the state level or by legislation, they don’t necessarily have to explain or have a clear reason. They can use this blanket idea that almost any surveillance serves some sort of legitimate investigatory purpose.
CHAKRABARTI: Okay, so that’s a really important point. No guidelines, at least as of yet. Isn’t this common in the world of surveillance, that at first there are no guidelines, but those guidelines tend to slowly form when people take the government to court?
LIPTON: Absolutely. And we shouldn’t have to wait on individuals or people from the public taking the government to court.
Unfortunately, what we have seen, again because there is such a profit motive and such a strong ability to use public safety as a way to acquire some of these systems, is that local police departments tell us they don’t want to take the time to set up some of these policies before they adopt these technologies, which seems incredibly backward.
It’s really important. If it’s important enough to adopt, it’s important enough to set clear guidelines and reasons for use, and also consequences for misuse, which I think is another thing that is really missing where policies are lacking.
CHAKRABARTI: Okay. So basically, right now, until something unfortunate happens, we have no idea if these cameras are told to track people who, you know, have dark skin and wear their pants low.

LIPTON: We just don’t know. And we’ve seen in some cases with license plate readers, or with some of these other individual technologies, that law enforcement individuals have used them for their own personal purposes, in ways that are clearly abusive, that are clearly stalking, and clearly not related to an investigatory reason.
CHAKRABARTI: Okay. So Jason, let me come back to you because we could talk about many examples of how AI is enhancing these surveillance technologies, but there’s one in particular that I want to learn more about from you, and we’re going to start with some sound from a resident. This is from a video that was taken by a resident in Minnesota, and what you’re going to hear is an ICE officer asking this person to show his face so that the ICE officer can scan the person’s face with something called Mobile Fortify.
Now, if you watch the video, one officer raises his phone right into the person’s face as he asks him to take off his hood. And another officer’s phone screen shows that he also has the app open.
(VIDEO PLAYS)
CHAKRABARTI: So Jason, what is Mobile Fortify?
KOEBLER: Mobile Fortify is ICE’s new favorite toy. It is a cell phone app that is hooked up to government facial recognition databases. It essentially just looks like a camera and so you open it up.
And ICE sticks it in someone’s face, as we just heard in that audio. And it is supposed to essentially show who that person is. We’ve seen many videos of ICE using these in the interior of the country, so not at a border: in Minneapolis, in Chicago, in Los Angeles. And we are aware of at least one case, I believe it was in Oregon, where ICE stuck this camera in someone’s face.
It returned the identity of a woman who was not the woman that they had in custody. They tried it again. It returned the identity of a second person who was not the woman. And this technology is not always accurate.
We’re also aware of several cases in which U.S. citizens have been scanned. And in its official guidance, ICE, or the Department of Homeland Security, has said that it trusts the judgment of this Mobile Fortify app over things like a person’s birth certificate.
CHAKRABARTI: Say that again, Jason.
KOEBLER: That basically what the app returns is more valid than if someone were to have a birth certificate that said that they were born in the United States. That is the guidance that has been given to ICE agents. So not that people are carrying around their birth certificates, although maybe –
CHAKRABARTI: That’s the whole point.
KOEBLER: Exactly, yeah. We live in a country where you don’t need to carry around a passport. You don’t need to carry around a driver’s license to exist in public, or at least you’re not supposed to.
CHAKRABARTI: Sorry. Every once in a while, someone says something on this show that really just takes me aback, and that is one of them.
Jason, but let me understand a little bit more clearly. So the ICE officer holds the camera, or the phone with the Mobile Fortify app on it. Is it supposed to return an identification almost in real time, or no?
KOEBLER: Yeah, essentially, it is searching various government facial recognition databases.
One of them works like this: when you enter the country at a border or an airport, for example, your picture is taken and added to a database, and the app checks against that. We believe that it’s been connected to other government databases, although we’re not sure exactly how many different databases it is checking. But it is supposed to return the identity of someone essentially in real time.
CHAKRABARTI: Okay. So again, to be fair and rigorous in our analysis here, you identified some cases in which it was not returning an accurate identification of a person.
Are those just a few drops in the bucket, and the bucket is mostly full of accurate IDs? I just want to put things in the proper context.
KOEBLER: I think that you’re asking one of the right questions, but it’s really hard to say, because it’s not like ICE is advertising how often it’s using this or its success rate. There are actually two senators who wrote a letter to the Office of the Inspector General of the Department of Homeland Security earlier this week and said, we need this sort of information, to know things like accuracy rates and how often this is being used to detain people.
We have no idea. We learned about this technology because it was buried in the Federal Register somewhere. But the way that we actually learned how it worked is through leaks from people within the government who had used it, and also through videos, filmed by bystanders, of ICE agents using it.
And so it’s not like they’re advertising how this works and its accuracy rate and all that sort of thing.
CHAKRABARTI: So once again, as you said, we just don’t know. Beryl, chime in here.
LIPTON: I think this comes down to: does the government have the right to identify us in real time when we’re on the street?

And I think something else that is really highlighted by this situation is that there’s this strong outsourcing, on the part of law enforcement, of our rights and common sense to some of these technologies. If they’re going to trust that their possibly flawed system is more important than all of the history and documentation that we have, where does that leave us as individuals?
CHAKRABARTI: I’m thinking, as you said, does the government have the right to try to identify you when you’re on the street? The Fourth Amendment prohibits unreasonable searches, right? Essentially, this is a search for your identification, and there’s an entire question of whether there is any kind of probable cause around this, other than you’re just on the street in a place where ICE is.
LIPTON: Exactly. And I think that we hear a lot about this argument that just you’re in public and so you don’t have any right to privacy. But again, we’re in this time where it’s no longer an ephemeral experience or a transient experience to be in a public space. This can now be connected to an entire, basically, dossier of information about us that has nothing to do with somebody being in real time in the same place we are.
**Part III**
CHAKRABARTI: For many people who’ve already been under government scrutiny, this just makes the surveillance even more intrusive.
That’s at least according to Pedro Chavez. He’s an immigration lawyer based in California, and he went into immigration law thinking that he would not have to deal with tech law. But now the impact of AI on his clients’ cases is top of mind.
PEDRO CHAVEZ: Forget a day or two. I think an intrusion into your privacy of one second is too much.
CHAKRABARTI: And Chavez says that intrusion disproportionately impacts Hispanic men right now, even if they’re U.S. citizens, thanks to a 2025 Supreme Court case called Noem v. Vasquez Perdomo. In a 6-3 decision, the court reversed a district court order that would have prevented ICE from detaining people based only on their ethnicity.
Justice Brett Kavanaugh wrote that the nation could trust ICE agents to, quote, use common sense in their sweeps.
CHAVEZ: The Supreme Court decision that now has made it legal for ICE to come up to you and ask you questions based on the color of your skin, your accent, and where you work, that means that my rights as a Hispanic man are lower than someone who is white.
Like as of that decision, my privacy rights just decrease. Just, it’s not a lot. It’s not a lot, it’s just they come up to you and they can ask you a question just based on your race. I feel like that question, that five second interaction, because it takes me two seconds to say, I’m a U.S. citizen with a perfect accent and they will believe me and they will let me go.
But that intrusion of just like two or three seconds of me having to say I’m a U.S. citizen is something that a white male does not have to go through. It is a decrease in my privacy rights. It’s a decrease in what I am as an American citizen. Some American citizens have higher privacy rights than mine.
CHAKRABARTI: By the way, it should be noted that this case came off of the Supreme Court’s shadow docket, essentially an emergency ruling that does not require public oral argument before the justices make their decision. The ruling also effectively overturns a half-century-old case called United States v. Brignoni-Ponce.

Now, that case, again from 50 years ago, found that it is unconstitutional for, quote, roving patrols to stop cars near the border simply based on whether the passengers, quote, appear to be of Mexican ancestry. So that’s from a half century ago. After this 2025 case, Attorney General Pamela Bondi said, quote, now ICE can continue carrying out roving patrols in California.
So back to this issue of AI. There’s one AI tool in particular that ICE is using that highlights this. We just talked about it: it’s called Mobile Fortify.
CHAVEZ: So right now, ICE is using Mobile Fortify; so is CBP. And yeah, if they scan your face, and you have had any interaction with the border or applied for an immigration benefit in the past, then odds are that your photo is in their system.
If you entered the country from outside, crossing the border, you passed through a port of entry; your face was captured, your information was taken down, and that will be in their database.
CHAKRABARTI: Now that more and more data on every person is out in the world, and the government can and does use AI to explore that data, the advice that Chavez used to be able to give his clients just isn’t enough anymore.
CHAVEZ: Everything you’ve used is accessible, and if AI is involved, scraping it, then there’s a possibility that you will come out positive for something. So let’s say you’re in the United States on a tourist visa, and you were working. In the past, if you wanted to hide that, you would just delete your Instagram, delete your Facebook. But that doesn’t work anymore.
Or at least we anticipate it won’t work in the future, because AI will be able to scrape these things. Right now, they are limited by the human ability to digest all of this data, and that means that, okay, yes, they have access to your Instagram from five years ago, and it doesn’t matter if you delete it now.

But just because they have access to all of these posts you have, all of this stuff, will they find that picture of you working at McDonald’s when you should have been here going to Disneyland? A human? Probably not, because it’s so much data. But an AI will find it.
CHAKRABARTI: And Chavez says deleting social media posts doesn’t work. Avoiding the internet altogether won’t work anymore either, because data leaks happen even when people aren’t willingly making information public. Now that the assumption amongst advocates is that AI will be used to search people’s entire digital footprint, Chavez encourages anyone who thinks they might be at risk of scrutiny to take an analog step to protect themselves: meet with a lawyer.
CHAVEZ: Whether you’re a U.S. citizen trying to protect yourself from ICE, whether you’re a legal permanent resident wondering what obligations you have to carry your card, or whether you’ve been here completely undocumented for 50 years and you’re trying to protect yourself from expedited removal.

Every single case is distinct. And a 30-minute consultation with an immigration lawyer who specializes in deportation defense is crucial. Because that lawyer, in just those 30 minutes, might be able to come up with a strategy that’s specific to your case. Because every case is different.
CHAKRABARTI: That’s Pedro Chavez, immigration lawyer, who is dealing with the growing threat that he says is posed by AI to his clients. Beryl Lipton, do you have a response to what you just heard?
LIPTON: I think it’s really important that we’re thinking about the permanence of some of what we have put on the internet.
And there’s this point about access to information that nobody was ever supposed to have in the first place, because of data breaches and other types of leaks. There are entire companies whose entire MO is to vacuum up that information and then repackage it for law enforcement, or store it until it becomes useful to some buyer later down the line.

And so for everybody who thinks they don’t have anything to hide, who asks, why would anybody come for me? Why would anybody be collecting my information? It’s because they can, and they’re going to figure out what to do with it later. And it’s a huge security risk to have that information retained indefinitely, for search purposes that are completely outside of your control.
CHAKRABARTI: Jason, let me ask you this question. We’re talking about all the data that’s out there on people already, and how AI can enhance, and enhance might be too mild a word, law enforcement’s ability to scrutinize that data. But we have to acknowledge billions of people are willingly putting a lot of that data out there.

I think we could make an argument that a lot of people may recognize the surveillance risks, but figure that the benefits we’re getting from this exact same technology, from AI, from all the digital tools that we use every day, far outweigh, for most people, the minor risk, let’s say, that they might get misidentified by the government. What do you think about that?
KOEBLER: That’s something that I hear all the time, and I think that you need to think about what it means to participate in society these days. Often that means you have to be on social media for your job, or perhaps to keep in touch with friends, things like that.
You have to have a cell phone. You have to go out into public with your vehicle. And at all of these points, you are being surveilled. I think it can’t be overstated that a lot of the surveillance that is happening is being collected by companies that are then repackaging it and selling it to governments.
A lot of the surveillance that is happening is being collected by companies that are then repackaging it and selling it to governments.
Jason Koebler
So very often the governments themselves are not doing the surveillance in the first place. This information might be collected to target you with advertisements or to market to you, but then it can also be repackaged and sold to the government, as Beryl has mentioned a few times.
But I think because it’s being collected in this commercial way, it’s not being protected in the same way that we would expect if it were being collected by the government, at least maybe in a prior era. And I think that it is a lot to ask people to completely reconfigure their lives, to avoid interacting with society so that they don’t create all of this data.
I think that there has to be some sort of happy medium where we can benefit in some way from technology without submitting to the surveillance state. It doesn’t feel like the choice needs to be: don’t participate in society at all, don’t use technology at all, or willingly give all of your information over to these huge companies and the government.
CHAKRABARTI: So then again, Beryl, that brings us back to this: historically in this country, the quote-unquote happy medium has been found by eventually coming up with an intelligent framework for regulation around how law enforcement can use these tools.
LIPTON: Absolutely. As Jason said, we can’t expect every single individual, who has so much on their plate already, to make some of these trade-offs, and there are a lot of incredible benefits to technology.
You can talk to people on the other side of the world and that’s amazing. There’s so much positive that can come out of our technology use, but our government, our representatives, our companies, need to be held accountable, and they need to be doing better on our behalf. The individual can’t be told, you’re not going to be able to go to the doctor because the doctor’s office requires you to use an app to register for your appointment.
Our government, our representatives, our companies, need to be held accountable, and they need to be doing better on our behalf.
Beryl Lipton
And we might share that information or that information might get leaked. That’s unreasonable. We need to come up with appropriate safeguards that match the ways that we communicate at this point.
CHAKRABARTI: Okay. I do want to point out that even in this conversation, in a hidden way, we talked about positive ways in which AI can be used, I think, for surveillance and law enforcement, back when we played that amalgam of advertising material from several companies in this space.
There was one that I mentioned called Axon, which makes police body cameras. And there was an example of a product that they say allows officers to ask AI in real time whether something they want to do is within their department’s policy. I actually see that as a potentially positive use of AI here.
Because if an officer can say, I wish to do X, and the AI returns an answer in real time, attached to a body cam recording, by the way, that no, that is against policy, then there’s a record of an officer choosing whether or not to follow their own department’s policy.
LIPTON: I think there are positive use cases. I would expect that our law enforcement officers would have a good idea of some of the policies, but right in the moment, you don’t necessarily know everything. Speaking specifically about Axon body cameras, I think it’s important to note that Axon has also used AI to generate police reports.

There was a report recently about how one of these captured The Princess and the Frog playing in the background, and the generated report said that an officer had turned into a frog. That obviously does not make any sense. We also see that Axon is experimenting right now with real-time face recognition on some of its body cameras, so that it can identify people as they’re being encountered by law enforcement on the street.
So I think that there are possible applications for AI, but again, we need some really clear boundaries, and we need consequences. The humans have to be held accountable; they cannot just rely on artificial intelligence to tell them what their responsibilities are, because then we’re going to be in a place where nobody can be held accountable.
CHAKRABARTI: Yeah. Okay. So Jason, again, I’m always thinking about what people on the other side of the radio, or the other side of the podcast, might be yelling back at their phones or their radios. And I imagine that there are some people out there who are like, you guys are just the tinfoil hat crew, and you’re paranoid about everything.

You go do your little Chicken Little bit off to the side while we normies live our regular lives. In response to that, Jason, we are talking exclusively right now about surveillance in the U.S.
I would love for you to take a minute to talk about what’s happening, and what has been going on for a long time, with citizen surveillance in China, and why China might stand as an example of how far these things could go.
KOEBLER: Yeah, I think that people will say, if you don’t have anything to hide, why should you worry about it? Or, I’m not doing anything wrong, why should I worry about it? And many of those same people will then look at China and say, this is an authoritarian state that surveils its people, tracks them on the internet, censors them, all that sort of thing.
And it is true that there is a huge surveillance apparatus in China. There are CCTV cameras everywhere. There’s real-time facial recognition, things like this. There’s this social credit system that has been misreported to some extent, but has been fear-mongered here in the United States.
And we sit here in the U.S. and say, that is authoritarian. We don’t want that. And at the same time, we have built that exact same apparatus, and we have handed it to commercial companies that have gone town by town in the United States and sold their wares to city councils and local police, and then laddered it up from there.
And so we are building something that is very similar here in the United States where –
CHAKRABARTI: Jason, let me just jump in here, because some folks may not have heard of China’s social credit system, which is, what, powered by a basically decentralized digital network? What exactly is it?
Take a second to describe that.
KOEBLER: I know we’re running short on time. It was just this idea that if you missed a bill or if you jaywalked or something like this, it would go into your record of some sort. And maybe there would be consequences for you if you jaywalked in a specific town or if you committed some sort of crime, like maybe you wouldn’t be able to buy a plane ticket, maybe you wouldn’t be able to get on a train, things like that.
And the way that it actually worked in practice was much more similar to the way that credit in the United States already works. And I think the reason I brought it up is just, it was an example of something where people looked at something that was being built in China and said, this is a really authoritarian thing.
This is so different from what we have in the United States where we have freedom and where we are allowed to live a more free life without being surveilled in this sort of way. And this was a very viral story, maybe four or five years ago.
And over the last four or five years, we have been giving government contracts to companies that are building things like real-time facial recognition technology and license plate tracking cameras, which maybe wouldn’t be as problematic if they tracked you only in one town, but they are networked together in such a way that you can build a very detailed record of someone’s life as they move around the country.

And I think the big thing is that these systems all ladder into each other, and police often have access to many of them, and they can build a very detailed portrait of your life using them.
The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.
This program aired on February 3, 2026.