10:30 pm
there’s kind of like two worlds you can live in here. there’s one world where the systems are controlled by somebody, and there’s a world in which they’re not. and we don’t know what’s going on. is it a bear? is it a werebear? or is it a werewolf? it’s more a werewolf. we can call it a werebear. we can call it a were-bbc-bear. welcome to ai decoded. on the programme this week, the bbc’s ai correspondent marc cieslak. the researcher and author of technology is not neutral,
10:31 pm
dr stephanie hare, and the entrepreneur and ceo of the ai start-up conjecture, connor leahy. welcome to you all. we’ve been trying to show our viewers, in recent weeks, some practical uses of how ai is being applied to advance the technologies we already use. and this week, to start the programme, we’re going to focus on drones. now, we’ve seen, in ukraine, how integral they’ve become to the battlefield, to modern warfare. but they’re increasingly important to disaster response to agriculture and preparedness, to delivery and logistics. and we’re also fully aware how quickly that technology is advancing. but the new generation of drones isn’t just about better cameras, longer flights, it’s about a different kind of intelligence altogether and one that could change how we control them, or maybe don’t control them at all. and it’s marc that’s been out to see this new technology. so, marc, what have you discovered? yeah. if you think about how we actually control
10:32 pm
uavs or drones or quadcopters at the moment, they’ve actually got quite a bit of software assistance in there already, helping you to fly them. if you fly a drone without any of that assistance, it’s quite difficult. so there’s a little bit of software, a little bit of algorithmic help, to flying a drone. now, what i’ve seen out in essex is actually the next level of that algorithmic assistance, if you like. so, marc, for someone like me who’s lost his son’s drone in a field far, far away, unable to master the piloting of these things, this is some relief, i can tell you. but what do you think are the most promising real-world uses of voice-operated drones beyond the battlefield? yeah. when it comes to search and rescue operations, drones are already used quite extensively in remote locations. it’s a lot easier to get a drone,
10:33 pm
a lot easier and faster to get a drone up a mountain in quite inclement conditions than it is to get human beings up there, and you can identify whether somebody’s injured, whether they need additional help, how quickly you need to get to them, or just find a person who’s missing or injured. so there are quite a few applications beyond the battlefield, as it were, for this kind of technology. bravo one, take off and search the perimeter, letting me know if you spot any objects of interest. marc: at a test facility in essex, a new method of controlling drones using ai is being trialled. track the car you’ve found. drones have had a dramatic effect on conflicts across the globe. controlled remotely, they often require pilots with quite a bit of skill and quite a lot of training. but what if soldiers could tell a drone what mission they want it to perform?
10:34 pm
and by “tell”, i mean talk to it. bravo two, search the area. bae systems have developed technology for piloting military drones which does away with conventional controls like joysticks. instead, the drone operator tells it what to do using natural, almost conversational language. the test location is designed with a lake and a number of trees, and the drones are tasked with missions like locating hostile forces. because these drones don’t need any specialist training to fly them, i can actually give them a go. i’ve got a headset with a microphone on it and a tablet here. i tell it what i want it to do. so i hit the microphone and say, bravo two, take off and check 20m south of the lake
10:35 pm
for any human activity. and it’s getting ready to head off. and there it goes. and we’ve left a casualty for it to find by the side of the lake. so let’s see if it manages to identify a human being lying on the ground. the drone is controlled by a specially adapted ai large language model, which interprets human speech and turns it into commands. it quickly rewrites its instructions as lines of code, which in turn control the drone’s actions. something that’s quite interesting, looking at the controller, is that i’ve been having a conversation with that drone. it’s actually a two-way conversation. i’ve given it an instruction, but it’s asked a question about the instructions i’ve given it. it says, “when you say, ‘human casualties,’ do you mean looking specifically for people lying on the ground?” and that demonstrates the amount of reasoning going on behind the drone’s actions.
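The loop described here can be sketched in a few lines. This is a purely hypothetical illustration, not BAE Systems’ actual software: a stub stands in for the language model, emitting the kind of structured JSON such a model might produce, and a validation layer either accepts the command or hands back a clarifying question, mirroring the two-way exchange in the demo. All names and the command schema are illustrative assumptions.

```python
# Hypothetical sketch of a speech-to-command layer for a drone.
# `llm_stub` stands in for the language model; `interpret` validates
# its output or returns a clarifying question, as seen in the demo.
import json
from dataclasses import dataclass
from typing import Optional, Tuple

VALID_ACTIONS = {"takeoff_and_search", "track", "orbit"}

@dataclass
class DroneCommand:
    action: str        # e.g. "takeoff_and_search"
    target: str        # e.g. "20m south of the lake"
    looking_for: str   # e.g. "person lying on the ground"

def llm_stub(utterance: str) -> str:
    """Stand-in for the LLM: speech in, structured JSON out."""
    return json.dumps({
        "action": "takeoff_and_search",
        "target": "20m south of the lake",
        "looking_for": "human casualties",  # deliberately ambiguous
    })

def interpret(utterance: str) -> Tuple[Optional[DroneCommand], Optional[str]]:
    """Return (command, None) if safe to execute, else (None, question)."""
    raw = json.loads(llm_stub(utterance))
    if raw["action"] not in VALID_ACTIONS:
        return None, f"I can't do '{raw['action']}'. Can you rephrase?"
    if raw["looking_for"] == "human casualties":
        # Ambiguous term: ask back rather than act, as the drone did.
        return None, ("When you say 'human casualties', do you mean "
                      "looking specifically for people lying on the ground?")
    return DroneCommand(**raw), None

cmd, question = interpret("bravo two, check 20m south of the lake for human casualties")
print(question)
```

The key design point the demo illustrates is that the model’s free-form interpretation is funnelled through a constrained schema, so ambiguity surfaces as a question to the operator instead of an unverified action.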
10:36 pm
after a search of the area, it does indeed manage to find the casualty lying on the ground and reports back to me. this project goes beyond ai. in war zones like ukraine, drones of all sizes have quickly become deadly weapons, pushing militaries worldwide to develop them faster and cheaper. the majority of drones are built from what we call consumer off-the-shelf components, so there’s no point in reinventing the wheel. we use these to rapidly build these prototypes. you can build one of those for £500 or less. and how long has it taken to develop this particular project from start to finish? start to finish, nine months. that sounds very fast. yeah. the drone’s large language model is built on open-source technology, meaning some of its code is publicly available. even though it’s been adapted, could this pose a cyber security risk? in the era of ai, things move quite fast,
10:37 pm
and your best bet is to be able to assess the capability of new ai advancements as fast as you can while the guardrails and regulations around the use of ai are developed. in our context, there’s always a human in the loop. but some industry observers think there’s an ethical risk to developing at speed. in desperation, countries will do an awful lot in order to survive. so we’re trying to produce weapons, produce them at wartime speed, but trying to have peacetime ethical standards. and there is... a tension there. and something will inevitably give. fly around the compound of interest at a radius of 30m. so far this tech is research, acting as a proof of concept. it has applications beyond the military and could very well change the way drones take to the skies. marc cieslak, bbc news.
10:38 pm
christian: i think you can see that it’s going to make warfare faster, certainly faster, more efficient, perhaps, and increasingly unpredictable. just on the ethics and the legal concerns, connor, who is responsible if an ai-controlled drone misinterprets an instruction? is it the programmer? large-scale deployment of drones means that, in a sense, physical space becomes computable. in a sense, you can now write programs about what should happen in a physical environment. so on a battlefield, you could write a program, “kill all combatants within this space.” that is a program you can now execute on a computer, which wasn’t possible before drones. but also, for example, in civilian or industrial environments, you can now have programs such as, “survey every citizen of this city. every single one. transcribe all of their information. run it all through an llm and find the top 100 most seditious ones, and send me a picture of their face.” this is a thing that, you know,
10:39 pm
in the 1980s or something, the stasi tried to do. they would have loved to do this. they tried as hard as they possibly could, but the technology simply wasn’t there yet. now the computing power is there. it is now absolutely possible to surveil, you know, every single person’s personal communications and crunch that with a large supercomputer. totally doable. what drones are allowing us to do is to add the physical component. now you can collect the data, and you can execute kinetic or physical actions based upon it. and i think this creates this large level of... ..power concentration. it is now possible for a very, very small number of people to make decisions that influence much, much larger spaces without larger amounts of buy-in. so even if a leader were to give a disastrous command to a group of soldiers, there is still at least the possibility that the soldiers might be like, “wait, we’re not going to do that. like, that’s actually crazy.” but in the scenario you’re painting, where our relationship with
10:40 pm
machines changes more broadly, so that we have to trust machines to make life-and-death judgments because we’ve handed over partial control to them. i mean, if we are saying, “go out there and surveil all soldiers on the battlefield,” within that structure, presumably there are some drones speaking to other drones, it becomes agentic. yes. i think this is a huge problem. there’s this deep problem where there’s kind of like two worlds you can live in here. there’s one world where these systems are controlled by somebody, and there’s a world in which they’re not, and we don’t know what’s going on. and i think this is a very likely world that we’re coming into, where the amount of data being processed and the number of decisions being made move at speeds that humans can’t keep up with. it’s not even that there’s a bad guy somewhere who’s making a bad choice. it’s that no-one’s making a choice. you know, the systems are just reacting in ways that we can’t necessarily predict. and, you know, this is already starting to happen at small and medium scales. and as this technology becomes more widely deployed, more capable, more long-lasting, the worst-case scenario becomes exponentially worse.
10:41 pm
ok. well, listen, since we’ve already touched on disaster response, i want to focus for a short period on the situation in jamaica and how it is being reported, because our social feeds, as no doubt you’ve all discovered, have been inundated this past week not just with images of devastation, but with ai-generated fakes. the water came up on the road. everything is underwater. the waves keep crashing through the hotels. look at that. look, look, that is a shark! that is a shark right there in the street. lord, please. they’re swimming past the cars. oh, my god, the whole ocean’s on the street! this is jamaica right now. the water came all the way up to the hotel. look at this. that’s a shark. oh, no, there’s another one over there! i can’t believe it. the sea’s just pouring in. everything’s underwater. the pool’s gone. it’s a reminder, stephanie, that in the age of synthetic media, every disaster story carries two storms: the real one on the ground that we’re tracking, and the digital one that blows up online. i know, and i sometimes think, like, what’s it for, right?
10:42 pm
is this just to demonstrate a capacity for entertainment? people find it funny. it reminds them of a hollywood movie, usually a bad one, you know, always involving sharks in urban settings. so it’s almost like surrealist performance art. so there are just people doing this to mess around, maybe not understanding that they’re polluting an information ecosystem at a time when the people in that emergency really need to be able to get good, reliable, verifiable news. so that’s one: it’s sort of weird as to why we do it. the second thing is, how do you help people understand, as this starts to look more and more realistic, what is real and what is not? and we’re getting to a question of trusted information. who do you trust to deliver you your news? right? it’s a technical question, but it’s also going to come back to agencies. i don’t know if people will even be looking at track records of who has proven themselves to be most reliable, or if it’s: are you watching things that you want to believe are true? right? there’s a psychological element here
10:43 pm
that’s very unexplored. mm-hm. you’re nodding, connor. there’s almost a race here, isn’t there? i mean, the danger is that the ai-generated visuals fill the vacuum before the verified information emerges. the fundamental problem is the attention economy. all these so-called social networks are optimised to monetise your attention. that is their goal. they have no intention of informing you, of helping you, of improving your life. if that happens, sure, they’re happy about it, but that’s not what they’re built for, that’s not how they make money. the way they make money is by making you click, by making you pay attention. and so there’s this massive, symbiotic ecosystem of people who try to tickle every second of attention they can out of the algorithm. they will say anything, do anything, post anything, as long as it gets clicks, as long as it gets attention. and so, in a sense, it really goes for whatever is the most manipulable part of the human psyche.
10:44 pm
like, this is exactly what will be triggered, exactly what will be gone for. so really it’s the incentives of how these systems are set up. you could build a social network whose main goal is to inform people and provide accurate and helpful information. there are currently zero social networks where that is the main business goal. what’s happened in the last couple of years is that there’s now an expectation that when there’s any major natural disaster, any major weather event, any major conflict, we’ll see content of this nature appear on social media. that we’ll see synthetic, ai-created images which simply aren’t real. they pop up for whatever reason, whether it’s for clicks, whether it’s to tap into the attention economy. all of these kinds of things are really quite worrying and pollute the well, as it were. they completely pollute the information environment and mean that when viewers look at their social media feeds, they’re questioning, or they have to question,
10:45 pm
every single thing that they see. yeah, i’m going to come back to that in a second. but there is a flip side to this, marc, where it can actually, for newsmakers like you and me, become a tool in these situations to put audiences inside the experience. as newsmakers, we have to be really, really careful about how we use ai. we’ve got very strict rules which govern what we can and can’t do, and effectively, we can’t really use ai to do an awful lot when it comes to making the news, because it creates huge issues around trust. how is the audience supposed to trust us if the images that they’re looking at are synthetic and not real? so we go to great lengths as a broadcaster to make sure that the content we’re delivering is 100% authentic and 100% real. i’ve been experimenting with the technology for quite some time, just to stress test it, to see where its strengths lie and where the weaknesses are. i’ve got an example of some of the experiments
10:46 pm
that i’ve been making right here. in the age of ai, every single natural disaster... clicks fingers ..conflict... clicks fingers ..or significant global event... clicks fingers ..puts social media users on their guard. there’s a real power to that for storytellers. stephanie, i mean, ai-generated reconstructions can peel back the science of storms. they can show what climate change looks like at a human level. are there some limits, do you think, that are needed if you’re going to serve the truth and not trivialise or misrepresent what’s happening? i think, with trust building, being really transparent about when you’re using ai is probably a good starting point. so for instance, we’ve just blown past the 1.5 degrees celsius target on climate change and everybody’s very worried about it. but for a lot of people, that’s just a number, and it says very little about how it would affect their lives. so if you were an effective climate journalist,
10:47 pm
you might, for instance, use ai, while telling viewers that’s what you’re doing, to say, “this is what’s going to happen, for instance, here in the city of london, as we start to hit two degrees, three degrees. here’s how we’re going to get floods, we’re going to get extreme weather. at what point do all of the world’s coastal cities just get flooded?” right? so you could use ai to help do scenarios and simulations that help people better visualise the consequences of certain factors. i read a really beautiful story about teachers using it to help little kids visualise what they might become when they grow up. so a little kid suggests, “i’d like to be an astronaut one day.” and using ai, they were able to age the child and show them being an astronaut. and this blew the kids’ minds. they could really see themselves in this way. so i think, as long as it’s a tool whose use is really explained, how it’s generated and what its limits are,
10:48 pm
it’s a really powerful storytelling tool. but there are... ..many ways it can go wrong, too. yeah, i think you’re right. i only worry that, you know, in some cash-strapped newsrooms, they’re going to use these kinds of reconstructions to elevate their coverage, because it looks so good, so slick, in place of the on-the-ground reporting that lends the credibility, which is what we’ve been doing for years. but again, that comes back to the trust factor. on that issue, connor, i do just want to come back to the fakes. you say that it is possible, using ai, to flag or remove the fakes that we’re seeing, but that it’s not good business sense for the companies that are trying to lure us in. but do you think it is possible, becoming increasingly possible, to use ai technology to get rid of them? so i think it’s actually not possible, not with ai alone. right. i think the problem is much more a problem of the business incentives. currently... there is no technological solution that can tell whether something is definitely true or definitely false.
10:49 pm
but there are political, regulatory and, say, reputation-driven solutions. for example, i do trust the bbc will actually not feed me ai slop. the reason i trust the bbc to do this is not because of some technology. it’s because i trust the institution of the bbc to deliver on this promise. so in a large sense, this is also a cultural problem, or an institutional problem. these companies that are running these... ..these social media networks have no institution that would prevent them from serving me lies, because it benefits them. why would they even stop the lies, even if they could? so also there’s very little work, very little money, going into questions of, like, how would you prevent these things from spreading? but i do want to maybe go a little bit easier on them here and say, this is a very hard problem. you know, these kinds of synthetic media are getting really, really, really good. you know, sometimes, in a sense,
10:50 pm
where i almost feel like we’re reaching kind of an uncanny-valley inflection point, where i think people still believe that some media can’t be faked... that can, in fact, be faked now. so, for example, when people don’t see a sora watermark or a blurring, they sometimes feel too safe, because there are other providers of such systems that do no watermarking whatsoever. so, are you saying, then, that audiences have to assume that everything online is now synthetic, unless it is proven otherwise? yes. this is absolutely the case. we reached that level quite a while ago. there is nothing you can do to guarantee 100% that something is real, other than trusting the source. ok, well, that leads us very nicely to the other thing that marc has been doing this week, which is more trick than treat. it’s halloween, as you know, so we set marc a challenge this week of producing his own x-rated horror movie using the latest ai technologies.
10:51 pm
i’m about to show you the world premiere of nightmare on the district line, which to be honest, sounds like my regular commute home anyway. but why don’t you tell us, marc, before we see it, what you set out to do and what tools you’ve used? well, my movie making skills have very much been tested this week. this is heavily inspired by a film that i watched in the 1980s. see if you can guess what it is. but part of this experiment was to see what this particular model, which is veo 3 from google, what it would and wouldn’t do. would it allow me to create something that has the potential to be quite scary? would it allow me to create something that has a lot of the elements of a horror movie? all of the tension and things that you expect to see in a horror movie? will it let me do any of those things? i will let you be the judge of that. so you are the audience. please, now sit back, relax, get yourself some popcorn. and if the person in the projection booth can
10:52 pm
just roll the film, off we go. and let me know what you think. women scream. i’m gonna see what that was. is anybody with me? guess it’s just me on my own, then. she breathes heavily. beast snarls. i don’t know what i’m more disturbed by: the idea that you can create all that from text, that nobody in it is actually real, or the idea that something so dark is lingering in the back of marc’s psyche. maybe we’ll help him with that after the show. but what does this tell us, marc, seriously, about the power of ai video tools like veo 3?
10:53 pm
i think it’s veo 3 you used. i mean, are we seeing here the beginning of a world where you can direct something just by typing? for me, this demonstrated quite a lot of the limitations. while it was quite fun to make, and really, really interesting to experiment with, there were huge limitations when it came to trying to make a film like this one. maybe one in every ten generations was actually usable. i’d ask it to do a particular shot or make a shot look a particular way, and it wouldn’t do it, or it did it wrong in such a way that it wasn’t usable. so there are still some big limitations. how long did it take you? that took about a morning, really. and then i edited it as well, so an afternoon to edit it. but the biggest limitation isn’t camera angles or things of that nature, or doing anything weird with physics. the biggest limitation is performance. now, if you’ve ever done anything with actors, i don’t know if you’ve ever appeared in panto, christian, or anything of that nature,
10:54 pm
but if you’ve ever done anything with actors, you direct them, you get a performance from them. and trying to tease a performance from the ai was like herding cats. it’s really, really difficult. so not quite ready for prime time yet. some say my whole career is like panto, marc, but we’ll gloss over that. he laughs i mean, how advanced is this, connor, compared with earlier video models like sora or runway? yeah. i mean, it’s extremely advanced for sure compared to, say, a year ago. i think sora and runway are definitely catching up. but, yeah, as we could see from the video, it’s pretty convincing from many angles. sure, you know, a lot of things are not quite consistent... but, like, look, i’ve watched way worse horror movies than this, you know? so it’s really getting there. and the fact you can do this in an afternoon or two on a shoestring budget will change how we create art. you know, i’ve done this before, too, for some of my art projects and so on.
10:55 pm
and it’s, you know, it’s fun, it’s exciting, it’s interesting. can’t deny that. is it a bear? is it a werebear or is it a werewolf? it’s more werewolf. we can call it a werebear. we can call it a were-bbc-bear. yeah, yeah. it was more bear, i think, than wolf. but maybe that’s the limitations of the program, i don’t know. i’m going to tease what we’re doing next week. we’re going to look, stephanie, at this idea of what you can create now with ai and how you can take it out of the hands of the professionals and do something like this. and we’re going to do it with stephen fry. so tune in next week for that. anyway, that is it for ai decoded this week. thanks to marc, thanks to stephanie, thanks to connor, as ever. i do have one important announcement to make before we go. from this week, we are back on youtube. since we reformatted the show, not all the episodes were fully uploaded. that changes today, so they will in future be there on the bbc youtube channel. the back catalogue will be there too. so, in fact, somebody told me last week
10:56 pm
that they listened to it as a podcast on their way into work, which is heart-warming. so if you have missed part of this show or you do want to watch it again, do take a look at the bbc youtube site. we’ll see you again next week. thanks for watching.
Uploaded by TV Archive on November 2, 2025