Facepalm: Ever since police departments across the US began testing Axon’s AI-generated police report drafts, legal experts and activists have raised concerns regarding hallucinations, a persistent issue with generative AI. While officers maintain that the software is accurate, one test revealed an amusing flaw.
A recent police report from Utah falsely stated that one of the department's officers had transformed into a frog. The error is a striking example of how AI-generated hallucinations have crept into police work as departments increase their reliance on the technology.
Over the past few years, the Heber City Police Department has joined many others across the country in testing and adopting AI transcription software to save time writing incident reports. The frog incident involved Draft One, a tool from body camera maker Axon.
Draft One transcribes audio from body cameras in English and Spanish, generates narratives using OpenAI’s GPT-4 LLM, and automatically uploads them to cloud servers when officers stop recording. Officers then review the drafts, add missing information, and manually submit the finished reports.
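For readers curious about what such a transcribe-then-draft pipeline looks like in practice, here is a minimal sketch built on OpenAI's public API. Axon has not published Draft One's implementation, so the model choice, prompt, file name, and overall structure below are illustrative assumptions, not the company's actual code.

```python
# Illustrative sketch only: a minimal transcribe-then-draft pipeline.
# Axon's Draft One internals are not public; the prompt, model names,
# and file path below are assumptions for demonstration purposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcribe_bodycam_audio(audio_path: str) -> str:
    """Transcribe body-camera audio to text (Whisper used here as a stand-in)."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return result.text


def draft_incident_report(transcript: str) -> str:
    """Ask a GPT-4-class model to draft a report narrative from the transcript."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Draft a police incident report narrative using only "
                           "facts stated in the transcript. Do not infer events.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    transcript = transcribe_bodycam_audio("bodycam_clip.wav")  # hypothetical clip
    draft = draft_incident_report(transcript)
    print(draft)  # an officer would still review and edit before filing
```

Note that a pipeline like this works only from the transcript it is handed: if background audio from a TV or movie ends up in the transcript, the model has no way of knowing it was not part of the real encounter, which is exactly the failure mode the frog incident exposed.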
Police departments claim that Draft One does not hallucinate the way ChatGPT does because the tool has been tuned to output only what it hears. Even so, the software wrote that an officer had transformed into a frog after picking up audio from the film The Princess and the Frog playing in the background.
While the officer editing the report caught the error before submitting it, the incident indicates that Draft One struggles to distinguish between human speech and pre-recorded audio. Activists, legal experts, and prosecutors also worry that AI might alter critical materials, reflect societal biases, and lead to officer complacency. The frog incident illustrates the risks of departments becoming overly dependent on AI.
Officers claim that the software is accurate, can detect details humans might miss, and cuts the time required to write reports by more than half. One Heber officer said that another tool the department is testing, Code Four, saves him between six and eight hours per week.
Founded by two MIT dropouts, Code Four provides AI-powered report drafting and video redaction tools. Unlike Draft One, which works only from audio, Code Four also incorporates details extracted from video footage. The accuracy of that video analysis remains to be seen, and it could introduce further legal challenges if AI-generated police reports enter courtrooms.