
As early as 2024, human rights advocates warned that delegating police report writing to generative AI could have serious consequences for criminal justice. By 2025, these concerns were no longer theoretical: AI spread faster than society could grasp its risks or legislators could respond.
The reason is simple and alarming: the most popular AI tool for police reports, Draft One, is made by Axon, the largest U.S. provider of police body cameras. Cameras, software, and AI are sold as a bundle. This model encourages unnecessary technology purchases and effectively prevents communities from saying "no."
2025 brought both positive and negative developments, but the negative ones were systemic.
The Problem: Opaque Technology at the Heart of Justice
AI reports are not just new or imperfect. They are opaque, unverifiable, and potentially dangerous. Police reports often form the basis for arrests, charges, and deprivation of liberty.
King County (Washington) prosecutors banned AI-generated narratives in police reports, stating: "We do not fear technological progress, but we have reasonable doubts about current products." Prosecutors recognized that such technology is incompatible with the requirements of fair process.
Axon Draft One: Designed Against Accountability
Analysis revealed that Axon Draft One is deliberately structured to prevent review.
Process: the officer's body camera records audio during an incident; Draft One generates a narrative report from that audio; the officer reviews, edits, and submits the report; and the original AI-generated draft is not retained.
This makes it impossible to determine which parts were AI-generated and which were human-written or edited. If contradictions arise in court, an officer can claim, "the AI wrote this," with no way to prove otherwise.
Axon has acknowledged that this design avoids "disclosure issues" for police and prosecutors: convenient for law enforcement, not for justice.
Invisible AI and the Powerlessness of Public Oversight
The public often doesn't even know AI is used in reports. Public records requests are nearly useless, because AI use leaves no trace in the documents. In 2025, advocates had to publish guides on drafting precise records requests to shed light on AI report usage.
The result: a dangerous asymmetry. Police and vendors know everything; the public knows almost nothing.
Good News: Regulatory Pushback Begins
Two states — Utah and California — passed laws limiting AI use in police reports.
California became the first state to legislate that AI traceability is essential for fair process.
For Ukraine, the experience of the United States in 2025 is highly instructive. Since 2022, Ukrainian law enforcement agencies and city administrations have been actively implementing digital security systems: surveillance cameras, traffic analytics, automated license plate recognition, and “smart” platforms for the police. At the same time, the use of generative AI to draft police reports is no longer a futuristic concept — the risks it poses are becoming real.
As in the U.S., decisions to implement such technologies in Ukraine are often made administratively, without broad public discussion. Key decisions in criminal procedure are thereby framed as "technical" rather than political, effectively removing them from public oversight. Opaque AI-generated reports can leave courts relying on documents that cannot be verified, traced back to the algorithm's contribution, or checked for errors.
This issue has several dimensions for Ukraine. First, without proper regulation, there is a risk of human rights violations — including personal liberty, the right to defense, and the right to a fair trial. Second, the lack of transparent audit and oversight procedures creates a dangerous asymmetry: the police and technology providers know everything, while the public knows almost nothing. Third, dependence on a single vendor and closed algorithms can lead to loss of control over data, including audio and video used as evidence.
The U.S. experience shows that even without new laws, communities can mitigate risks through public pressure, transparent political decisions, and audits. For Ukrainian cities and the state, this is a clear signal: transparent procurement, independent audits, public participation in policy-making, and oversight of AI in law enforcement are urgently needed.
Conclusion: the issue of AI in police reports is not merely technological — it is political and legal. Ukraine can prevent a repeat of the American risks if it now establishes transparent frameworks, control mechanisms, and accountability, turning AI into a tool for safety rather than a mechanism of hidden surveillance.
More states are expected to regulate or ban AI-generated police reports. Police reports are not drafts or internal notes; they trigger criminal prosecution. Delegating their creation to opaque algorithms is a systemic risk, not innovation.
2025 proved that AI in policing is not a matter of convenience; it is a matter of accountability, human rights, and trust in justice. Without clear limits now, technology vendors, not citizens, will set the boundaries tomorrow.
Contact us: business@avitar.legal
Violetta Loseva