
Smarter, not harder – using AI in digital investigations


Policing in the UK is undergoing one of its most significant periods of transformation in decades. Crime types are evolving at pace, demand is rising across almost every area of policing, and investigators are managing unprecedented volumes of data. At the same time, public expectation of faster, more effective investigations is growing. The efficiency of investigations increasingly depends not just on frontline activity, but on the speed and accuracy of administrative processes: everything from data requests to evidence triage to routine compliance checks. Officers spend substantial amounts of time on paperwork and manual data entry, squeezing the time available for investigative work.


Over the last decade, traditional crime types have increasingly given way to digital and cyber-enabled offences [1]. Fraud, online child exploitation, organised cybercrime, technology-facilitated domestic abuse, and data-driven financial crime now make up a large share of investigative caseloads, and around 90% of all crimes now have a digital element [2]. Even investigating traditionally ‘offline’ offences can require access to mobile phones, social media, CCTV, cloud services, vehicle telemetry, wearable tech, or smart home devices. The question for investigators is no longer whether digital evidence exists, but how quickly and effectively it can be found, accessed and used.


Digital investigations in a complex legislative landscape

While digital evidence presents more opportunities to understand criminal activity, it also creates a substantial administrative burden. Accuracy is critical: errors can delay investigations and risk non-compliance. This is precisely the type of workload where technology can relieve pressure without undermining the human expertise that drives good policing.

 

The Investigatory Powers Act (IP Act) regulates how authorities can access communications data – information about who contacted whom, when, how, and from where. Previously associated with serious crime such as terrorism, homicide, and child sexual exploitation, communications data is now routinely used across a wide range of investigations, including in tackling many violence against women and girls (VAWG) offences such as stalking, harassment and online exploitation. The IP Act was amended in 2024 to address what was described as “exceptional growth in the volume and type of data relating to people, objects, and locations across all sectors of society” [3]. These amendments broadened law enforcement’s ability to acquire wider data sources, including Internet Connection Records (ICRs), where it can be shown to be both necessary and proportionate to do so.


This uplift has had two main consequences. First, there has been a significant increase in communications data usage, with many more officers making decisions about communications data requests, many of whom have limited experience and confidence in navigating the process. Second, there has been a rise in the cognitive load on investigators. They must now determine whether a case meets the legal thresholds, which identifiers (phone numbers, emails, devices) are relevant, what types of data are available from different service providers to answer key investigative questions, and how to articulate necessity, proportionality and potential collateral intrusion. This is not simple administrative work; it is nuanced, legally constrained, and easy to get wrong. The uplift has unintentionally increased pressure on officers, single points of contact (SPOCs) and authorising officers and has, in some ways, made the job harder.


Artificial intelligence tools, already deployed to relieve many manual processes in policing, can play a key role in supporting officers as they navigate the IP Act. We began with the question:


“How do we get from an officer asking a straightforward investigative question, in plain language, to a well-constructed communications data application, without requiring deep specialist knowledge?”


To explore this question, we developed an AI-powered Investigatory Powers Assistant, designed to support applicants directly at point of use. The Assistant was developed not to replace human judgement, but to draw on AI’s strength in structured, repeatable tasks and to improve the speed and accuracy of administrative work, while leaving the officer in control – the ‘human in the loop’.


Building the Investigatory Powers Assistant

To develop the tool, we experimented with a range of approaches using large language models (LLMs) and generative AI systems against a synthetic dataset of realistic investigative crime scenarios. These included both concise and detailed crime reports, complete with typos and imperfect formatting to reflect real-world material.


We took a modular approach, building a workflow with a simple user interface to guide the applicant through each stage. This allowed us to test different AI tools at each stage of the process (an example workflow using Python and OpenAI ChatGPT is shown below), swapping tools in and out to compare how each performed.
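
As a rough illustration, the skeleton below sketches that modular structure in Python. It is a minimal sketch, assuming each stage can be expressed as a plain callable; all class and function names here are ours for illustration, not the production code.

# Minimal sketch of the modular workflow: each stage is a swappable
# callable, so different models or tools can be trialled at each step.
from dataclasses import dataclass, field

@dataclass
class DraftApplication:
    crime_report: str
    crime_type: str | None = None
    serious_crime: bool | None = None
    identifiers: list[str] = field(default_factory=list)
    justification: str | None = None

def run_pipeline(report, classify, extract, draft_justification):
    app = DraftApplication(crime_report=report)
    app.crime_type, app.serious_crime = classify(report)     # Stage 1
    app.identifiers = extract(report)                         # Stage 2
    app.justification = draft_justification(app)              # Stage 3
    return app  # presented to the applicant for review and editing

Keeping each stage behind a simple function boundary is what makes swapping tools in and out straightforward.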



Stage 1. Understanding the crime scenario

We used LLMs to extract and classify the crime type and provide up to three suggestions for the user to confirm or override. The model provided a recommendation on whether the case constituted a threat to life or met the serious crime threshold – both essential elements in communications data legislation. 


To develop the classification engine, we trained a general model using pre-classified crime scenarios, enabling it to output confidence scores and support decisions using a structured decision tree. We refined prompts and implemented guardrails to reduce hallucination and improve consistency. If the model couldn’t determine that a crime had been committed, it explicitly flagged this to the user for review.
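
For a flavour of what a Stage 1 call might look like, the sketch below uses the OpenAI Python SDK. The model name, prompt wording and JSON schema are our assumptions; the real system layered a trained classifier, confidence scores and a structured decision tree on top of this kind of call.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You classify crime reports to support communications data applications. "
    "Respond in JSON with: 'suggestions' (up to three objects, each with a "
    "'crime_type' and a 'confidence' between 0 and 1), 'serious_crime' (bool), "
    "'threat_to_life' (bool), and 'undetermined' (bool). If you cannot "
    "determine that a crime has been committed, set 'undetermined' to true "
    "rather than guessing."
)

def classify_report(report: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",           # assumption: any capable chat model
        temperature=0,                 # deterministic output aids consistency
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report},
        ],
    )
    return json.loads(response.choices[0].message.content)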


Stage 2. Extracting the right identifiers

We combined Python packages to extract relevant identifiers (telephone numbers, email addresses, device IDs, etc.) from the report, then used a second LLM to recommend the most appropriate data request based on the context in the crime report and the identifier provided. Generative AI capability was then used to draft an initial justification covering necessity, proportionality and collateral intrusion – key legal concepts that applicants often find challenging to get right and frequently need guidance on.
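
Identifier extraction is a task where conventional, deterministic code is often a better tool than an LLM. The sketch below assumes the open-source phonenumbers package plus simple regular expressions; the packages actually combined in the prototype may differ.

import re
import phonenumbers  # pip install phonenumbers

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IMEI_RE = re.compile(r"\b\d{15}\b")  # crude device-ID match, illustrative only

def extract_identifiers(report: str) -> dict:
    phones = [
        phonenumbers.format_number(m.number, phonenumbers.PhoneNumberFormat.E164)
        for m in phonenumbers.PhoneNumberMatcher(report, "GB")
    ]
    return {
        "telephone_numbers": phones,
        "email_addresses": EMAIL_RE.findall(report),
        "device_ids": IMEI_RE.findall(report),
    }

Deterministic extraction like this is cheap, auditable and cannot hallucinate, which is one reason to reserve the LLM for the contextual recommendation and justification steps.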


Stage 3. Generating a draft application

Finally, the Assistant guided the user through the creation of a draft application, pre-populated with extracted information. Each application remained fully editable and was presented for human review before submission for authorisation.
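
In code terms, this stage can be as simple as assembling the earlier outputs into an editable structure. The field names below are illustrative, not the statutory application form.

def build_draft_application(crime_type, serious_crime, identifiers, justification):
    # Pre-populate an editable draft; nothing is submitted automatically.
    return {
        "crime_type": crime_type,
        "serious_crime_threshold_met": serious_crime,
        "identifiers": identifiers,
        "necessity_and_proportionality": justification,  # AI-drafted text only
        "status": "DRAFT",  # requires applicant review, then authorisation
    }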



What we learned

Through our experiment, several key insights emerged about where AI delivers real value, and where caution and structure are critical. The most reliable results came from combining AI with more traditional data processing techniques, such as identifier extraction and automated form population. At each stage we considered not only whether we could use AI to assist with the task, but also whether we should – in terms of understanding the risks involved and whether it was, in fact, the best tool for the job.


We found that guardrails mattered just as much as capability. Off-the-shelf tools were powerful, but too inconsistent when left unconstrained. Clear prompts, carefully designed workflows and transparent “I’m not sure” behaviours were essential to reduce hallucinations and ensure that the Assistant supported users rather than steering them. In many ways, the success of the prototype was determined less by the model itself and more by the structure wrapped around it.
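
One example of such a guardrail is validating the model’s output against an allowed list and a confidence floor before it ever reaches the user, falling back to an explicit ‘not sure’ state rather than passing through a low-confidence guess. The categories and threshold below are illustrative assumptions.

ALLOWED_TYPES = {"fraud", "stalking", "harassment", "burglary"}  # illustrative
CONFIDENCE_FLOOR = 0.7  # assumed, tuned threshold

def apply_guardrails(result: dict) -> dict:
    suggestions = [
        s for s in result.get("suggestions", [])
        if s.get("crime_type") in ALLOWED_TYPES
        and s.get("confidence", 0) >= CONFIDENCE_FLOOR
    ]
    if result.get("undetermined") or not suggestions:
        # Surface uncertainty explicitly rather than steering the user
        return {"status": "NOT_SURE", "action": "refer to applicant for review"}
    return {"status": "OK", "suggestions": suggestions[:3]}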

Most importantly, the work reinforced that AI should enhance human judgement, not replace it. The Assistant helped navigate unfamiliar processes and draft clearer, more defensible applications, but it never removed the need for someone to think critically about the information in front of them. When used thoughtfully, our AI Assistant could boost productivity and confidence for both applicants and authorisers, reduce rework, and make it more likely that applications are correct the first time – ultimately helping investigations move faster without compromising on quality.


Policing in the UK operates with the consent of the public, and that consent depends on trust; any technology deployed must meet a higher standard than in many other sectors. Introducing AI into investigative processes is not just a technical decision: it requires strong ethical foundations and thoughtful design to ensure that tools support, rather than replace, the judgement of trained officers.


Ensuring fairness is equally vital, and testing is key to ensuring that AI tools do not introduce or reinforce bias or produce unequal outcomes. The public should be able to understand how these tools are used and why, with transparent oversight so decisions can be explained and reviewed when needed. Deploying without this accountability risks undermining public trust in policing more broadly. Just as applications for communications data must demonstrate necessity and proportionality throughout, AI tools should be deployed in policing against these same principles. Ethical design shouldn’t be an afterthought; it is the foundation that keeps the whole endeavour legitimate.


With these foundations in place, AI can become a practical tool for improving the efficiency and accuracy of investigative work. The 2024 uplift to the IP Act expanded what data police can access, but also expanded what they need to understand. Workloads have increased, decision-making has become more complex, and errors carry consequences. Carefully designed, ethically deployed AI can help manage this complexity by reducing administrative burden, improving accuracy and supporting officers through routine but cognitively demanding tasks. AI may change how investigations are carried out, but it cannot replace the human judgement and compassion that define good policing – and nor should it.

 


[1] Office for National Statistics (2026). Crime in England and Wales: year ending September 2025. Statistical bulletin, released 29 January 2026. ONS website.

[2] Association of Police and Crime Commissioners (2020). National Policing Digital Strategy 2020–2030.

[3] Home Office (2024). Investigatory Powers (Amendment) Bill: impact assessment. GOV.UK, January 2024.

 
 
 
