The digital realities at the heart of the New Model for Policing

Described by Home Secretary Shabana Mahmood as “the biggest shake up of policing since the service was created two hundred years ago”, the Government has now published its policing white paper. It lays out an ambitious and wide-ranging programme of reform that aims to make policing:
- more closely connected to local communities and more responsive to their needs,
- more joined up in how the service is structured,
- more reliable in delivering consistently high standards,
- and more effective through its people, technology, use of data, and capacity to prevent crime.
Crucially, it acknowledges a modern challenge to traditional policing: almost nine in ten of all reported crimes now involve a digital element.
The paper is candid in asserting that criminals have been exploiting technology with far too much ease, and that policing must now catch up through an innovative mix of AI, data and digital tools.
The Scrumconnect team has read the white paper closely and analysed it through the lens of our extensive experience of delivering large-scale digital change in complex public services. Through this, we have identified five core takeaways that we believe will define the reality of delivering the digital aims expressed in the white paper. Our analysis reflects where similar reforms have succeeded, where they’ve stalled, and the conditions that need to be in place for this transformed way of policing to work in practice.
1. Policing reform means data reform too
We believe the paper is best read as a digital and data reform programme that happens to be driven by organisational change. Our interpretation is that the document is fundamentally concerned with whether policing can operate as a coherent digital system in a world where crime has already become ‘digital by default’.
In our experience of delivering large-scale data and AI programmes across justice, welfare and education, a lack of ambition is rarely the limiting factor. Instead, it is almost always the ability to find quality data and then move it reliably, quickly and consistently across organisational boundaries.
The white paper reflects that reality more clearly than most reform documents we see. The proposal is to shift policing from a federation of semi-autonomous organisations towards a nationally enabled operating model, underpinned by shared data, platforms and standards.
That is a profound and important change. It is also where the greatest benefits, and the greatest delivery risks, sit.
2. Data fragmentation is a structural constraint
We share the paper’s view that data fragmentation represents a structural weakness rather than a day-to-day operational issue. Decades of federated procurement have produced duplicated systems, incompatible data models, and uneven digital maturity.
This isn’t an unusual challenge, or even one that is unique to the policing sector; it’s a situation that the vast majority of our clients face at the outset of their digital projects.
If left untackled, insight becomes slow, partial, unsafe and contested. With this in mind, the proposal to introduce mandatory national standards for data and technology is, in our view, the most important digital commitment in the entire paper. Without common data models, identifiers and quality thresholds, interoperability remains aspirational. And without interoperability, the paper’s wider ambitions around AI, facial recognition, analytics and national intelligence will simply be undeliverable.
We have seen this pattern repeatedly. Where standards are optional, they are interpreted locally. Where they are enforced, delivery accelerates. We would therefore suggest a delivery model that treats data standards as core infrastructure, not guidance, and that invests early in the unglamorous work of data quality, stewardship and integration. In other words, front-load the risk so that it is dealt with early and doesn’t block later progress.
At the same time, a risk-first approach to data standards should not lead to the delivery challenge being underestimated. Retrofitting national standards onto legacy systems is slow, complex and organisationally difficult. A common pitfall we have seen many times is believing that transitional arrangements alone unblock progress towards rapid benefits. In our experience, sustained momentum requires honesty that complex data reform is a multi-year endeavour, not a quick fix to a day-to-day operational issue.

3. Local to national requires interoperability
The white paper discusses a ‘local to national’ transformation of how policing operates, alongside ambitious plans around AI, facial recognition and data-driven policing. To achieve these goals, there needs to be a strong focus on interoperability. The white paper tackles this by proposing a unified National Police Service (NPS) that will, among other things, consolidate the IT and digital services that are currently duplicated across forces.
In practical terms, this means that common platforms need to be created for things like case management, intelligence sharing, and communications. This will allow all officers - regardless of which force they serve in - to access the same core systems and data as officers in other forces. From a functional perspective, this isn’t a difficult ask. Think of it as visiting a high street bank branch: the teller serving you has immediate access to your account data because they interact with the same central banking system that a teller in another branch is using at the same time to serve their customers.
We have extensive experience of delivering common platforms, most notably for HM Courts and Tribunals Service. The benefit of common platforms is that they drive uniform data standards. We can expect to see commonality in the way that crimes, incidents, and people are recorded, ensuring that the data remains consistent and machine-readable across the country.
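To make this idea of a common, machine-readable standard concrete, here is a minimal sketch of the kind of shared record shape it implies. Everything in it - the field names, the code list, the identifier scheme - is our own hypothetical illustration, not anything specified in the white paper or used by any force today. The point is that each local system maps its data into one agreed national shape once, rather than every downstream consumer writing its own bespoke translations.

```typescript
// Illustrative sketch only: a hypothetical shape for a nationally
// standardised incident record. Field names and code lists are our own
// assumptions, not drawn from the white paper or any real police system.

type ForceId = string; // e.g. a short force identifier agreed nationally

type IncidentCategory =
  | "theft"
  | "fraud"
  | "online-harassment"
  | "assault"
  | "other";

interface NationalIncidentRecord {
  incidentId: string;          // globally unique, not force-local
  recordingForce: ForceId;     // which force captured the record
  category: IncidentCategory;  // drawn from a shared national code list
  reportedAt: string;          // ISO 8601 timestamp, one agreed format
  digitalElement: boolean;     // flags crimes with a digital aspect
  linkedPersonIds: string[];   // references into a shared person identifier scheme
}

// A force-local system maps its own schema into the national shape once,
// so every consumer reads the same structure.
function toNationalRecord(local: {
  ref: string;
  force: ForceId;
  type: IncidentCategory;
  loggedAt: Date;
  involvesDigital: boolean;
  persons: string[];
}): NationalIncidentRecord {
  return {
    incidentId: `${local.force}-${local.ref}`,
    recordingForce: local.force,
    category: local.type,
    reportedAt: local.loggedAt.toISOString(),
    digitalElement: local.involvesDigital,
    linkedPersonIds: local.persons,
  };
}
```

However the real standard is eventually expressed, the design choice it represents is the same: the mapping effort is paid once at the boundary of each legacy system, rather than repeated by every analyst, platform or AI model that later consumes the data.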
From a digital transformation perspective, this centralised approach can be a game-changer for the entire policing sector. It enables economies of scale, with savings reinvested into frontline policing. It also removes the imbalance where larger forces, which are often better funded than smaller ones, enjoy access to cutting-edge tools simply because they can afford the investment.
That said, it will be a considerable undertaking, involving vast and fragmented legacy estates and existing ways of working that will need to be assessed and challenged. Legacy systems can’t be ripped out overnight; interfaces and data pipelines will be needed to bridge old and new during the transition. Our experience of delivering these challenging programmes of work suggests that a prioritised, incremental and user-centred approach to integration is more likely to succeed than a ‘one and done’ big-bang overhaul. The justification is clear: forces will need to be brought along through the transformation, users trained and supported, and policies challenged so that they remain lawful.
The result will be a shared digital backbone that will unlock efficiency and capability gains. But the journey to interoperability isn’t strictly a technical challenge. It will require collaborative leadership, robust programme management, and flexibility to get there.
4. AI and facial recognition: a digital bobby on the beat
One of the most ambitious and important objectives in the white paper is the implementation of AI and facial recognition technology. We believe that the most significant aspect of this isn’t the scale of the technology being proposed, but the role it is intended to play.
Clearly there will be relevant AI use cases for many of the police’s challenges, like automating digital forensics, or applying predictive analytics for intelligence around crime hotspots or resourcing requirements. But while it is easy to frame this as a move towards more automated or surveillance-led policing, our reading of the strategy is for something more subtle: AI as a form of digital support embedded into everyday policing decisions, rather than a replacement for human judgement. Think augmentation on top of automation.
We’ve personified this as a ‘digital bobby on the beat’.
In this framing, AI and facial recognition will be there to act as a seventh sense for officers, helping them to remember previous experiences, see patterns, surface relevant information and prioritise attention in environments where crime is fast-moving and increasingly digital. The value lies not in AI making decisions, but in improving the safety, quality and readiness of the snap judgements that officers make every day in doing their work.
In our experience from delivering AI-enabled change across the justice system, including work supporting decision-making on HMCTS Common Platform, the most effective applications of AI have been those that reduce cognitive and administrative load while preserving accountability. AI works best when it helps professionals focus on substance rather than process, by bringing the right information into view at the right moment. We see the same principle reflected in the white paper’s emphasis on productivity, backlogs and consistency, even if it is not always stated explicitly.
This perspective is particularly important when considering facial recognition. Treated as a standalone capability, it risks being overinterpreted as an outcome in its own right, and as an erosion of civil liberties that drifts towards a police state. But treated as part of a toolset that adds wider context to decision making, it would instead act as an early warning signal, similar to the role that established ANPR technology plays today. A facial match should inform judgement, not determine it. Its confidence level, provenance and relevance need to be visible and understood, and it must sit alongside other intelligence, data, and professional assessment. The white paper’s focus on national standards, oversight and transparency suggests an awareness of this, but delivery will need to reinforce it deliberately, with strong AI governance underpinning the way ethics are managed.
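To illustrate what ‘inform judgement, not determine it’ could look like in software, here is a minimal sketch of a match surfaced as decision support. The field names, the confidence threshold and the wording are entirely our own assumptions rather than any real policing system’s API; the intent is simply to show confidence and provenance travelling alongside the match, with the output prompting corroboration rather than action.

```typescript
// Illustrative sketch only: a hypothetical decision-support payload for a
// facial recognition match. The fields, threshold and wording are our own
// assumptions, intended to show confidence and provenance travelling with
// the match, not any real system's API.

interface FacialMatchSignal {
  matchId: string;
  confidence: number;           // 0..1 similarity score from the matching engine
  sourceImage: string;          // provenance: where the probe image came from
  watchlist: string;            // provenance: which list the candidate sits on
  capturedAt: string;           // ISO 8601 timestamp of the capture
  corroboratingIntel: string[]; // other intelligence the officer should weigh
}

// The signal informs judgement rather than determining it: lower-confidence
// matches are framed as context to review, never as an instruction to act.
function presentToOfficer(signal: FacialMatchSignal): string {
  const pct = Math.round(signal.confidence * 100);
  const intel =
    signal.corroboratingIntel.length > 0
      ? signal.corroboratingIntel.join("; ")
      : "no linked intelligence";
  if (signal.confidence < 0.9) {
    return `Possible match (${pct}%, ${signal.watchlist}): treat as context only and corroborate against ${intel}.`;
  }
  return `Strong match (${pct}%, ${signal.watchlist}): review provenance (${signal.sourceImage}) and ${intel} before acting.`;
}
```

The specific threshold and phrasing would of course be matters for policy, oversight and governance; the design point is that the match arrives as one input among several, with its uncertainty and origin kept visible to the officer.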
We interpret the creation of Police.AI as an attempt to institutionalise this approach at scale, providing a consistent way to deploy AI that supports decision-making rather than fragmenting it into force-by-force experimentation. In our view, Police.AI’s success will depend on whether it behaves less like a technology lab and more like a decision-support platform, embedding AI into operational workflows in ways that are explainable, predictable and trusted by those using it.
None of this works without the data foundations the white paper rightly prioritises. We have seen AI programmes struggle not because the enabling technology is complex or the algorithms are weak, but because data is fragmented, inconsistent or poorly governed. A digital bobby is only as good as the information it can access. National data standards, interoperability and shared platforms are therefore not peripheral enablers, but the precondition for AI and facial recognition to function safely and effectively in policing.
Taken together, we believe the white paper points towards a future where AI and facial recognition strengthen policing by enhancing human judgement rather than replacing it. The risk to avoid is a Robocop-style narrative setting in, which would alienate citizens who have been fed a diet of dystopian headlines and who may interpret AI as reducing trust, context and accountability to mere digital processes. If delivered well, AI- and facial recognition-aided policing will instead be viewed as a digital bobby that helps officers make better and faster decisions at the scene.
5. Getting the institution ‘AI-ready’
One area the white paper touches on only indirectly, but which we believe will be decisive in practice, is the sequencing and readiness required to deliver AI at scale in a system as complex as policing. The document sets out ambitious objectives for AI, facial recognition and data-driven capability, but it is relatively light on how these changes will be absorbed by the organisation itself.
In our experience, AI programmes in large public services rarely fail because the technology does not work. They fail because users and the organisation are not ready to receive it. Readiness here is not about user or organisational cynicism, or a lack of enthusiasm or intent; it is about the governance, skills, ownership and operational or domain integration needed to make the AI succeed. When AI is introduced before these foundations are in place, it tends to fragment rather than unify delivery. In a high-profile, constantly scrutinised environment like policing, a failed AI programme can be reputationally harmful.
The white paper’s emphasis on national standards, Police.AI and consolidated digital services suggests an awareness of this risk. However, it also creates a tension. Centralising AI capability can accelerate consistency, but it can also distance delivery from frontline reality if the sequencing is wrong. We have seen this pattern before, where AI tools are technically sound but struggle to land because roles, responsibilities and workflows have not been redesigned to accommodate them, or because they were underscoped in the design phase, leaving the AI configured with a clear gap in process.
This risk is particularly acute in policing, where decision-making is distributed, time-critical and often discretionary. AI that is not clearly owned, understood or governed will rapidly introduce hesitation rather than confidence. Officers will either ignore it, over-trust it, or work around it. A national dialogue about its role will be critical. None of those outcomes delivers the benefits that the white paper is aiming for.
From our perspective, the most important question for Police.AI is not which AI should be experimented with first, a framing that leads to a deployment race, but where the organisation is most ready to absorb it. That means starting with use cases where data quality is understood, accountability is clear and outcomes are measurable. It also means being explicit about what AI is not being used for, so that expectations are set from the outset.
There is also a skills dimension that should not be underestimated. The white paper references workforce capability, but AI readiness is not limited to specialist roles. Supervisors, senior officers and operational leaders all need to understand what AI outputs mean, how much confidence to place in them, and when to challenge them. Unions and civil liberties groups should be brought along on the journey too. Without this shared buy-in and literacy, AI risks becoming neutered, sidelined or misapplied.
In our view, achieving the aims of the white paper will depend less on the sophistication of models or algorithms, and more on delivery discipline. AI needs to arrive in a system that has been prepared and configured for it, not one that is expected to adapt around it in real time.

Our recommendations to those delivering the reforms
The white paper sets an ambitious and necessary direction. It recognises that modern policing cannot function without modern data and digital infrastructure. Having absorbed the document, we believe that focus should be applied to these priorities:
- Prioritise interoperability and open standards. Mandate interoperability as a core design requirement and insist that vendors adhere to these standards. Use interoperability requirements to drive common data standards and open APIs for the seamless exchange of information.
- Treat national data standards as core infrastructure rather than guidance. If compliance is watered down so that it becomes optional or voluntary, then fragmentation will set in from the outset and it will be difficult to change course.
- Sequence the delivery. Prioritise data integration and quality improvement work so that it occurs either before, or in parallel with, platform build and large-scale AI deployment.
- Engage stakeholders early. Proactively involve a broad range of stakeholders in the transformation journey. Including these groups at key stages, demonstrating how these tools will transform policing, and being transparent about the safeguards in place will be key to sustaining consent.
- Start small, prototype, then scale. Use pilot programmes and sandboxes (potentially through Police.AI) to trial new tools, use cases or data integrations on a limited scale. Then rigorously evaluate outcomes, learnings and public feedback before scaling up.
- Govern Police.AI as a delivery capability. Encourage it to experiment, but ensure that clear and measurable operational outcomes are defined so that it doesn’t become a siloed centre of expertise.
- Define end-to-end use cases for facial recognition. Include data governance, human oversight and performance monitoring.
- Consider the benefits of transitional architectures. A ‘big bang’ is risky and more likely to fail, so allow time for legacy systems to coexist while national platforms mature.
- Align funding and accountability for digital transformation programmes. Measure the value and productivity gains realised against the financial investment made in the innovation, and use that insight to assess impact and adapt the approach.