Issue 25: Predictive Policing Is Making Marginalised Communities Live Under Permanent Suspicion
Predictive policing assigns suspicion before actions occur. This may erode community trust and fairness while embedding unexamined algorithmic influence into everyday law enforcement practice.
Predictive policing has turned suspicion into a routine administrative category that follows people long before any conduct occurs, and it is astonishing how easily this practice has been accepted as a normal feature of public safety. The idea that an algorithm can pre-assign risk to entire neighbourhoods and individuals has been absorbed into policing culture with little discomfort, even though the practice alters long-standing assumptions about fairness, due process, and the meaning of community trust. The technology has not simply introduced new methods; it has redefined the threshold at which suspicion begins, and it has done so without meaningful, transparent legal scrutiny.
I want to spend time on an issue that has been growing in scale and consequence, yet it still feels misunderstood in public debate.
I am referring to predictive policing, a set of technologies that use large pools of data to indicate where law enforcement attention should concentrate.1
I have observed governments, police forces, and private vendors speak about predictive policing as if it represents a neutral and objective way to allocate resources.
I have also watched the public absorb these narratives with a mixture of curiosity and unease.
I want to offer clarity about why this technology matters, how it transforms the experience of living in a community, and why it normalises suspicion long before any crime is committed.
The way predictive policing works has a direct influence on how neighbourhoods are perceived, how people are categorised, and how law enforcement behaves.
It is important to understand that predictive policing is not a futuristic idea waiting to be tested. It exists today in many cities across the world. It is already being deployed.
I approach this subject as someone who studies and also teaches the relationship between law, technology, and society.
I spend a considerable amount of time reading legal documentation, technical reports, and implementation stories about these tools.
Predictive policing works through the collection and analysis of large amounts of data, including historical crime records, arrest data, social network associations, and a variety of contextual details.
The system processes this information and produces indications about where police officers should patrol or who they should focus on.
It presents these outputs as neutral observations.
A police department may receive a notification that a particular neighbourhood might experience crime within a certain period or that a particular individual might be at risk of involvement in criminal activity.
This may seem harmless, even helpful. On the surface, data appears to be lending support to public safety professionals.
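To make this concrete, here is a deliberately simplified sketch of the kind of place-based scoring these tools perform. The neighbourhood names, the data, and the ranking rule are my own illustrative assumptions, not any vendor's actual method.

```python
from collections import Counter

# Hypothetical historical incident records: (neighbourhood, offence type).
# In a real deployment these would come from police record systems.
incidents = [
    ("Riverside", "burglary"), ("Riverside", "theft"),
    ("Hillcrest", "theft"),
    ("Riverside", "assault"), ("Riverside", "theft"),
    ("Old Town", "burglary"),
]

def patrol_priorities(records, top_n=2):
    """Rank neighbourhoods by raw historical incident counts.

    This mirrors the core move of many place-based tools: treat the
    recorded past as a proxy for future risk. Nothing here asks how
    the records were produced in the first place.
    """
    counts = Counter(area for area, _ in records)
    return counts.most_common(top_n)

print(patrol_priorities(incidents))
# [('Riverside', 4), ('Hillcrest', 1)] -- Riverside tops the list
# simply because it dominates the historical record.
```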
The story becomes more complicated when we examine how the data is produced, how the system interprets it, and how the outputs transform behaviour.
Historical crime data is rarely a clean record of events.
It is influenced by years of policing patterns, community interactions, and institutional priorities.
If a community receives disproportionate police attention, the data will reflect that increased presence.
Increased presence results in increased arrests, even when the actual level of crime is no higher than in neighbouring communities.
The data feeds back into the system, which then tells the police to return to the same neighbourhood. The loop continues.
In many countries, predictive policing amplifies historical patterns without critically evaluating them. This is how suspicion becomes normalised long before anything happens.
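The feedback loop itself can be captured in a few lines. Below is a toy simulation under assumptions I am inventing for illustration: two areas with identical underlying offence rates, where one starts with more recorded incidents, patrols are allocated in proportion to records, and more patrol presence means more offences are detected.

```python
# Toy simulation of the feedback loop described above.
# Illustrative assumption: areas A and B have the SAME true offence
# rate; only their historical records differ.
true_rate = {"A": 10, "B": 10}       # actual offences per period
recorded = {"A": 30.0, "B": 10.0}    # records skewed toward A by past patrols

for period in range(5):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to recorded incidents...
    patrol_share = {area: count / total for area, count in recorded.items()}
    # ...and detection scales with patrol presence.
    for area in recorded:
        recorded[area] += true_rate[area] * patrol_share[area]
    print(period, {area: round(count, 1) for area, count in recorded.items()})

# Every period, A accumulates records three times as fast as B, even
# though the true offence rates never differed. The initial skew is
# never corrected; it is continuously confirmed.
```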
I want you to imagine how it feels to live in an area that appears frequently in these predictions.
I said imagine, rather than picture an analogy, because this is not a metaphor.
Residents begin to notice police officers spending more time near their homes, walking on their streets, and conducting more stops. Police presence gradually becomes part of daily life.
People begin to feel observed. Some might feel safer at first, but many begin to feel targeted.
Teenagers walking to school are questioned. Adults leaving for work notice patrol vehicles lingering.
The message embedded in these actions is that the neighbourhood requires surveillance. It affects how residents understand themselves.
It affects how they think others see them.
Suspicion moves from the environment to individuals.
Certain predictive policing systems produce lists of people who are classified as likely to be involved in criminal activity.2 These individuals may not have committed a crime.
Their names appear because they have friends or relatives who have been arrested or because they live in areas with high levels of police activity.
Some systems categorise them because of past incidents that have nothing to do with future behaviour.
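A rough sketch of how association-based scoring can produce such lists (the scoring rule and the names are invented for illustration, not drawn from any real system):

```python
# Hypothetical association-based risk scoring: a person's score is
# driven by arrests among their contacts, not by their own conduct.
contacts = {
    "Dana": ["Lee", "Sam"],
    "Lee": ["Dana"],
    "Sam": ["Dana", "Lee"],
}
arrests = {"Dana": 0, "Lee": 2, "Sam": 0}

def association_score(person):
    # Sum arrests across the person's known contacts.
    return sum(arrests.get(contact, 0) for contact in contacts.get(person, []))

for person in contacts:
    print(person, association_score(person))

# Dana and Sam have never been arrested, yet both outscore Lee, the
# only person with an arrest record, simply because they know him.
```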
People on these lists often do not know they have been categorised in this manner.
They experience unexpected visits from law enforcement. They feel pressure to account for their movements.
They sense that their name carries an invisible mark created by a machine.
I want you to see how suspicion becomes embedded. A community or a person is identified through a technical process that appears neutral.
Police officers can act on this information. Their actions create more data. The system absorbs the data and produces a similar outcome.
Each time this cycle repeats, suspicion appears more legitimate. People begin to say that these neighbourhoods always have crime.
People begin to say that certain individuals seem to be involved in suspicious activities.
People forget that the perception was constructed through a technical loop rather than emerging from an independent assessment of facts.
It is important to understand that predictive policing does not emerge from malicious intent.
Many police departments genuinely believe the technology will help them.
They believe it will reduce crime, help them allocate limited resources, and reduce guesswork.
Technology companies promote these tools as solutions that draw on modern computational methods and data analysis.
The result is a set of practices that police officers come to trust without fully understanding how the predictions are constructed. This trust gives the system authority.
Once a system has authority, it becomes very difficult to challenge.
I have spoken with individuals who believe predictive policing simply automates what police departments already do.
It is true that police officers often rely on intuition and local knowledge when allocating resources.
Intuition can be biased, but data-driven tools merely appear more objective. This is where the danger resides.
When a biased dataset is presented as objective intelligence, the entire system becomes insulated against critique.
It becomes harder to point out that the underlying data reflects years of unequal attention. It also becomes harder to question the conclusions. People begin to think the technology has independent insight.
Law plays an important role here.
Predictive policing does not live outside the structure of legal rules. It must comply with constitutional principles, human rights protections, procedural fairness, and public accountability mechanisms.
Legal systems are not always ready to review how predictive tools categorise individuals.
Courts often see predictive tools as internal police processes.3
The legal system does not always require disclosure of how predictions are created.
Individuals who are affected by these predictions often have no avenue to challenge the categorisation.
This is how predictive policing gradually normalises the idea that suspicion can arise without any clear demonstration of wrongdoing.
Some communities see an increase in tension between residents and law enforcement. Others experience a gradual erosion of trust.
When a community feels that policing is based on algorithmic assumptions rather than lived experiences, relationships deteriorate.
Community members feel watched rather than supported. Police officers feel justified in their presence because the system gave them a reason to be there. Each group begins to misunderstand the other.
The misunderstanding is amplified each time the system reinforces the same prediction.
Individuals who are placed on predictive lists experience a more personal form of harm. They lose control over how they are perceived.
Their classification follows them in the background. It influences how officers interact with them. It influences how others talk about them.
This classification often has no clear path for removal. People live with an unspoken sense of scrutiny. This scrutiny can create emotional and psychological strain.
They carry the feeling that something external has defined them without their participation.
I want to highlight that predictive policing is not only a policing issue. It is a societal issue.
When communities begin to internalise the idea that some individuals are suspicious before they act, social cohesion weakens.
People feel pressure to distance themselves from individuals who might attract police attention.
People begin to self-censor. They worry about where they go, who they speak with, and how they conduct simple daily routines.
The awareness that an algorithm might interpret these patterns differently adds an invisible layer of anxiety.
These systems also introduce a form of administrative opacity. Decisions made by predictive tools are often explained through abstract descriptions of risk.
People cannot see the complete dataset. They cannot examine the model. They cannot review the reasoning.
Police departments may not understand the model themselves.
Technology companies jealously guard their algorithms as intellectual property.
Courts often accept these explanations, especially in early stages of deployment.
This lack of transparency restricts accountability. Accountability becomes difficult when individuals and communities do not know how the system reaches its conclusions.
When police officers rely on predictive tools, their actions seem more defensible.
A patrol in a neighbourhood becomes routine. A stop becomes part of the standard procedure. An investigation into a listed individual becomes an expected task.
Suspicion becomes embedded into everyday policing. This is how predictive policing normalises suspicion long before any crime occurs.4
The process unfolds gently but consistently. It alters behavioural expectations. It modifies community interactions. It changes the experience of being a citizen.
I approach this subject with a recognition that technology will continue to influence public safety practices.
Predictive policing is one expression of a broader trend toward data-driven decision making in public institutions.
I believe people should understand predictive policing clearly, not with exaggeration or fear, but with careful attention to how the technology functions and how it influences society.
Understanding gives communities the ability to participate in discussions about appropriate use, oversight, and accountability.
I am not offering advice, but rather inviting constructive thoughts.
Predictive policing raises significant questions about fairness, transparency, trust, and the lived experience of communities. These questions deserve an open discussion.
People should be part of these conversations.
When people understand how predictive systems work, they can reflect on how the technology influences their daily lives.
1. Ferguson, A. G. (2017). Policing predictive policing. Washington University Law Review, 94(5). https://ssrn.com/abstract=2765525
2. Pearsall, B. (2010). Predictive policing: The future of law enforcement? NIJ Journal, 266. U.S. Department of Justice, National Institute of Justice. https://www.ojp.gov/pdffiles1/nij/230414.pdf
3. Brayne, S., & Christin, A. (2021). Technologies of crime prediction: The reception of algorithms in policing and criminal courts. Social Problems, 68(3), 608–624. https://doi.org/10.1093/socpro/spaa004
4. Tang, J., & Volpi Hiebert, K. (2025, May 22). The promises and perils of predictive policing. Centre for International Governance Innovation. https://www.cigionline.org/articles/the-promises-and-perils-of-predictive-policing/