‘Crime before it happens’: new technologies, old promises
‘Predictive policing’ is a buzzword attracting growing attention both within and beyond academia. However, as is often the case when socio-technical phenomena are wrapped in clichéd tropes for the sake of communicative immediacy, utopian or apocalyptic fantasies cloud efforts to understand the implications of predictive algorithms in policing.
Predictive policing designates the use of algorithmic predictions to identify future targets for police intervention. Whilst this definition alone evokes futuristic imaginaries akin to the film ‘Minority Report’, the actual practices falling under the umbrella of ‘predictive policing’ are as mundane as automated risk assessments applied to individuals or geographical areas. These assessments rely on scoring parameters inherited from decades of anthropological and ecological theories on crime aetiology, which is already a good reason to avoid fixating excessively on the technological element and instead to begin by tracing the political agency behind automation. In short, we should focus on the ‘politics of technology’ rather than the ‘ethics of technology’. The very idea of pure automation – namely, a purely automated governance – seeks to craft an image of pacified power relations, while hiding the command structures and political subjectivity behind technology.
Predictive policing differs from other legal fields impacted by big data analytics and AI tools in two important respects. First, the algorithmic output may consist of an assessment applied either to specific individuals or to geospatial areas. This renders it somewhat fruitless to ground calls for a ban or for reform in exposing the risks of ‘profiling’ as generally understood in legal scholarship. In fact, the profiling argument is often sidestepped by simply asserting the non-invasiveness of place-based predictions. Without delving into how misleading that assertion is, the discussion should focus instead on whether the inner workings of the software, and consequently its predictive markers, can be inspected. Such a shift in focus would also allow the role of private vendors to be questioned: although the adoption of predictive algorithms does not outsource the police function per se, police action is in the majority of cases prompted by proprietary software marketed under trade secrecy, a phenomenon that may be described as soft privatisation.
The second noteworthy difference from algorithmic implementations in other branches of the legal system is that predictive policing comes into play at the level of crime prevention. This means that the algorithmic output is not meant to be officially communicated to any recipient awaiting a formal decision about his or her sphere of rights. The real and only ‘recipients’, so to speak, of the algorithmic oracle are the police officers themselves. Consequently, legal countermeasures cannot consist of review mechanisms like those designed for the forensic context, where it does make sense to allow decisions based on automated processing to be reversed by invoking the right to human intervention. Regrettably, both Directive (EU) 2016/680 and the recently adopted EU AI Act overlook these specificities, disproportionately emphasising the ‘profiling-like’ nature of algorithmic outputs as the decisive criterion for assessing the lawfulness of predictive policing tools.
Carlo Gatti
The author is a doctoral researcher at the Faculty of Law of the University of Turku, where he is exploring the interconnections between policing and algorithmic governance.
ORCID https://orcid.org/0000-0002-8523-6973
UTU webpage https://www.utu.fi/en/people/carlo-gatti