
Edmonton's new high-tech cops see everything. The question is, should they?

Imagine walking down a snowy street in downtown Edmonton. A police officer passes you, their uniform dotted with the usual gear: radio, handcuffs, a holstered sidearm. But there's something new clipped to their chest these days, something most people wouldn't notice unless they looked closely. A body camera, yes, but not the kind that simply records. This one sees you. Really sees you. Powered by artificial intelligence, it compares your face against a database in real time, checking whether you match anyone on what authorities delicately term a 'watch list.' No warrant needed. No reasonable suspicion required. Just the quiet whir of algorithms assigning probabilities to your humanity.

This isn't speculative fiction ripped from Minority Report. It's happening right now in Canada, where Edmonton's police force has become one of the first major North American departments to deploy AI-equipped body cameras. While law enforcement frames this as a logical evolution of public safety technology, many civil liberties experts see something more sinister emerging: a fundamental rewiring of police-community relations in which suspicion becomes automated, accountability gets outsourced to code, and the burden of error falls disproportionately on marginalized groups.

The technology itself sounds deceptively straightforward. Facial recognition algorithms analyze live footage from officers' cameras, cross-referencing faces against pre-compiled databases. Matches trigger alerts, theoretically helping identify wanted individuals or missing persons. Proponents argue this brings efficiency to policing, allowing officers to 'focus on real threats.' But peel back that corporate marketing speak and you find disturbing implications hiding in plain sight.
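To make the mechanics concrete, here is a minimal, purely illustrative sketch of how a real-time watch-list match generally works: a face in the camera feed is reduced to a numeric "embedding," compared against stored embeddings, and any similarity score above a tuned threshold raises an alert. Nothing below reflects Edmonton's actual system, whose vendor, models, and thresholds have not been made public; every name and number is hypothetical.

```python
# Illustrative sketch only: a generic embedding-and-threshold matching loop,
# NOT the Edmonton Police Service's actual system (its internals are undisclosed).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Return every watch-list entry whose stored embedding clears the threshold.

    `watchlist` is assumed to be a list of (person_id, embedding) pairs.
    The threshold is an arbitrary illustrative value; real systems tune it,
    and where it sits decides the trade-off between missed matches and
    false alerts against innocent passers-by.
    """
    alerts = []
    for person_id, stored_embedding in watchlist:
        score = cosine_similarity(face_embedding, stored_embedding)
        if score >= threshold:
            alerts.append((person_id, score))
    return sorted(alerts, key=lambda hit: hit[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical watch list of 1,000 entries with random stand-in embeddings.
    watchlist = [(f"entry-{i}", rng.normal(size=128)) for i in range(1000)]
    live_face = rng.normal(size=128)  # stand-in for a face cropped from a camera frame
    # With random stand-in data this will almost certainly print an empty list.
    print(check_against_watchlist(live_face, watchlist))
```

The entire civil-liberties debate hides inside that one threshold parameter: lower it and the system flags more wanted people along with more innocent ones; raise it and the reverse.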

First, consider the watch lists themselves. Who decides which faces get logged into these digital lineups? What criteria qualify someone for perpetual surveillance? Edmonton police have been vague, stating only that lists include people with outstanding warrants or those deemed 'high risk.' But historically, such classifications disproportionately target communities of color, activists, and low income neighborhoods. Once flagged, escaping the system's memory becomes nearly impossible. It's the technological embodiment of 'guilty until proven innocent,' with Silicon Valley efficiency.

Facial recognition isn't infallible, either. Studies consistently show higher error rates for women, people with darker skin tones, and ethnic minorities. The National Institute of Standards and Technology found that some algorithms misidentify Asian and African American faces up to 100 times more frequently than Caucasian ones. Now imagine those flawed systems deployed during tense police interactions. A faulty match could escalate a routine encounter into a life-altering disaster. Picture an officer reacting to an erroneous alert, hand drifting toward their weapon because a computer declared '90% confidence' in identifying a shoplifting suspect. Who bears responsibility when code replaces judgment? The answer, thus far, seems to be no one.
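Some back-of-the-envelope arithmetic shows why that disparity matters at street scale. Using the 100-fold gap NIST reported for some algorithms, and otherwise assumed numbers (the per-scan false-match rate and daily scan volume below are hypothetical), the expected count of innocent people wrongly flagged diverges dramatically between groups:

```python
# Back-of-the-envelope arithmetic with assumed rates, to show how a "small"
# error rate scales once cameras scan thousands of innocent faces.
# The 100x disparity mirrors the NIST finding cited above; the absolute
# false-match rate and scan count are hypothetical.
def expected_false_alerts(false_match_rate: float, faces_scanned_per_day: int) -> float:
    """Expected number of innocent people wrongly flagged per day."""
    return false_match_rate * faces_scanned_per_day

scans_per_day = 5_000      # assumed faces passing officers' cameras each day
baseline_rate = 0.0001     # assumed 0.01% false-match rate for the best-served group
disparity = 100            # order-of-magnitude gap NIST reported for some algorithms

for group, rate in [("lowest-error group", baseline_rate),
                    ("highest-error group", baseline_rate * disparity)]:
    print(f"{group}: ~{expected_false_alerts(rate, scans_per_day):.1f} false alerts per day")
```

Under those assumed numbers, one group absorbs a false alert roughly every other day while the other absorbs about fifty a day, before a single human judgment enters the picture.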

Edmonton's experiment reveals three troubling truths about AI's creep into law enforcement. First, the adoption happens faster than public debate. These cameras appeared with little community consultation, no meaningful legislative oversight, and scant transparency about their capabilities. Second, we're witnessing corporate interests shape public safety narratives. Major tech firms lobby police departments to become testing grounds for imperfect systems, treating human rights as secondary concerns. Third, once these tools embed themselves into policing infrastructure, removing them becomes politically difficult. Fear sells better than prudence.

Let's discuss everyday impacts. A construction worker heading home after a night shift might get stopped because the system confuses him with someone who skipped bail. A teenager walking to school could enter permanent databases simply for existing near a 'high risk' area. Domestic violence victims whose abusers have police connections might find themselves targeted through watch-list manipulation. The chilling effect on public spaces becomes palpable: why attend a protest if facial recognition could tag you as a 'person of interest'? Why seek help from officers if their cameras automatically scan everyone nearby?

Business interests heavily influence this landscape. Companies like Axon (maker of Taser weapons) aggressively market surveillance systems as force multipliers, promising reduced crime through constant monitoring. But their profit models depend on cities buying endless upgrades: better cameras, faster software, larger data storage. It's the surveillance-industrial complex, where safety gets commodified and civil liberties become roadblocks to quarterly earnings. When Axon's CEO mused about drones firing Tasers from the sky last year, it wasn't dystopian parody. It was a market pitch.

Regulators remain woefully behind the curve. Canada's proposed Artificial Intelligence and Data Act focuses mostly on commercial applications, not law enforcement. The European Union's AI Act sharply restricts real-time facial recognition in public spaces, permitting it only under narrow exceptions, but North America lacks similar protections. Edmonton's pilot program operates in this gray zone, expanding capabilities without clear legal frameworks. Without strict guidelines governing watch lists, error transparency, and usage limitations, police essentially self-regulate technology that could destroy reputations or lives with a false ping.

History offers grim precedents. In the 2000s, police departments raced to adopt ShotSpotter, acoustic systems claiming to pinpoint gunfire locations. Years later, investigations revealed frequent false alerts, reports that analysts altered alert records after the fact, and deployments concentrated disproportionately in Black neighborhoods. More recently, predictive policing algorithms like PredPol promised data-driven crime prevention. Instead, they perpetuated biased patrol patterns by training on historically skewed arrest data. Now facial recognition risks cementing these failures into an automated feedback loop in which overpoliced communities generate more 'suspicious' data points, justifying further surveillance.

Some argue resisting this technology means opposing progress. But progress implies improvement for all, not just efficiency for authorities. True innovation would address root causes like poverty and mental health crises, not expand tools for punitive surveillance. Norway invests in rehabilitation-focused prisons with open campuses and vocational training, and has among the lowest reoffending rates in Europe. Portugal decriminalized personal possession of all drugs in 2001, treating addiction as a health issue rather than a criminal one; overdose deaths and drug-related harms fell sharply. These approaches demonstrate that technology isn't the only, or the best, solution to societal challenges.

Moving forward requires immediate action. Legislators must ban real-time facial recognition in public spaces until rigorous, independent audits prove minimal bias. Watch lists should face judicial review, like search warrants. Companies profiting from police AI must assume liability for algorithmic errors. Perhaps most crucially, communities deserve veto power over surveillance tools deployed in their neighborhoods. If Edmonton residents overwhelmingly reject these cameras, that decision should bind authorities.

I don't write this as a technophobe. AI has wondrous potential, from medical diagnostics to climate modeling. But unleashing it in law enforcement without guardrails transforms tools of safety into instruments of control. The quiet creep of surveillance normalizes the unacceptable. One day we'll wake up wondering how we traded anonymity for the illusion of security, how we accepted being perpetual suspects in our own cities. Edmonton's cameras offer that future in miniature, a warning flickering in the winter dark. It's not too late to choose differently, to demand technology that serves people rather than monitors them. But time, like facial recognition databases, fills up faster than we think.

Disclaimer: The views in this article are based on the author’s opinions and analysis of public information available at the time of writing. No factual claims are made. This content is not sponsored and should not be interpreted as endorsement or expert recommendation.

By Emily Saunders