The Always-On Police Camera
New patents propose body cameras that activate automatically at the sound of gunfire, or that scan crowds for criminal suspects. What does that mean for expectations of privacy in public spaces?
Last summer, Baltimore police officer Richard Pinheiro submitted body-camera footage as evidence in a drug bust. In Pinheiro’s video, filmed on an Axon Body 2 camera, he wanders through a junky backyard for a few moments before spotting, among the detritus, a discarded soup can. He picks it up and pulls out a small baggie of white pills that he and two other officers would later claim belonged to the suspect. Pinheiro and the other officers arrested the man, then submitted the evidence against him to the Baltimore Police Department: the baggie, their testimony, and the video.
What Pinheiro and the other officers didn’t seem to realize was that the Axon Body 2 has a “failsafe” feature: the camera is always on, and it continuously saves the 30 seconds of footage preceding the moment an officer presses the “REC” button. Those 30 seconds told an entirely different story. In footage Pinheiro never imagined anyone, let alone a jury, would see, he pulls a baggie of drugs from his pocket. In full view of the two other officers, he places the baggie in the soup can and drops it on the ground. Pinheiro then presses record and, with the cameras rolling, serendipitously “discovers” the soup can.
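Axon hasn’t published how that buffer works internally, but the behavior the footage revealed is simple enough to sketch. Below is a minimal, hypothetical Python model of a pre-event buffer: the camera retains only a rolling 30 seconds of frames until pressing record flushes that buffer into the saved clip. All names and parameters here are illustrative, not Axon’s.

```python
from collections import deque

class PreEventBuffer:
    """Hypothetical sketch of a body camera's pre-record buffer.

    The camera is always capturing; only the most recent N seconds
    are retained until the officer presses record.
    """

    def __init__(self, seconds=30, fps=30):
        # Ring buffer: once full, each new frame evicts the oldest one.
        self.frames = deque(maxlen=seconds * fps)
        self.recording = False
        self.saved = []

    def on_frame(self, frame):
        if self.recording:
            self.saved.append(frame)   # live recording
        else:
            self.frames.append(frame)  # rolling pre-event window

    def press_record(self):
        # The "failsafe": the buffered 30 seconds are folded into the
        # permanent clip before live recording begins.
        self.saved = list(self.frames)
        self.recording = True
```

A design like this is why Pinheiro’s staging was captured: by the time he pressed record, the camera had already kept the half-minute that preceded it.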
Pinheiro was eventually suspended, arrested, and, in January, indicted by a grand jury on charges of “tampering with or fabricating physical evidence” and misconduct in office—committing a “wrongful and improper act in the performance of his official duties.” The suspect was released from jail and his charges were dropped.
Baltimore prosecutors would eventually drop more than a hundred cases involving Pinheiro and the other officers in the video, creating a months-long scandal that summer as officers admitted to “re-creating” evidence finds with their cameras. Then-commissioner Kevin Davis was forced to issue an internal memo to all officers forbidding the practice: “In the event your body worn camera is not activated during the recovery of evidence,” the memo reads, “under no circumstances shall you attempt to recreate the recovery of evidence after re-activating your body-worn camera.”
In July 2014, after a brutal summer of police-involved shootings of unarmed black men, civil-rights groups rallied behind cameras like Baltimore’s as accountability tools. The cameras have since proved vulnerable to manipulation, and now new technology is being offered as a stopgap. In mid-September, Digital Ally, a cloud-storage and video-imaging company, announced a series of patents for cameras that would be triggered automatically by various stimuli, not just an officer pressing record. Theoretically, these would end the problem of both re-creation and cameras inexplicably failing to record use-of-force scenarios.

Some of the “triggering events” in Digital Ally’s patents are for crisis response, in the event of a car crash or a gun being pulled from its holster. But some of the auto-triggers would cause the cameras to record simply as police move through public spaces. As described in one patent, an officer could set their body camera to actively search for anyone with an active warrant. Using face recognition, the camera would scan the faces of passersby, then compare them against a database of wanted people. (The process could work similarly for a missing person.) If there’s a match, the camera would begin to record automatically.
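The patent describes that matching only as logic, not as code. A toy sketch of the loop it implies might look like the following, assuming a hypothetical embed() model that turns a detected face into a feature vector and a precomputed database of wanted-person embeddings; the threshold, too, is a stand-in, not anything Digital Ally has disclosed.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative; real systems tune this empirically

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def scan_frame(face_crops, embed, warrant_db):
    """Return True if any face in the frame matches the watchlist.

    face_crops: cropped face images from an upstream detector (assumed)
    embed:      hypothetical model mapping a face crop to a vector
    warrant_db: {person_id: embedding} for people with active warrants
    """
    for crop in face_crops:
        vec = embed(crop)
        for person_id, reference in warrant_db.items():
            if cosine_similarity(vec, reference) >= MATCH_THRESHOLD:
                return True  # trigger: the camera starts recording
    return False
```

The hard parts are everything this sketch assumes away: detecting faces on-device, running the model fast enough for live video, and keeping false matches rare.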
“Facial recognition is probably the most menacing, dangerous surveillance technology ever invented,” Woodrow Hartzog, a professor of law and computer science at Northeastern University, told me in an email. “We should all be extremely skeptical of having it deployed in any wearable technology, particularly in contexts where the surveilled are so vulnerable, such as in many contexts involving law enforcement.”
Mobile, instantaneous facial recognition is still technically infeasible because of the enormous processing demands. But when it arrives, both the right and the ability to exist anonymously in a crowd will disappear. Consider Oregon, where the public generally has the right to refuse to show ID to police if they’re not suspected of any crime (and where police tested Amazon’s “Rekognition” software, scanning public CCTV footage to match people’s faces against a mug-shot database). Individually identifying each person in a crowd would take officers hours, and many people would be within their rights to refuse. Automatic facial recognition conserves manpower, ostensibly to enhance public safety, but it also requires that each person in public have a searchable identity.
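The Oregon tests’ exact pipeline hasn’t been published, but Rekognition’s public API shows how little code the basic operation takes. Here is a sketch using boto3, assuming booking photos have already been indexed into a face collection; the collection name and threshold are illustrative.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

def match_against_mugshots(frame_jpeg_bytes):
    """Search one CCTV frame against a face collection of mug shots.

    'mugshot-collection' is an illustrative name; an agency would have
    indexed its booking photos into such a collection beforehand.
    """
    response = rekognition.search_faces_by_image(
        CollectionId="mugshot-collection",
        Image={"Bytes": frame_jpeg_bytes},
        FaceMatchThreshold=80,  # percent similarity; a tunable choice
        MaxFaces=5,
    )
    return response["FaceMatches"]  # empty list if no one matches
```

A loop over video frames is all it takes to turn a CCTV feed into a continuous identity check.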
In the aftermath of the protests following the 2014 shooting death of Michael Brown, the Department of Justice investigated the Ferguson, Missouri, police department. Its report asserted that Ferguson’s “law-enforcement efforts are focused on generating revenue.” The DOJ claimed the Ferguson PD engineered a racist, lucrative revenue model wherein officers targeted black drivers for stops and searches, penalizing them with citations and issuing arrest warrants for missed payments at a much higher rate than for non-black drivers. Body cameras equipped with facial recognition, rather than holding police accountable, would enable such a system by making the simple act of walking outside a risk.
The patents also imagine different types of audio triggers for the cameras. “Raised voices” and “vocal stress” could activate them, as could the sound of gunfire. Police already use “acoustic surveillance” to listen for gunfire, particularly in cities like Chicago and Oakland that struggle with gun violence. The patents also propose neighborhood specificity: Police departments could create geofences, so that officers entering designated areas would always trigger their cameras. Finally, the patents cover biometric triggering events. Officers would be equipped with sensors measuring their vitals: heart rate, breathing, blood pressure, and so on. When “biometric stress” is detected, the cameras would begin to record.
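The filings describe these triggers abstractly rather than as an implementation. One plausible way to compose them is a dispatcher that polls the sensors and starts recording when any condition fires; in this sketch, every threshold and helper is illustrative, not drawn from Digital Ally’s patents.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    audio_db: float       # ambient loudness, in decibels
    gunshot_score: float  # confidence from an acoustic classifier, 0 to 1
    lat: float            # officer's current position
    lon: float
    heart_rate: int       # beats per minute, from a worn sensor

# Illustrative thresholds; any real deployment would tune these.
AUDIO_DB_LIMIT = 85.0
GUNSHOT_LIMIT = 0.9
HEART_RATE_LIMIT = 140

def inside_geofence(lat, lon, fence):
    """Crude bounding-box check; real geofences would be polygons."""
    (min_lat, min_lon), (max_lat, max_lon) = fence
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def should_record(r: SensorReadings, fences) -> bool:
    """Return True if any patent-style trigger condition fires."""
    if r.audio_db >= AUDIO_DB_LIMIT:      # raised voices, vocal stress
        return True
    if r.gunshot_score >= GUNSHOT_LIMIT:  # acoustic gunfire detection
        return True
    if r.heart_rate >= HEART_RATE_LIMIT:  # "biometric stress"
        return True
    # Geofenced neighborhoods: entering any fence triggers recording.
    return any(inside_geofence(r.lat, r.lon, f) for f in fences)
```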
Auto-record isn’t entirely new, nor entirely foolproof. In July of last year, Justine Damond, an unarmed Australian woman, was shot and killed by Minneapolis police responding to her 911 call. Both responding officers were wearing body cameras and, as media coverage would later point out, were equipped with Axon Signal, a device that automatically turns on officers’ body cameras in the event of a crash or a shooting. Yet no footage of that night was ever recorded: Neither officer turned on their individual camera or the vehicle-equipped camera.
Technological stopgaps and failsafes can only go so far in maintaining consistent standards for police accountability. A March report from the tech-policy think tank Upturn found that 40 percent of body-camera footage of officer-involved shootings in 2017 was never released to the public. Further, many departments have moved to rewrite their internal policies to make it more difficult for the public to request footage.
“If the goal of body cameras is to capture when force is used, then activation upon when a gun is drawn or fired, a siren is turned on, or perhaps even when the police car door opens might make sense,” Hartzog said. As he explains, having cameras set to “off” by default, but triggered in specific scenarios, can guard against “purpose drift,” whereby technology introduced for one purpose is later used for another. Ideally, the triggers would make activation consistent rather than discretionary.
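Read in engineering terms, Hartzog’s off-by-default principle amounts to a default-deny allowlist: only enumerated events may activate the camera, so a new use can’t be bolted on without visibly changing the policy itself. A minimal sketch, with trigger names that are purely illustrative:

```python
# Default-off: anything not on this list cannot activate the camera,
# which is one way to resist "purpose drift."
ALLOWED_TRIGGERS = frozenset({"gun_drawn", "gunfire", "siren_on", "door_open"})

def handle_event(event_name: str) -> bool:
    """Return True only for the enumerated use-of-force triggers."""
    return event_name in ALLOWED_TRIGGERS
```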
Having cameras always on, always recording, may be a surveillance nightmare, but leaving activation entirely to officer discretion, even with failsafes, risks manipulation or misconduct. In certain spaces of social life, like airports, we’re willing to accept a forced lack of anonymity. But when we let police set the boundaries of permanent suspicion, we risk a world where going out in public is a transaction: We have to exchange anonymity for the right to be presumed innocent.