This Is the Week That the Drone Surveillance State Became Real
Local police have access to drones, and footage from those flying cameras will be automatically analyzed by AI systems not disclosed to the public.
Affordable consumer technology has made surveillance cheap, and commoditized AI software has made it automatic.
Those two trends merged this week: on June 5, drone manufacturer DJI partnered with Axon, the company that makes Taser weapons and police body cameras, to sell drones to local police departments around the United States. Now, not only do local police have access to drones, but footage from those flying cameras will also be automatically analyzed by AI systems not disclosed to the public.
That footage will be uploaded or streamed to the same Axon cloud platform that already stores video from the body cameras the company sells, where it can be analyzed by Axon’s AI and used for anything from crowd monitoring to search and rescue, the company writes on its website.
This sounds vague, but AI research published two days earlier by academics from India and the U.K. shows exactly how such a drone system could be used. The paper, titled “Eye in the Sky,” details how drone footage could be used to detect “violent individuals” in real time.
To train the AI, the researchers flew a drone and snapped 2,000 images of people pretending to hurt each other. But because the researchers didn’t use real data, instead carefully staging the examples from which the AI learned to tell violence from normal human motion, there’s no guarantee the system would work well in the real world, David Sumpter, author of Outnumbered: Exploring the Algorithms that Control Our Lives, wrote on Medium.
“What the algorithm has done is classify individuals with their hands or legs in the air as violent,” Sumpter wrote. Others, like Google ethics hawk Meredith Whittaker, tweeted that the research was flawed and that “AI is facing a mounting ethical crisis.”
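To make Sumpter’s point concrete, here is a hypothetical sketch of the kind of pose-based shortcut he describes. The keypoint layout, thresholds, and the `looks_violent` heuristic are illustrative assumptions for this article, not the paper’s actual model:

```python
# Hypothetical sketch of the shortcut Sumpter describes: flagging
# "violence" whenever limbs are raised. Keypoints and thresholds are
# illustrative assumptions -- NOT the "Eye in the Sky" model itself.

from dataclasses import dataclass

@dataclass
class Pose:
    """2D keypoints for one person, in image coordinates (y grows downward)."""
    head_y: float
    left_wrist_y: float
    right_wrist_y: float
    left_ankle_y: float
    right_ankle_y: float
    hip_y: float

def looks_violent(pose: Pose) -> bool:
    """Flag a person as 'violent' if a wrist is above the head or an
    ankle is above the hip -- a rule that also fires on waving,
    stretching, or catching a falling friend."""
    wrist_above_head = min(pose.left_wrist_y, pose.right_wrist_y) < pose.head_y
    ankle_above_hip = min(pose.left_ankle_y, pose.right_ankle_y) < pose.hip_y
    return wrist_above_head or ankle_above_hip

# A person waving hello gets the same label as a person throwing a punch.
waving = Pose(head_y=50, left_wrist_y=30, right_wrist_y=120,
              left_ankle_y=300, right_ankle_y=300, hip_y=180)
print(looks_violent(waving))  # True -- a false positive
```

A rule like this looks accurate on carefully staged training photos, because the only people with raised limbs in those photos are the ones pretending to fight.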
Other groups, including researchers at Google and Facebook, have also done extensive work on the same underlying problem: tracking how people move, and what those movements might indicate they’re doing.
But the “Eye in the Sky” researchers go a step further, trying to associate certain poses with intent. The paper doesn’t discuss false positives: actions that could be misinterpreted as violence, such as play-fighting, knocking a fly off someone’s shoulder, or grabbing someone who’s falling.
Questions about the AI’s reliability notwithstanding, The Verge reports that the researchers have received approval to test the technology during two festivals in India in October.
With $160 for a Parrot drone like the one the “Eye in the Sky” team used, and a few pennies’ worth of time on Amazon’s cloud AI platform, anyone with a coding background could build a similar system. Even teens and cucumber farmers are building their own AI systems.
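For a sense of how low that barrier is, here is a minimal sketch of sending a single drone frame to Amazon Rekognition, one off-the-shelf cloud AI service of the kind the paragraph above alludes to. The file name is a placeholder, and the sketch assumes AWS credentials are already configured:

```python
# Minimal sketch: send one drone frame to Amazon Rekognition's
# off-the-shelf label detection. "frame.jpg" is a placeholder path;
# boto3 must be installed and AWS credentials configured.

import boto3

def label_frame(path: str) -> list[tuple[str, float]]:
    """Return the (label, confidence) pairs Rekognition assigns to one image."""
    client = boto3.client("rekognition")
    with open(path, "rb") as f:
        response = client.detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=10,
            MinConfidence=70.0,
        )
    return [(label["Name"], label["Confidence"]) for label in response["Labels"]]

if __name__ == "__main__":
    for name, confidence in label_frame("frame.jpg"):
        print(f"{name}: {confidence:.1f}%")
```

Each call costs a fraction of a cent; wiring it to a drone’s video feed is a weekend project, not a research program.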
But the “Eye in the Sky” paper was published openly, unlike Axon’s technology. The firm, which has acquired two AI companies and has access to petabytes of police camera footage, has no obligation to disclose how its AI was trained or whether it’s accurate or unbiased. Axon wasn’t immediately available to explain how it analyzes footage captured by its cameras.
Other kinds of AI, like facial recognition, are already in use in China to surveil ethnic minorities. U.S. authorities, like the Chicago Police Department, are already starting to adopt AI systems for predictive policing, another pursuit rife with bad data. In Baltimore, a plane loaded with cameras was secretly deployed as a surveillance tool until 2016. But now, AI-powered surveillance systems are becoming as easy for a police department to order as a car, handcuffs, or anything else it feels it needs to do its job effectively.
And drones are already getting the green light for police use, not just for surveillance but also to dispense pepper spray and tear gas.