Facebook Treats Its Ethical Failures Like Software Bugs, and That’s Why They Keep Happening
Facebook should have anticipated that its ad platform could be misused.
As the story of Russia, Facebook, and the 2016 US election drags on, Facebook’s been quick to open its wallet. It took out full-page ads in both the New York Times and the Washington Post earlier this month to tell the world how it’s going to protect us from election interference, and it’s aggressively pushing that message in a paid Google search campaign.
But opening up the black box so we can figure out what happened? Not so much. Congressional sources claim the company has been slow to cooperate with their investigation into Russian election interference. Meanwhile, the company scrubbed thousands of posts shared during the election from its system after learning that a researcher had used the data to show that their impact was much bigger than Facebook has claimed.
It’s no surprise Facebook is trying to keep the full extent of its dirty laundry out of view. The company has a long track record of treating ethical failures like bugs to be fixed: say sorry, squash them, and keep moving forward. That’s what happens when you build your company on a motto like “move fast and break things,” after all: every failure gets treated like an isolated incident, rather than part of a systemic pattern that needs systemic action. As a result, Facebook keeps making the same kinds of blunders, over and over again.
For example, consider a much simpler problem uncovered in its ad platform last month. A ProPublica investigation revealed that Facebook’s ad interface, which allows advertisers to choose which users see their messages, included audience categories like “Jew haters” and “How to burn jews.” The journalists decided to test whether the company would actually approve an ad targeting users in those categories. Within 15 minutes, their ad was live.
Facebook immediately removed the categories. “We never intended or anticipated this functionality being used this way,” COO Sheryl Sandberg wrote in a post outlining how Facebook would strengthen its tools.
But Facebook should have anticipated that its ad platform could be misused. In fact, ProPublica itself had already told it so—a year ago. Back then, the system allowed ProPublica journalists to buy a real-estate ad that excluded users by race or ethnicity—something that’s illegal under the federal Fair Housing Act of 1968. Facebook apologized, and stopped allowing housing advertisers to use those filters. In other words, it fixed the bug and moved on. But it didn’t look at how else its system might be perpetuating bias or hate.
It certainly could have. Facebook is worth nearly half a trillion dollars. It banked nearly $27 billion in advertising revenue last year alone. And, if you buy into Silicon Valley’s vision of itself, it’s staffed by some of the “best and brightest” people in the world. It absolutely had the resources to perform a far-reaching, systemic review of the vulnerabilities in its ad interface. It just wasn’t a priority. After all, advertisers were paying. Users were clicking. If money is the only measure, then the system was working fine.
That’s the problem. For years, tech has chased growth at all costs—and those costs have been paid by all of us.
We can see it way back in 2014, when Facebook launched Year In Review, a feature that allowed users to create and share albums of their highlights from the year. My friend Eric Meyer had been avoiding it: his six-year-old daughter, Rebecca, had died of aggressive brain cancer earlier that year. But around Christmas, Facebook created a sample Year In Review album for him anyway, and posted it to his page.
“Here’s what your year looked like!” the copy read. Below it was a picture of Rebecca—the most popular photo Meyer had posted all year. Surrounding her face were illustrations, made by Facebook, of partiers dancing amid balloons and streamers. The team had been so focused on designing for people with positive experiences that it hadn’t thought about what would happen when the system pulled in negative ones, and the result was heartbreaking.
Meyer’s story went viral, and Facebook vowed to do better. And it did, for a minute: 2015’s Year In Review feature was less pushy, and didn’t pull users’ personal content into Facebook’s design. But once again, the problem was reduced to a bug: fix it and forget it.
Years later, Facebook is still conflating popular photos with positive ones.
Just last month, Guardian tech journalist Olivia Solon found out that Facebook was using one of her posts from Instagram (which Facebook also owns) in an advertisement to her friends. These ads are common: Facebook pulls a popular photo out of your Instagram feed and inserts it into an ad. Then it figures out which of your friends aren’t using Instagram, and promotes that ad on those friends’ Facebook pages. The idea is that those friends will see just how much fun you’re having on Instagram, and be enticed to sign up, too.
But Solon’s post wasn’t your typical Instagram fare—no puppies, beaches, or brunches. It was a screenshot she had taken of an anonymous email she received last year. “I will rape you before I kill you, you filthy whore!” it said.
Solon had posted the image on Instagram to illustrate the kind of abuse women get online. But just as Meyer experienced, Facebook took her photo out of the context in which she’d shared it, and placed it into its own. The result was unsettling: a rape threat wrapped in a peppy ad, and inserted in her friends’ feeds.
Facebook may not have intended to surface traumatic content, just like it didn’t intend to let advertisers post hateful or nefarious ads. But it did intend to prioritize rapid growth and user engagement over all else. Mark Zuckerberg said as much in his letter to investors when the company went public in 2012: “As most companies grow, they slow down too much because they’re more afraid of making mistakes than they are of losing opportunities by moving too slowly,” he wrote.
These priorities have consequences—and those consequences are now more far-reaching than ever, spilling over from our emotional lives into our social and political infrastructure. So the next time a Facebook exec says “sorry,” don’t praise their PR response. Ask them how much time, money, and staff they’ve committed to changing the policies, processes, and culture that allowed “engagement” to trump ethics in the first place. Demand that they look past simple fixes, and do the real work—or we’ll all pay the price.