How the spread of child abuse imagery online is changing the debate over encryption

Content warning: This post discusses an investigation into the proliferation of child sexual abuse imagery online.

There are internet problems, and there are platform problems. It’s a distinction I wrote about earlier this year, when trying to think through how tech companies should respond to the Christchurch killing. And it’s a distinction I thought about again this weekend, when I read the New York Times’ disturbing investigation into the rapid spread of child sexual abuse imagery on the internet.

Here’s the high-level overview from reporters Michael H. Keller and Gabriel J.X. Dance:

Pictures of child sexual abuse have long been produced and shared to satisfy twisted adult obsessions. But it has never been like this: Technology companies reported a record 45 million online photos and videos of the abuse last year.

More than a decade ago, when the reported number was less than a million, the proliferation of the explicit imagery had already reached a crisis point. Tech companies, law enforcement agencies and legislators in Washington responded, committing to new measures meant to rein in the scourge. Landmark legislation passed in 2008.

Yet the explosion in detected content kept growing — exponentially.

As you might expect, the investigation explores where to place blame for the growth of this kind of crime. And soon enough it comes to tech platforms — in particular Facebook Messenger. The reporters write:

While the material, commonly known as child pornography, predates the digital era, smartphone cameras, social media and cloud storage have allowed the images to multiply at an alarming rate. Both recirculated and new images occupy all corners of the internet, including a range of platforms as diverse as Facebook Messenger, Microsoft’s Bing search engine and the storage service Dropbox. […]

Encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed the record 45 million photos and videos, according to the National Center for Missing and Exploited Children.

In a Twitter thread, Facebook’s former security chief, Alex Stamos, stood up for his old colleagues here: “I’m glad the NY Times is talking to the incredible people who work on child safety every day,” he wrote. “One point they seem to be a bit confused about: the companies that report the most [child sexual abuse material] are not the worst actors, but the best.” And indeed, if you talk to NCMEC and other organizations who work on this issue, they’ll tell you that they see tech platforms as essential partners in fighting child predators.

But what if tech platforms weren’t such good partners? And what if the reason was encryption?

It’s a tough debate, and it’s one that we’re about to walk straight into the middle of. The reason is Facebook’s plan to encrypt its core messaging apps — Messenger and WhatsApp — by default. The effect of the move on law enforcement’s ability to fight crime is unknown, but certain to be controversial.

I find the fears to be straightforward and rational. Today, thanks to Facebook’s efforts in particular, law enforcement detects millions of cases in which terrible images are being shared around the world. In thousands of cases a year, according to speakers at an encryption event I recently attended at Stanford, those detections lead to arrests of the perpetrators. But if you were to shield all those messages using encryption, the argument goes, you would essentially be turning a blind eye to a disturbing and growing problem.

To some critics, these circumstances offer cause to dramatically reduce speech on Facebook products. Damon Beres makes his case in OneZero:

It may simply be impossible to moderate the content that is exchanged between all of those people. But maybe there’s a simpler, blunter approach. We take for granted that you can send images, links, and videos on Messenger, but what if you… couldn’t? What if we’ve gotten the cost-benefit of being able to send a video on such a large, central platform wrong? Messenger could simply be text-based, as old messaging services were: Easier to moderate automatically, and without the risk of harmful videos or images being distributed. There’s an even stronger argument that the same calculus might be applied to Live videos on Facebook, which have previously allowed people to broadcast shooting rampages and suicides. True, some users would go elsewhere, the content would persist in some fashion, but it would not be supported by the dominant social network. There is a chance, at least, that its creation and distribution would be impeded in some way, especially if other companies followed suit.

I’m sure the idea of banning all link- and image-sharing in Messenger will find favor with, for example, authoritarian governments. Just imagine the nettlesome dissent that gets spread via links and images! And yet it also seems notable that not even Russia or China has taken such an extreme step — both countries have instead ramped up their dystopian surveillance operations in an effort to root out dissent at the source.

In a more measured (and members-only) post, Ben Thompson still takes a dim view of Facebook’s plans for the default encryption of its messaging apps:

Evil folks will always be able to figure out the most efficient way to be evil. The question, though, is how much friction do we want to introduce into the process? Do we want to make it the default that the most user-friendly way to discover your “community”, particularly if that community entails the sexual abuse of children, is by default encrypted? Or is it better that at least some modicum of effort — and thus some chance that perpetrators will either screw up or give up — be necessary?

To take this full circle, I find those 12 million Facebook reports to be something worth celebrating, and preserving. But, if Zuckerberg follows through with his “Privacy-Focused Vision for Social Networking”, the opposite will occur.

To state the obvious: the trade-offs involved in the debate over encryption vs. security are agonizing. It’s easy to defend encryption in the context of most private discussions between adults, whether the subject is dissent against the government or something more personal. It’s much harder to defend encryption when it’s being used to share images of child abuse, or to plan terrorist acts. And we lack easy methods for weighing the risks against the benefits. How much legitimate private speech does an encrypted messaging platform have to enable to offset the terrorism it might contribute to? How would you even design that test?

One way we can approach the problem is by thinking about it in terms of internet problems versus platform problems. As I wrote earlier this year:

Platform problems include the issues endemic to corporations that grow audiences of billions of users, apply a light layer of content moderation, and allow the most popular content to spread virally using algorithmic recommendations. Uploads of the attack that collect thousands of views before they can be removed are a platform problem. Rampant Islamophobia on Facebook is a platform problem. Incentives are a platform problem. Subreddits that let you watch people die were a platform problem, until Reddit axed them over the weekend.

Internet problems include the issues that stem from the existence of a free and open network connecting all of humanity together. The existence of forums that allow white supremacists to meet, recruit new believers, and coordinate terrorist attacks is an internet problem. The proliferation of free file-sharing sites that allow users to post copies of gruesome videos is an internet problem. The rush of some tabloids to publish their own clips of the shooting, or analyze the alleged killer’s manifesto, is an internet problem.

Viewed this way, I see the spread of child abuse imagery online as much more of an internet problem than it is a platform problem. It’s true that platforms provide an easy way to disseminate this content — but it’s also true that predators have many, many alternatives to Messenger, and actively use them. I’ll never forget the shudder of a person who used to work at the Tor Project when they told me that a meaningful percentage of the site’s users at any given time appeared to be actively engaged in sharing child abuse imagery.

And that’s to say nothing of the other big platforms where child abuse imagery lives. These files exist and are transmitted on iOS, Android, Mac, and Windows, to name four big ones. Should we compel those platforms to scan user screens periodically and check them against hash lists of known child abuse imagery? It’s possible to do that without involving the encryption debate at all — users’ screens aren’t encrypted. Does that make it a better idea, or a worse one?
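For what it’s worth, the detection that platforms run today follows a simple pattern: compute a fingerprint of each shared image and look it up in a database of fingerprints of known abuse imagery, such as the hash lists NCMEC helps maintain. Here is a minimal sketch of that pattern in Python — with the caveats that the blocklist entry below is a made-up placeholder, and that real systems use perceptual hashes (most famously Microsoft’s proprietary PhotoDNA) that survive resizing and re-encoding, where the exact cryptographic hash used here matches only byte-identical files:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-image fingerprints. Real deployments
# draw on vetted hash databases and use perceptual hashes that tolerate
# resizing and re-encoding; SHA-256 matches only byte-identical files.
KNOWN_HASHES = {
    "0" * 64,  # placeholder entry, not a real fingerprint
}

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of the file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_image(path: Path) -> bool:
    """True if the file exactly matches an entry on the blocklist."""
    return fingerprint(path) in KNOWN_HASHES
```

Today this matching runs on platform servers, which can see the images users share in the clear. The encryption debate is over what happens when the server can no longer see those bytes at all.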

Child abuse imagery is an internet problem because it’s fundamentally about how the friction involved in bad people meeting one another, and enacting awful schemes, has now dropped to zero. You could close every big tech company on earth and, assuming the TCP/IP protocol still existed, still find that child abuse imagery was spreading around the world.

In the meantime — happily — it’s an internet problem that tech platforms have worked actively to solve. I’m sure they could work harder and do more, but it’s notable that at a time when people hate platforms for almost everything, the people closest to the subject — the FBI and NCMEC, to name two — seem genuinely pleased with the partnerships they have. It might not be possible to ramp these efforts up, or even preserve them as is, in a world where encrypted communications are the default.

But it’s also worth trying. These images will continue to proliferate around the internet regardless of which platforms are currently dominant. To focus narrowly on the question of how they are transmitted lets a great many people — and companies — off the hook. A solution that preserves encryption while automatically checking shared images or links for connections to known child abuse imagery and reporting it to law enforcement might not be possible. But before we give up on the idea of private communication online, we ought to look for one.
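What might such a solution look like? Here is a minimal sketch of a “scan, then encrypt” client, under loud assumptions: the fingerprint check from the sketch above runs on the sender’s device before encryption, the blocklist is an empty placeholder, the reporting hook is hypothetical, and the whole thing is a thought experiment rather than any platform’s actual design:

```python
import hashlib
from pathlib import Path

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Placeholder blocklist of known-image fingerprints; see the earlier
# sketch for the perceptual-vs.-cryptographic hash caveat.
KNOWN_HASHES: set[str] = set()

def report_match(path: Path) -> None:
    # Hypothetical reporting hook; a real client might file a report
    # with NCMEC's CyberTipline rather than print.
    print(f"known-image match: {path.name}")

def send_image(path: Path, key: bytes) -> bytes | None:
    """Check the image on the sender's device, then encrypt it for transmission."""
    data = path.read_bytes()
    if hashlib.sha256(data).hexdigest() in KNOWN_HASHES:
        report_match(path)
        return None  # refuse to transmit the match
    return Fernet(key).encrypt(data)  # only ciphertext leaves the device

# Usage:
#   key = Fernet.generate_key()
#   ciphertext = send_image(Path("photo.jpg"), key)
```

The design choice that matters here is where the check runs: on the device, before encryption, so the platform never sees the plaintext of messages that don’t match. Whether that arrangement counts as preserving encryption at all is itself part of the debate.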

The Ratio

Today in news that could affect public perception of the tech platforms.

Trending up: YouTube updated its filtering system for comments to make it easier for creators to find questions and sort comments by subscriber count. The moves could make it easier for creators to find the non-awful comments on their posts. (Julia Alexander / The Verge)

Trending sideways: A new app claims it can identify venture capitalists using facial recognition. The technology is banned in San Francisco and Oakland for use by government agencies and law enforcement, but it’s still legal in the private sector. (Zoe Schiffer / The Verge)

Trending down: The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit. The brigade uses Twitter, Instagram and Facebook, as well as data analysis and audience research, to wage “information warfare,” raising questions about whether the executive’s role at Twitter presents a conflict of interest. (Ian Cobain / Middle East Eye)

Trending down: Nextdoor, a website for neighborhood news, has become a hotbed of anti-homelessness content. People often vent about homelessness and encampments on the site, but individuals experiencing homelessness can’t even access it, because it requires a verified address. (Rick Paulas / OneZero)

Governing

Trump’s reelection campaign launched an anti-impeachment ad blitz on Facebook, spending as much as $1.4 million. The ads included misleading information about four congresswomen of color who Trump refers to as the “socialist squad.” Facebook said none of the ads violated its policies, according to Isaac Stanley-Becker and Tony Romm at The Washington Post:

In total, the Trump campaign and its backers spent between $346,700 and $1,430,182 on more than 2,000 ads for its Facebook page from Monday to midday Friday, according to data analyzed by Laura Edelson, a researcher at New York University’s Tandon School of Engineering. She obtained the data through Facebook’s public ad archive, which reports all of its data in ranges, not precise figures. Those ads had been viewed between 13.3 million and 25.3 million times, the NYU analysis found.

On Tuesday and Wednesday alone, the campaign shelled out about $500,000 on Facebook ads, according to figures tallied by ACRONYM, a digital outfit focused on liberal causes. On Wednesday alone it spent about $350,000, an amount it typically spends in a week.

The online offensive offered a window into Trump’s bare-knuckle approach to the coming impeachment battles, as he took the showdown to his favored terrain: the Internet. Already, campaign officials say they have filled their coffers with contributions: Eric Trump, the president’s second son and the executive vice president of the Trump Organization, said Thursday that the campaign had raised $8.5 million in the previous 24 hours.

Former Facebook exec Alex Stamos corrected early reports on the CLOUD Act — a law that gives overseas courts access to user data held by American tech companies — after Bloomberg said an agreement under the law, which is about to be signed between the US and UK, would force American tech companies to share encrypted messages with British police to aid in criminal investigations. In fact, the treaty has nothing to do with encryption, he explained. (Alex Stamos / Twitter)

Lawmakers can’t agree on a federal online privacy law, so tech companies will likely have to comply with the California Consumer Privacy Act once it goes into effect January 1st. The tech industry had been hoping a federal law would pass and spare them from having to deal with a potential patchwork of privacy laws across 50 states. (Nandita Bose, Diane Bartz / Reuters)

Facebook’s Sheryl Sandberg could testify before Congress about the company’s cryptocurrency Libra. David Marcus testified about Libra in July, but apparently failed to satisfy lawmakers, who have been increasingly vocal about their concerns. (Christopher Stern / The Information)

A director at Stanford’s Center for Internet and Society documented what happened when her husband reported her Facebook post of a nude sculpture model for violating the company’s guidelines. She used it as an opportunity to examine the limits of content moderation at scale. (Daphne Keller / Boing Boing)

Trump referenced a Russian-promoted 4chan conspiracy theory about cybersecurity firm CrowdStrike during his call with Ukrainian president Volodymyr Zelensky. (Ryan Broderick / BuzzFeed)

Elizabeth Warren announced a plan to reinstate the Office of Technology Assessment, which was created to help Congress understand and legislate issues involving science and technology. The goal is to make lawmakers more knowledgeable about technical issues — and less susceptible to the influence of lobbyists. (Makena Kelly / The Verge)

Antitrust investigators in the House scrutinized Google’s plans to adopt a new internet protocol, DNS over HTTPS, which encrypts DNS lookups and makes it harder for hackers to snoop on people’s browsing data. Lawmakers are concerned the new standard would give Google a competitive advantage. (John D. McKinnon and Robert McMillan / The Wall Street Journal)

Former Tumblr executive Mark Coatney argued that instead of breaking up big tech companies, we should create non-profit alternatives — a PBS of sorts, for social media. (Mark Coatney / The New York Times)

The State Department announced further sanctions against Russian nationals who ran and financed Russia’s infamous Internet Research Agency, which sought to influence the 2016 election. (Graham Brookie / Twitter)

How the US military hacked into ISIS computers to cripple the group’s media operations. (Dina Temple-Raston / NPR)

Industry

Facebook has been slow to regulate sponsored content on Instagram — and that could threaten the app’s originality and appeal. In the first quarter of 2019 alone, advertisers in the US and Canada spent an estimated $373 million on influencer marketing — about $265 million of which was on Instagram. That’s up 62% from the same period a year earlier, report Georgia Wells and Jeff Horwitz in The Wall Street Journal:

Done well, the marketing appears more authentic than glossy media campaigns on TV or in magazines. Done poorly, it comes across as mass-produced, tarnishing the platform’s appeal and driving away users.

Whether to tightly regulate the Instagram market of paid endorsements was “one of the toughest questions we had to face,” Instagram co-founder Kevin Systrom said at a conference in March, discussing his decision to leave the company last year.

“I guess we made the decision that we were going to have the wait-and-see approach,” he said. “The thing I’m bummed about is Instagram feels less authentic over time because of it.”

Instagram launched a new branded account called @creators, to encourage aspiring influencers to keep making content. As the piece notes, Instagram still offers creators no direct way to monetize their content, even as analysts estimate that it would be worth $200 billion as a standalone company. (Ashley Carman / The Verge)

Instagram is also testing out a new feature that allows brands to set up reminder notifications when new merchandise drops. It’s one more way that the app is transforming into a shopping mall for some users. (Ashley Carman / The Verge)

A fitness influencer and bikini model was sentenced to five years in prison after she created 369 Instagram accounts to troll other bodybuilders. It’s one thing to create, say, 350 troll accounts dedicated to attacking your mortal enemies. But 369? For shame, madam. (Deja Monet / Hollywood Unlocked)

Facebook confirmed the death of employee Qin Chen by suicide. Chen died September 19th, leading to a protest outside headquarters last week from people demanding more information about the software engineer’s death. His family has hired a law firm to investigate the circumstances surrounding his passing. (Salvador Rodriguez / CNBC)

Ad tech companies spend about $235 million annually on sites known to publish misinformation, a new study shows. Researchers found that Google served ads on about 70 percent of the websites sampled. (Cristina Tardáguila, Daniel Funke and Susan Benkelman / Poynter)

LinkedIn CEO Jeff Weiner announced a new initiative to help close the “network gap” — the advantage some people have over others based on who they know. Previously, users were encouraged to limit their connections mostly to people they’d met. But now engagement is tanking, and so LinkedIn is dressing up a growth hack in the language of social progress. (Sara Fischer / Axios)

TikTok’s parent company, ByteDance, had a better-than-expected first half of the year, booking revenue of $7 billion to $8.4 billion. (Yingzhi Yang, Julie Zhu / Reuters)

The 26-year-old man accused of killing 10 people with a van in Toronto last year told police he identifies as an incel and was radicalized on Reddit and 4chan. (Matt Novak / Gizmodo)

And finally…

Musk broke the law with anti-union tweet, judge rules

In the long history of the Never Tweet movement, few major CEOs have made such a solid case for the idea as Elon Musk. After getting in trouble with the Securities and Exchange Commission for saying he planned to take Tesla private, Musk was forced out as Tesla chairman, paid a huge fine, and agreed to vet his tweets with a lawyer before posting them.

Well anyway, he tweeted again:

Tesla and its CEO, Elon Musk, violated federal labor laws when it tried to hamper union organizing at its Fremont factory, a federal administrative law judge in California ruled on Friday.

Among other things, Tesla security guards repeatedly ordered union organizers to stop leafletting in Tesla’s parking lots and fired one union organizer for allegedly lying during a company investigation. Elon Musk was also dinged for a tweet that suggested employees would no longer receive stock options if they voted to form a union.

Never tweet.

Talk to us

Send us tips, comments, questions, and nothing else for today, thanks: casey@theverge.com and zoe@theverge.com.

 
