Memo Responds to Whistleblower’s Accusation That Facebook Contributed to January 6 Riot
“We will continue to be subject to scrutiny – some fair and some unfair,” he said in the note. “But we must also continue to hold our heads up high.”
Here is Mr. Clegg’s memo in full:
OUR POSITION ON POLARIZATION AND ELECTIONS
You will have seen the series of articles about us published in the Wall Street Journal in recent days, and the public interest they have generated. This Sunday night, the ex-employee who leaked internal company material to the Journal will appear in a 60 Minutes segment on CBS. We understand the segment is likely to claim that we contribute to polarization in the United States, and to suggest that the extraordinary steps we took for the 2020 election were relaxed too soon and contributed to the horrific events of January 6 at the Capitol.
I know some of you – especially those of you in the US – are going to get questions from friends and family about these things, so I wanted to take a moment as we head into the weekend to provide what I hope is useful context on our work in these crucial areas.
Facebook and polarization
People are naturally worried about divisions in society and are looking for answers and ways to solve these problems. Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out. So it’s only natural for people to ask whether it is part of the problem. But the idea that Facebook is the main cause of polarization is not supported by the facts – as Chris and Pratiti set out in their note on the matter earlier this year.
The rise of polarization has been the subject of much serious academic research in recent years. In truth, there is not much consensus. But the available evidence just doesn’t support the idea that Facebook, or social media more generally, is the primary cause of polarization.
The increase in political polarization in the United States predates social media by decades. If it were true that Facebook is the main cause of polarization, we would expect to see it increase wherever Facebook is popular. This is not the case. In fact, polarization has decreased in a number of countries with high social media use at the same time as it has increased in the United States.
Specifically, we expect the reports to suggest that a change to Facebook’s News Feed ranking algorithm was responsible for a rise in polarizing content on the platform. In January 2018, we made ranking changes to promote Meaningful Social Interactions (MSI) – so that you see more content from friends, family, and groups you are part of in your News Feed. This change was strongly driven by internal and external research showing that meaningful engagement with friends and family on our platform is better for people’s well-being, and we have refined and improved it over time, as we do with all our ranking metrics. Of course, everyone has a rogue uncle or an old school classmate who holds strong or extreme views we disagree with – that’s life – and the change meant you are more likely to come across their posts too. Even so, we have developed industry-leading tools to remove hateful content and reduce the distribution of problematic content. As a result, the prevalence of hate speech on our platform is now down to about 0.05%.
But the simple fact remains that changes to algorithmic ranking systems on a social media platform cannot account for broader societal polarization. Indeed, polarizing content and disinformation is also present on platforms that have no algorithmic ranking, including private messaging apps like iMessage and WhatsApp.
Elections and Democracy
There is perhaps no other topic we have spoken about more as a company than our work to radically change the way we approach elections. Beginning in 2017, we started building new defenses, bringing in new expertise and strengthening our policies to prevent interference. Today we have more than 40,000 people across the company working on safety and security.
Since 2017, we have disrupted and removed more than 150 covert influence operations, including ahead of major democratic elections. In 2020 alone, we removed more than 5 billion fake accounts, identifying almost all of them before anyone reported them to us. And, from March through Election Day, we removed more than 265,000 pieces of Facebook and Instagram content in the United States for violating our voter interference policies.
Given the extraordinary circumstances of holding a contested election in the middle of a pandemic, we implemented so-called ‘break glass’ measures – and spoke publicly about them – before and after Election Day to respond to the specific and unusual signals we were seeing on our platform and to keep potentially violating content from spreading before our content reviewers could assess it against our policies.
These measures were not without trade-offs – they are blunt instruments designed for specific crisis scenarios. It’s like shutting down an entire city’s roads and highways in response to a temporary threat that may be lurking somewhere in a particular neighborhood. In implementing them, we know we impacted significant amounts of content that did not violate our rules, in order to prioritize people’s safety during a period of extreme uncertainty. For example, we limited the distribution of live video that our systems predicted might relate to the election. That was an extreme step that helped prevent potentially violating content from going viral, but it also impacted a lot of entirely normal and reasonable content, including some that had nothing to do with the election. We wouldn’t take that kind of crude, catch-all action under normal circumstances, but these weren’t normal circumstances.
We only rolled back these emergency measures – based on careful, data-driven analysis – when we saw a return to more normal conditions. We left some of them in place for a longer period, through February of this year, and others, like not recommending civic, political or new groups, we have decided to keep permanently.
Fighting hate groups and other dangerous organizations
I want to be absolutely clear: we work to limit, not expand, hate speech, and we have clear policies banning content that incites violence. We do not profit from polarization – quite the opposite. We do not allow dangerous organizations, including militarized social movements or violence-inducing conspiracy networks, to organize on our platforms. And we remove content that praises or supports hate groups, terrorist organizations and criminal groups.
We have been more aggressive than any other internet company in tackling harmful content, including content that sought to delegitimize the election. But our work to crack down on these hate groups goes back years. We removed tens of thousands of QAnon pages, groups, and accounts from our apps, took down the original #StopTheSteal group, and removed references to Stop the Steal in the run-up to the inauguration. In 2020 alone, we removed more than 30 million pieces of content that violated our terrorism policies and more than 19 million pieces of content that violated our organized-hate policies. We designated the Proud Boys as a hate organization in 2018, and we continue to remove praise, support and representation of them. Between August of last year and January 12 of this year, we identified nearly 900 militia organizations under our Dangerous Organizations and Individuals policy and removed thousands of pages, groups, events, Facebook profiles and Instagram accounts associated with these groups.
This work will never be finished. There will always be new threats and new problems to solve, in the United States and around the world. This is why we remain vigilant and alert – and we always should.
This is also why the suggestion sometimes made that the violent insurrection on January 6 would not have happened without social media is so misleading. To be clear, responsibility for those events rests squarely with the perpetrators of the violence and with those in politics and elsewhere who actively encouraged them. Mature democracies in which social media use is widespread hold elections all the time – for instance Germany’s election last week – without the disfiguring presence of violence. We actively share with law enforcement any material we can find on our services related to these traumatic events. But reducing the complex reasons for polarization in America – or the insurrection in particular – to a technological explanation is woefully simplistic.
We will continue to be subject to scrutiny – some fair and some unfair. We will continue to be asked difficult questions. And many people will continue to be skeptical of our motives. That is what comes with being part of a company that has a significant impact in the world. We must be humble enough to accept criticism when it is fair, and to make changes where they are justified. We aren’t perfect and we don’t have all the answers. That’s why we do the sort of research that was the subject of these stories in the first place. And we’ll continue to look for ways to respond to the feedback we hear from our users, including testing ways to make sure political content doesn’t take over their feeds.
But we must also continue to hold our heads up high. You and your teams are doing an incredible job. Our tools and products have an extremely positive impact on the world and people’s lives. And you have every reason to be proud of this work.