Commentary by Kalev Leetaru originally published by RealClearPolitics.com
Last week, the New York Times finally acknowledged that the New York Post’s October 2020 story about Hunter Biden’s laptop was real. When the story broke in the Post in the days leading up to the 2020 presidential election, it was widely dismissed as “Russian disinformation,” with social media platforms moving swiftly to restrict sharing of it. The restrictions significantly reduced visibility of the story, and a year and a half later, the platforms have still not offered a full accounting of how they decided to suppress it – thus reinforcing the need for social platform transparency.
Just two weeks before the presidential election in 2020, the New York Post announced a journalistic bombshell: A laptop belonging to the Democratic candidate’s son, containing a wealth of compromising emails, letters, images, and videos, had been abandoned at a computer repair store. Of particular relevance to the election were emails allegedly suggesting questionable business arrangements with foreign nations and allegations that Joe Biden, who was vice president when the messages were written, had facilitated access.
Rather than rush to report on the story as they did for John Podesta’s emails and the infamous Steele dossier, mainstream media outlets largely ignored the Biden story. When they did cover it, their treatment usually resembled that of Politico, which titled an article: “Hunter Biden story is Russian disinfo, dozens of former intel officials say.” An article in Poynter even cited the lack of mainstream media coverage as an indicator of the report’s lack of credibility.
Poynter also emphasized that social media platforms were moving to restrict sharing of the story on their own, without waiting for a verdict from fact checkers or news outlets. In Poynter’s view, these actions carried unique weight: “When these organizations apply their misinformation rules, that should be a red flag alerting you to give something another look.”
This raises perhaps the most important question: How did social media platforms decide to suppress the Post story until after the election? What evidence did they use to determine that the story was likely false? At the time, Facebook cited unspecified “signals” that the story was false, arguing that the need to “reduce the spread of misinformation” meant it could not wait for fact checkers to vet the story. The company has never explained what those signals were. Twitter cited an ever-changing litany of policies in explaining its ban, suggesting a company that decided first to ban the article and then hunted for a rationale to justify its action.
Asked whether the Biden campaign had reached out to them prior to the restrictions, the social media firms remained silent. Asked whether, in light of the Times’ recent acknowledgment of the story’s accuracy, they would change their criteria for restricting the sharing of political stories, the social media firms remained silent. Nor did they respond when asked whether they felt any responsibility for suppressing what has now been verified as accurate reporting. We will likely never know what “signals” Facebook relied on or who at Twitter decided the article shouldn’t be shared.
Across the world, social media platforms are ramping up their interventions in the political system. Last year, during large-scale protests in India, the Modi government pressured Twitter into blocking and hiding hundreds of accounts critical of the government. In Pakistan in 2018, Facebook instructed its moderators to apply special scrutiny to one political party, while treating an opposing party as “benign.” In Myanmar, Facebook’s guidelines permitted praise of an extremist organization that used the platform to incite violence against Muslims – even as it publicly touted a ban on such praise. Facebook’s list of banned “hate” groups includes political parties holding seats in both the European Union parliament and the governments of major European countries.
All of this happens in the shadows. The social media platforms that increasingly act as our digital public squares invisibly curate and cultivate the political discourse and information environment to promote the views with which they agree and silence those that they oppose. As the New York Post saga reminds us, this curation extends even to journalism with relevance to an American presidential election. Unless they are held accountable, the platforms will never be forced to explain their actions, learn from their mistakes, or improve their practices.
As the 2022 midterms approach in an increasingly divided nation and dangerous world, these challenges are ever more pressing. Change is needed. It is time to amend Section 230 of the Communications Decency Act to force transparency on the social media platforms that now wield so much influence – and control – over our public discourse.
RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.