Facebook Bans Donald Trump for Two Years, but the Discussion on Regulating Free Speech on the Internet is Just Beginning

Photo Credits: Joshua Hoehne (Unsplash.com)

Nikita Munjal is the IPilogue Content Manager, an IP Innovation Clinic Fellow, and a third-year JD/MBA Candidate at Osgoode Hall Law School.


In January 2021, then-President of the United States Donald Trump was banned from Facebook for statements he made in the immediate aftermath of the violent insurrection at the U.S. Capitol. Trump’s comments were seemingly the last straw for the social media giant, which had repeatedly cited its commitment to upholding free speech when defending its stance on Trump’s use of inflammatory language on the platform.

However, Facebook’s decision in January was not final. When accounts are banned or posts are removed from Facebook or its subsidiary, Instagram, users can appeal the decision to a quasi-judicial body, the Facebook Oversight Board (FOB). Alternatively, Facebook can refer cases to the FOB to determine whether its decision was fair, as it did here.

FOB’s Decision

In early May 2021, the FOB ruled that Facebook was justified in suspending Trump’s accounts. However, it stated that it was “not appropriate” for Facebook to impose an indefinite suspension, which contravened Facebook’s standard operating procedures. Facebook’s penalties are usually limited to “removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account” (at p. 1). The FOB gave Facebook six months to reexamine its arbitrary penalty and impose one proportionate to the gravity of the violation and the prospect of future harm.

Facebook’s Response

In June 2021, approximately a month after the FOB’s decision and well within the six-month window, Facebook released its response: Trump’s suspension from Facebook and Instagram will last two years, effective from the initial suspension date. At the end of that period, Facebook will reassess whether the risk to public safety has receded.

Unsurprisingly, this set of decisions has garnered mixed reactions. Some writers have called it a victory for Trump, who could return to the platform in time for a potential 2024 presidential run. Others have argued that it is a victory for Facebook, which can decide whether to continue the suspension or allow Trump back on its platforms based on the political landscape at the time, while hiding its rationale behind the risk posed to public safety.

Broader Implications for Free Speech on Social Media

Trump’s social media presence during his presidency intensified the debate around regulating and moderating content posted on social media. Specifically, questions have arisen as to whether corporations or governments are better positioned to regulate that content.

Some industry members, including FOB member and former Prime Minister of Denmark Helle Thorning-Schmidt, are calling for the FOB to monitor the industry. These proponents cite its funding, autonomy from Facebook, and diverse membership as reasons it could successfully regulate the space. However, not everyone agrees. For one, the FOB seems powerless to hold Facebook accountable for its role in the lead-up to the insurrection. This is not to suggest that Facebook is the only social media platform grappling with the balance between promoting free speech and preventing harm; however, its role cannot be overstated.

Critics argue that the FOB’s decisions serve as a distraction by keeping attention fixed on corporate self-oversight. Instead, the focus should be on passing legislation that curtails Big Tech’s business models and protects users from those companies’ voraciousness.

Currently, the Canadian federal government is preparing to unveil legislation regulating social media content. The legislation is expected to be modeled after Germany’s NetzDG law, which requires social media platforms to remove illegal content under tight deadlines or face severe fines.

Human rights scholars warn that following Germany’s precedent could be problematic for two reasons. First, it will not effectively deal with content that is “lawful but awful”: content that is legal but known to create real-world harm. Given the Charter of Rights and Freedoms’ broad protection of freedom of expression in Canada, it will be difficult for the government to curb the expression of harmful ideas in public spaces. Second, the legislation could set a bad example for countries that criminalize forms of expression protected under international human rights law. Laws that impose severe penalties on social media companies for failing to remove content deemed illegal under a nation’s laws could increase the criminalization of political dissenters and minority communities. To address these concerns, scholars suggest Canada adopt a multilateral approach, working with other rights-respecting democracies to prevent the internet from “splintering into a series of national networks.”

Ultimately, until the federal government unveils the legislation and holds consultations, it is difficult to predict its effectiveness. What is clear, however, is that online content requires regulation, whether by corporate entities, governments, or something in between.