(Natural News) Big Tech’s major changes seem to have been made with a potential presidency of Democratic nominee Joe Biden in mind. Four major firms have made a grand total of 26 policy changes since the start of 2020 aimed explicitly at election-related content.
(Article by Corinne Weaver and Heather Moon republished from NewsBusters.org)
“I doubt I would be here if it weren’t for social media, to be honest with you,” said President Donald Trump to Fox Business host Maria Bartiromo in 2017. CNN commentator Van Jones dubbed Trump “the social media president.”
Major news outlets have also claimed over the last four years that Big Tech companies like Facebook and Twitter helped President Donald Trump win the 2016 election. Social media companies have set out to prevent that.
Out of the 100 new policies introduced in 2020, 26 were touted as election-related. These include Twitter’s attempt to disable simple retweeting of content; Facebook’s plan to shut down political ads after the election; and YouTube’s removal of content that could potentially reduce voter participation.
The consequence has been more censorship. The New York Post has been locked out of its Twitter account since Oct. 14 for releasing a story about Hunter Biden’s reported emails. Trump and his campaign have been censored 65 times, while Biden and his campaign have received no such discipline. Republicans have taken notice: a Senate hearing will convene on Oct. 28 to discuss whether Section 230 of the Communications Decency Act has enabled this kind of censorship. On Oct. 26, Twitter pinned two warnings to the top of each user’s newsfeed: one said that election results may be delayed, and the other said that there was misinformation about mail-in voting.
“[T]he timing cannot be pure coincidence,” writes Senior Writer Will Oremus at tech magazine OneZero. He explained that some of the changes Facebook has made over the past year “position Facebook to better defend itself against a potential president and party that has raised the specter of regulations that could seriously damage its business.”
The Media Research Center analyzed all of the policy changes announced from Google, YouTube, Twitter, and Facebook since Jan. 1, 2020, up until Oct. 22, 2020. Some of these changes included restrictions on the QAnon movement, expanding the definition of hateful conduct for Twitter to include age and disease, and enforcements against “manipulated media.”
“The fact that I have such power in terms of numbers with Facebook, Twitter, Instagram, etc,” Trump said the day after his election in 2016, “I think it helped me win all of these races where they’re spending much more money than I spent.”
In response, Twitter made several major changes to how its platform worked during the next presidential election year. Twitter updated its Hacked Materials policy on Oct. 15, after the New York Post controversy, in which Twitter blocked two stories from the Post and suspended accounts that had retweeted them. In the meantime, several accounts had been suspended for sharing the story, including White House Press Secretary Kayleigh McEnany, the Trump campaign, NewsBusters Managing Editor Curtis Houck and even the House Judiciary Committee, a part of the United States government.
Twitter’s Legal, Policy and Trust & Safety Lead Vijaya Gadde wrote, “We believe that labeling Tweets and empowering people to assess content for themselves better serves the public interest and public conversation. The Hacked Material Policy is being updated to reflect these new enforcement capabilities.” So rather than block the tweets altogether, Twitter decided to label them instead. However, as of Oct. 26, the Post is still blocked from the platform.
In addition, the platform has announced plans for policing talk about the 2020 election results. In an Oct. 9 update, Gadde wrote, “[W]e will label Tweets that falsely claim a win for any candidate and will remove Tweets that encourage violence or call for people to interfere with election results or the smooth operation of polling places.”
This policy prevents even candidates from saying that they won an election, instead relying on someone else to “authoritatively” call the results. But no details were given about what kind of authority Twitter will rely on. Probably not the Post.
“It would be silly for us not to change Twitter,” said Twitter CEO Jack Dorsey to The New York Times in August 2020.
Twitter announced on Sept. 10 that it will “label or remove false or misleading information intended to undermine public confidence in an election or other civic process.” However, the zealous content removal process takes down even accurate information about the election.
Not to mention the policies seem to interfere with one candidate’s ability to speak freely. Trump was censored 64 times on Twitter.
In 2019, Twitter announced that it would ban all political ads. CEO Jack Dorsey explained, “A political message earns reach when people decide to follow an account or retweet. Paying for reach removes that decision, forcing highly optimized and targeted political messages on people. We believe this decision should not be compromised by money.” However, the company later caved to criticism from Sen. Elizabeth Warren (D-MA) and allowed some political ads to run on the platform.
“Facebook, Twitter and Google played a far deeper role in Donald Trump’s presidential campaign than has previously been disclosed,” wrote Politico in 2017. Facebook’s virtual and augmented reality division head, Andrew Bosworth, said that Trump “ran the single best digital ad campaign I’ve ever seen from any advertiser.”
The policy changes made by Facebook since the start of 2020 seem geared at making sure that doesn’t happen again.
Facebook CEO Mark Zuckerberg made the announcement in October 2019 that the company was preparing to “continue to stand for free expression.” However, this promise seems to have gone out the window.
Facebook bragged about how much content it removed for “violating our voter interference policies.” Overall, the company has made 24 policy changes in the past year, with five of those changes geared at election content. These changes were made primarily at the request of the former American Civil Liberties Union director Laura Murphy. In a post to Facebook’s blog on December 18, COO Sheryl Sandberg wrote that Facebook talked to over 90 “civil rights organizations” and asked Murphy to “guide the audit.” These organizations were even invited into the election war room.
In a later Facebook post, Zuckerberg wrote, “Many of the changes we’re announcing today come directly from feedback from the civil rights community and reflect months of work with our civil rights auditors.” These changes included attaching labels to any posts that discussed voting, more censorship of posts that could be “potentially dangerous,” and labeling content from political leaders like Trump that is “newsworthy” but still violates policy.
In an Oct. 7 blog post, the company revealed that it had removed 120,000 pieces of content from Facebook and its subsidiary Instagram for allegedly violating election policies. A new policy was announced: “We also won’t allow ads with content that seeks to delegitimize the outcome of an election.” Any posts with public concerns about voter fraud would be taken down.
The platform also has made plans for what will happen after the election.
“[I]f a candidate or party declares premature victory before a race is called by major media outlets, we will add more specific information in the notifications that counting is still in progress and no winner has been determined,” said the Oct. 7 post. Facebook head of global affairs Nick Clegg elaborated further that there would be “pretty exceptional measures to significantly restrict the circulation of content on our platform.”
In preparation, Facebook declared that political ads would be shut down after the election. “The social media giant also said it would remove calls for people to watch the polls when those posts use militaristic or intimidating language. Executives said the policy applies to anyone, including President Trump and other officials,” said The Washington Post.
Facebook also announced the removal of “QAnon and Militarized Social Movements” in August, updated later to reflect more action taken on QAnon accounts. However, the company chooses what it enforces, having left several Antifa and black militia pages up on the platform.
The company furthermore put in place a Climate Science Information Center “to connect people with science-based information” on Sept. 14. Part of this initiative also included “tackling climate misinformation.” This change came after several Democratic Party politicians complained that there were “reports that Facebook is exempting climate change misinformation from fact-checking.”
Google and YouTube
A Project Veritas leaked video showed Google Responsible Innovation Head Jen Gennai discussing how Google played a role in 2016, and the role she foresaw the company playing in 2020. “We all got screwed over in 2016,” she said. “The people got screwed over, the news media got screwed over so we’re rapidly been like, what happened there and how do we prevent it from happening again?”
Google and its subsidiary YouTube rolled out a collective 55 policy changes this year. Ten of them were related to the 2020 election. “[W]e’re continuing to raise up authoritative voices and reduce harmful misinformation,” wrote YouTube in a Sept. 24 blog post. The company stated that it would continue to remove misleading information and demonetize content containing “claims that could significantly undermine participation or trust in an electoral or democratic process.”
Google was making preparations for the election as far back as 2019, with a ban on microtargeted ads. “It’s against our policies for any advertiser to make a false claim—whether it’s a claim about the price of a chair or a claim that you can vote by text message, that election day is postponed, or that a candidate has died,” the company added.
YouTube admitted in September that its automated removal system was too zealous. In response, the company was bringing back “trained human evaluators.” This announcement came after the platform stated that it had removed “the most videos we’ve ever removed in a single quarter.”
Read more at: NewsBusters.org