Fact Check: Did Social Media Really Prevent Election Misinformation?
Before Election Day, social media giants Facebook, Twitter, and YouTube promised to curb election misinformation.
This included unsubstantiated claims of fraud and premature declarations of victory by candidates.
These social media platforms delivered what they promised, with a few slips.
However, critics said the platforms did not fully address the problem during this year's presidential election.
Shannon McGregor, an assistant professor of journalism and media at the University of North Carolina, said the public saw what was expected, which was not enough, particularly from Facebook.
Facebook imposed measures to curb misinformation, such as letting users turn off all political ads on both the social network's main site and its photo-sharing service Instagram ahead of the election.
CNET also reported that Facebook would label posts from politicians who declare a premature victory.
Nick Clegg, Facebook vice president for global affairs, said that Facebook made use of artificial intelligence to delete large numbers of posts and fake accounts, according to a report from The Verge.
Clegg noted that the platform partnered with 70 media outlets, five in France, to verify information.
However, despite these measures, misinformation about Democratic nominees Joe Biden and Kamala Harris has slipped through.
Several false memes circulated about Harris' position on abortion.
One meme showed a crying newborn next to images of Biden and Harris with a Spanish caption saying, "these candidates support an abortion 5 minutes before birth and if it survives the abortion, they approve of killing the baby."
These memes, which targeted Latino voters, spread primarily on Facebook and its messaging app WhatsApp.
The meme was spread across Facebook pages and public groups, according to an NBC News report.
Did Facebook stop misinformation? Not entirely.
Like Facebook, Twitter banned political ads, which was considered one of the toughest moves taken by the company.
The social media platform also vowed to delete tweets that violate its policies.
It can also add labels to tweets with misleading information, including those from politicians announcing an early victory.
Twitter has proven its flagging measure against U.S. President Donald Trump's tweets.
Earlier, Trump was flagged by Twitter after violating its rules against spreading coronavirus misinformation.
The controversial tweet claimed that he was immune and could no longer spread the virus, according to The Verge.
Twitter also labelled Trump's post, in which he claimed that his political opponents were trying to steal the election.
Twitter labeled the tweet as potentially misleading and hid it from immediate view, according to a CNET report.
Twitter was somewhat more proactive than Facebook.
However, social media expert Jennifer Grygiel said its measures were not that effective.
Grygiel explained that this is because tweets from major figures can get almost instant engagement.
"When a tweet hits the wire, essentially, it goes public. It already brings this full force of impact of market reaction," Grygiel was quoted in a report.
Like Facebook and Twitter, Google banned political ads temporarily.
Its video platform, YouTube, also banned some videos promoting false conspiracy theories.
However, YouTube said a video claiming Trump won the election did not violate any of its policies.
YouTube allowed it to stay on the platform even though vote counting was not yet finished, according to The Verge.
Reports said that, compared to the other two platforms, YouTube handled the election very differently.