Last August, Climate Feedback, a global network of academics that sorts fact from fiction, tasked five climate scientists with fact-checking an article on climate change. Under the lens was a piece titled “The Great Failure of the Climatic Model” that appeared in the Washington Examiner. It was gaining popularity on Facebook, which had outsourced the fact-checking.
Most reviewers at Climate Feedback, which is one of many organizations that Facebook turns to for fact-checking, deemed the article biased, misleading and low on credibility. You can read a complete analysis of the article on the Climate Feedback website.
This assessment went to Facebook, which promptly labeled the article as false, the effect of which was to make readers aware that fact-checkers weren’t impressed. All good so far: readers consumed the content in full knowledge that it could contain dubious data. But here is where the script changed, as Facebook re-labeled the article as an “Opinion Column”.
The implication was twofold: (a) that climate change was now reduced to a matter of opinion on Facebook, and (b) that opinions do not need data or facts.
So, how does this Fact Checking mechanism work?
For starters, Facebook uses signals such as user flags to identify potential misinformation. Then it works with fact-checking partners, though we aren’t sure whether it takes up the task when a single user flags an article or kickstarts the process only after a predefined number do so.
Be that as it may, these fact-checkers are certified by the International Fact-Checking Network (IFCN), a body founded in 2015 to fact-check fact-checkers. Someone has to determine whether the fact-checkers are capable of checking facts, right? We hear that the IFCN has a rigorous process for doing so, and for now let’s just assume that it is doing its job well.
Once these certified fact-checkers review a piece of content, they submit their assessment, and the content is marked “True”, “False”, “Partly False” or “False Headline”, among other labels. The label then surfaces on Facebook, which limits distribution of content carrying the latter three; repeat offenders can see a fall in overall viewership and end up losing monetization options.
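The flow described above can be sketched in a few lines of code. To be clear, this is a purely illustrative model under stated assumptions: the labels come from the article, but the function names, data structures, and the strike threshold are hypothetical and do not reflect Facebook’s actual system.

```python
# Illustrative sketch of the fact-check labeling flow described above.
# Labels are from the article; everything else (names, threshold) is hypothetical.

# Labels that trigger reduced distribution, per the article.
DEMOTED_LABELS = {"False", "Partly False", "False Headline"}

# Hypothetical strike count after which a publisher loses monetization.
STRIKE_LIMIT = 3

def apply_fact_check(post, label, publisher_strikes):
    """Attach a fact-checker's label to a post and decide its treatment."""
    post["label"] = label
    post["reduced_distribution"] = label in DEMOTED_LABELS
    if label in DEMOTED_LABELS:
        # Count a strike against the publisher for demoted content.
        publisher_strikes[post["publisher"]] = (
            publisher_strikes.get(post["publisher"], 0) + 1
        )
    # Repeat offenders lose monetization once they cross the threshold.
    post["demonetized"] = (
        publisher_strikes.get(post["publisher"], 0) >= STRIKE_LIMIT
    )
    return post

# Usage sketch: one demoted post earns the publisher a strike.
strikes = {}
post = apply_fact_check(
    {"publisher": "example-outlet", "title": "Some article"},
    "Partly False",
    strikes,
)
print(post["reduced_distribution"])  # True: distribution is limited
```

The point of the sketch is the binary nature of the mechanism: a post either carries a label that limits its reach or it does not, a property the article returns to later.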
Now, let’s come to Facebook’s subversion capabilities
The article referenced above was written by CK Michaels and Caleb Stewart Rossiter, executive directors at the CO2 Coalition, a climate-change-denial non-profit that, incidentally, was also founded in 2015. In other words, it advocates that climate change is all bunkum, which isn’t surprising given that it is funded largely by the fossil fuel industry.
So, when Facebook downgraded their post, the authors wrote in, claiming that the fact-checkers were biased. That is when Facebook decided to re-label the post as an “Opinion” article. This policy shift conveniently circumvented what Mark Zuckerberg had promised after facing the wrath of US senators and others over misinformation on the platform.
So, going forward, Facebook could safely argue that since a piece of content is an opinion, it needn’t be fact-checked. Of course, that still raises an important question: “By taking this step, isn’t Facebook actually over-ruling its own fact-checkers?” Imagine the third umpire over-ruling all decisions made by the on-field umpires!
So, what does the future hold for data-based facts?
At this moment, it appears as though Zuckerberg has made a sucker of his critics. Fact-checking had come on board after Facebook faced criticism over false information during the last Presidential election in the United States. Today, several major spenders such as Coke, Unilever, Starbucks, Microsoft and Verizon have pulled their ads off Facebook to protest misinformation and hate speech. But Zuckerberg is adamant that they’ll be back soon; he just needs to wait it out.
Of course, there is a logistical problem: given the high volume of content, Facebook can at best fact-check a minuscule fraction of it. Posting fake news is quick and dirty because most of a post’s impact happens immediately after publication, before it is reviewed and flagged as false or partly false. By then, the damage is already done.
Also, the entire process works as a binary. Pieces that have been fact-checked carry a label while the majority do not, which leads users to assume that unlabeled content is fine. Researchers, in a study published by the Institute for Operations Research and the Management Sciences (INFORMS), call this “the implied truth effect”.
Which means that Facebook has effectively given fact-checking an unannounced burial, and organizations such as the CO2 Coalition can continue posting “Opinions” with impunity.
Maybe even the users do not care about data. Only academics do.