How Social Media Sows Confusion

“Social media platforms were designed for engagement and therefore revenue. Unfortunately, the most engaging content is usually controversial and polarizing. This means that a platform’s algorithms often end up contributing to the problem,” said Ali Tehrani, writing for the Forbes Technology Council.

Social media gives universities and researchers alike a platform to share their expertise directly with the public. But with Facebook and Twitter under increasing pressure to review and remove offensive or inaccurate content, the more your research engages with timely hot-button issues, the more likely it is to trigger the algorithm and encounter the shadowy process of content moderation.

In the face of an ever-accumulating avalanche of content across an ever more divisive information landscape, perhaps Facebook should get some credit for even attempting the impossible task of moderating content in the first place. It’s certainly worth a peek behind the curtain to understand how the process actually works.

OK, Computer?

Along with user-submitted reports of content violations, Facebook employs machine learning to catch misinformation or malicious content before it makes it across the desk of a human moderator.

That is, until all the human moderators were gone.

When the pandemic forced Facebook to shutter its offices in March 2020, A.I. became the only line of defense against content violations. Essentially, the algorithm was left to do the work previously performed by nearly 15,000 contract content moderators.

The obvious happened almost immediately: the algorithm began removing legitimate posts. Even so, it's unlikely this genie will go back in the bottle.

No, Not That Science

It’s easy to forgive bots prone to false positives, but humans are prone to entirely different sets of biases.

In February 2021, the Wall Street Journal published an opinion piece by a Johns Hopkins University surgeon suggesting that the U.S. could achieve herd immunity from COVID-19 by April. Facebook appended the post with the label, “Missing Context. Independent fact-checkers say this information could mislead people.” So, the WSJ did some digging.

In its rebuttal to Facebook's decision, the WSJ editorial board exposed a larger problem with Facebook's third-party fact-checkers: the fact check amounted to the opinions of just three other scientists. The board took the opportunity to examine and address, in detail, each of the issues those scientists raised.

Ultimately, was the publication reckless for running the opinion piece? Were Facebook's fact-checkers too trigger-happy? Whichever side you land on, debate among experts in any field surfaces and scrutinizes gray areas, especially in the realm of forecasting. That is precisely the point. Conflicting opinions among experts are expected, but it is important to be mindful of the power wielded by the venue that hosts those debates.

Buy vs. Rent

Granted, the above is a rumination on worst-case scenarios.

Most of us share information and opinions in good faith across our social networks. But just as funding agencies require data management plans, communications management should be treated as a programmatic piece of a researcher's work.

A communications plan is essentially an extension of a data management plan and benefits from the same basic considerations: 1) What is generated? 2) How is it securely handled? 3) How is it maintained and accessed long-term? By that measure, the content we post and the conversations we have on social media are handled and controlled almost entirely by the platform.

“We need to be on social media” is the go-to response to calls for public scholarship, and despite their issues, some obvious and some opaque, these tools are uniquely powerful for enhancing visibility. Applying the adage never build on rented land, researchers can keep more control over their message by hosting their own website or blog and using social media to direct traffic there.

The Comments Section, AKA The Thunderdome

The unique peril of social media is exposure to public comment. Researchers have found that the tone of comments left on a posted article can, by itself, influence how new readers perceive the content of the article.