Russia’s invasion of Ukraine represents another key step in the battle against online disinformation

Russia’s invasion of Ukraine marked a new inflection point for social media and the role it plays in the modern news ecosystem. Now, after years of debate over the influence of social platforms and how social media trends can spur real-world action, we’re seeing faster, more responsive approaches to potentially dangerous messaging, which has played a key role in limiting the spread of misinformation and curbing counter-narratives that could both erode support and undermine action.

Yet, at the same time, these same shifts underscore the importance of social platforms as propaganda tools, and how they can be – and have been – used to increasingly control narratives around political and cultural events.

Which raises the question – is it better to shut down social platforms entirely for certain regions, or does that just give government-controlled media more space to fill those gaps and dictate messaging as it sees fit?

Of course, none of the platforms themselves chose to be cut off – both Facebook and Twitter are currently either restricted or blocked entirely in Russia, due to their refusal to comply with Kremlin demands to stop restricting state-affiliated media. But even so, the lack of outside sources of information is likely to have a major impact on how Russian citizens view the action in Ukraine, with various reports suggesting that many Russians do indeed support Putin’s decisions, despite near-unanimous global condemnation.

But without outside input, it’s entirely possible that many Russians are simply unaware of the global response – or at least have a diminished sense of it. That’s the risk of Russia being cut off – the only information Russians now have comes largely through Kremlin-controlled channels, which can’t be good for broader understanding.

At the same time, the platforms themselves can’t do much about it. Their only alternative would be to comply and allow their apps to be used to spread misinformation. Which leads to the counter-concern – without being able to hear the other side, how do we know we’re getting the whole story? The removal of pro-Russian narratives means that the platforms effectively control the flow of information, and while the suppression in this case extends far beyond just social media, it reinforces that the news and information we see can be controlled, or at least influenced, by private organizations.

In most cases, this is in line with broader government sanctions, but it’s still something worth being wary of.

So where are we at now?

Currently, according to the latest reports:

  • Facebook and Twitter have been restricted or shut down in Russia, limiting the ability of Russian users to share their views beyond the country’s borders
  • Russian state media have been banned from YouTube and TikTok for users in Europe, while both platforms have also moved to suspend new uploads in Russia in response to the Kremlin’s new ‘fake news’ law. Reddit has also prohibited links to Russian state media sites
  • Ads from Russian-based organizations have been banned on Facebook, Twitter, YouTube, Google and Snapchat

As such, pro-Russian posts are now severely restricted on Western social media apps, and each platform continues to work to stop the spread of misinformation by tackling it before it can take hold.

The silver lining, as noted, is that we’re finally seeing this element taken more seriously, with platforms adopting more definitive and proactive stances on potentially harmful misinformation before it’s too late. Platforms dismissed trends like the rise of QAnon for too long, despite repeated warnings, preferring to let users exercise their right to free speech and explaining it away as mere chatter among niche groups. We now know where that can lead, and it’s a major positive to see an immediate response to misinformation in this case, which has played a significant role in limiting its impact.

That’s really what we need, and it reflects the key lesson from the last decade of evolving social media usage.

But again, this also depends on there being a definitive truth, and on the platforms themselves being willing to make that call quickly. In this case it’s clear, based on the global response, but it won’t always be that simple.

So, while social platforms deserve praise for their quick response in this case, and this does seem like an important moment in the battle against online disinformation campaigns, it may not be indicative of how future cases will play out, and it may not limit the next dangerous trend on the rise.

Unless a definitive battle plan is drawn up. What we need now is for the platforms to work together, as an industry, in partnership with third-party fact-checkers and other legal and/or academic groups, to consolidate a rapid decision-making process on potentially dangerous trends as soon as they’re detected, ensuring an ongoing, proactive response to limit them.

It feels like we’re at the next stage of the battle, given the current action we’re seeing, but that may not be the case, and it’s important that we recognize the value of the response we’re seeing right now and base future action on the same.

But that inevitably imposes a level of scrutiny over “free speech” online – and if we essentially only see posts that have been vetted by a network of fact-checkers, businesses and even government-aligned groups (in the case of regulation), is that much better than Russia or China channeling their messaging through state-affiliated partners?

It would seem to be, and you would assume that the power of democracy holds more strength in this regard, in allowing a healthy level of scrutiny. But it’s a question that will inevitably arise as we seek to incorporate the lessons learned to improve the flow of information online.
