The Bigger Battle to Defend Democracy Online


The recent focus on Russia-linked hacking and information operations aimed at the US presidential election has overshadowed another related and important story: governments around the globe are increasingly using these same new digital tactics domestically, often to great effect.

For the big technology companies to truly champion the “don’t be evil” values they strive to embody, it is vital that they also address the manipulation of their platforms by nondemocratic actors aiming to manage public opinion and repress political opposition in their own countries. The companies can best do this by listening to targeted activists, independent journalists, and other in-region experts who understand how platforms are being used (and abused) in different countries, and then harnessing their tremendous internal technical capacities and creativity to implement solutions.

A 2017 paper by Oxford Internet Institute researchers concluded that cyber troops are now a “pervasive and global phenomenon,” citing organized social media manipulation in at least 28 countries. Among these countries, they found that every authoritarian regime has run campaigns targeting its own population, while only a few have also targeted foreign publics.

Freedom House researchers, in their 2017 Freedom on the Net report, identified 30 countries where “governments are employing armies of ‘opinion shapers’ to spread government views, drive particular agendas, and counter government critics on social media.” For example, members of a special unit within the Sudanese state security service created fake accounts on Facebook and WhatsApp in order to promote government views and denounce critics within popular groups. In Mexico, an estimated 75,000 automated Twitter accounts known as “Peñabots” have worked to drown out criticism of President Enrique Peña Nieto by flooding popular antigovernment hashtags with spam and by artificially promoting alternative hashtags ahead of trending antigovernment ones on Twitter’s top-10 list.

In most cases, the private companies that control today’s digital space can take meaningful steps to responsibly maintain their platforms as enablers of free expression and civic organization. When social media-fueled protests erupted in Syria in early 2011, bots run by a Bahraini company sprang to life on Twitter to flood the #Syria hashtag with pro-regime messages as well as apolitical spam. In response to numerous complaints, Twitter simply restricted the spam to these accounts’ followers, effectively removing it from the feeds of the many citizens sharing news and organizing around the #Syria hashtag. However, despite the recent attention to automated and fake accounts working in an organized manner to manipulate public debate and opinion, this sprawling problem is still far from solved. Part of the answer is an increased commitment to improving detection systems and to actively removing accounts, despite the effect this may have on a company’s all-important user metrics.
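The shape of such a detection system is easy to illustrate, even though production systems are far more sophisticated. The sketch below is a minimal Python illustration, with invented field names and arbitrary thresholds, of one crude behavioral signal: accounts that post a single hashtag at high volume with mostly duplicated text, as in the #Syria flooding described above. It is not how any platform actually detects coordinated activity.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str   # hypothetical fields; real platform data differs
    hashtag: str
    text: str
    timestamp: float  # seconds since epoch

def flag_flooding_accounts(posts, window=3600.0, min_posts=30, max_unique_ratio=0.2):
    """Flag accounts that post one hashtag at high volume with mostly
    duplicated text inside a single time window -- one crude signal of
    coordinated hashtag flooding, not a complete detector."""
    buckets = defaultdict(list)  # (account, hashtag, window index) -> texts
    for p in posts:
        buckets[(p.account_id, p.hashtag, int(p.timestamp // window))].append(p.text)

    flagged = set()
    for (account, _hashtag, _win), texts in buckets.items():
        unique_ratio = len(set(texts)) / len(texts)
        if len(texts) >= min_posts and unique_ratio <= max_unique_ratio:
            flagged.add(account)
    return flagged
```

Real systems would combine many such signals (account age, posting cadence, follower networks, content similarity across accounts) with human review; the point is only that flooding behavior leaves measurable traces that platforms can act on.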

Bad actors are also constantly evolving their techniques. A private firm in Poland told Oxford researcher Robert Gorwa that it has created more than 40,000 unique online identities with accounts across social media platforms. Clients can hire the firm to target Polish opinion leaders with messages in order to influence their understanding of the public’s position on key issues. Because such sophisticated tactics are difficult to detect, companies would be wise to regularly offer independent journalists and researchers meaningful opportunities to raise concerns about how platforms are being used in their regions. These observers are often aware of such activity and more than willing to flag it when given the chance.

In Belarus, the Facebook account of an opposition leader organizing a “Freedom Day” demonstration last March was hacked and used to send out fake messages discouraging people from attending. Google’s recently launched Advanced Protection Program, which offers several extra security protections for especially at-risk users such as journalists and activists, is one example of how companies can help defend their services from politically motivated exploitation.

Returning to the case of Russia, investigative journalists with RBC recently published new details on how the “USA desk” of the now-infamous Internet Research Agency swelled threefold in the summer of 2016, at the height of its activities around the US presidential election, reaching 80 to 90 salaried employees. Even so, this represented only about one-tenth of the agency’s entire staff. The troll factory’s primary work has been posting comments under Russian news articles and across the Russian-language space of popular social media networks to support Kremlin policies and undermine the Kremlin’s critics. For example, after Russian opposition leader Boris Nemtsov was shot in Moscow in February 2015, employees were ordered to leave comments on news stories suggesting that the opposition itself had arranged the murder.

Pro-government forces in Russia, of which the Internet Research Agency is just one part, have launched a wide-ranging digital assault in recent years to undermine political expression and debate online at home, including by manipulating social platforms. In one example, the accounts of various popular Russian journalists on YouTube and Facebook have been repeatedly suspended after community violations were reported en masse to trick these platforms’ partially automated content moderation systems. Companies could address this issue by hiring more staff with relevant language skills and political and cultural knowledge to oversee automated moderation systems or by using a “whitelisting” strategy, in which users likely to be targeted for political reasons can apply to have their accounts marked for manual review before suspension.
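The proposed “whitelisting” flow is simple to express. The following Python sketch, with hypothetical account identifiers, an arbitrary report threshold, and placeholder enforcement hooks, shows the core decision: when mass reports cross a threshold, accounts on a pre-approved list of at-risk users are routed to a human reviewer instead of being suspended automatically. It illustrates the idea, not any platform’s actual moderation logic.

```python
from dataclasses import dataclass

@dataclass
class Report:
    account_id: str
    reporter_id: str

# Hypothetical list of at-risk accounts (journalists, activists) approved
# for manual review before any automated suspension takes effect.
PROTECTED_ACCOUNTS = {"journalist_ru_123", "ngo_monitor_456"}

REPORT_THRESHOLD = 50  # arbitrary number of reports before automated action

def handle_mass_reports(account_id, reports, suspend, queue_for_human_review):
    """Decide what happens when an account attracts a burst of abuse reports.
    `suspend` and `queue_for_human_review` stand in for whatever enforcement
    and review hooks a real platform exposes."""
    if len(reports) < REPORT_THRESHOLD:
        return "no_action"          # below threshold: nothing happens
    if account_id in PROTECTED_ACCOUNTS:
        # Whitelisted accounts are never auto-suspended on report volume
        # alone; a reviewer with the relevant language and political
        # context looks at the case first.
        queue_for_human_review(account_id, reports)
        return "queued_for_review"
    suspend(account_id)
    return "suspended"
```

The design choice is the important part: report volume alone stops being a sufficient condition for suspension, which removes the incentive for mass-reporting campaigns against accounts on the list.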

While it is difficult to measure the ultimate political impact of online manipulation campaigns, many domestic operations in Russia are almost certainly more effective, and more harmful to democracy there, than the clumsy Russia-linked efforts aimed at US voters, such as the YouTube video bloggers who claimed to be from Atlanta and ranted against Hillary Clinton, yet spoke in thick African accents and referred to LeBron James as the best “basket” player of the year. In general, it makes sense that trolls would operate more effectively on their home turf. This of course means that many of the most successful efforts by nondemocratic actors to manage public opinion and repress political opposition through digital platforms are taking place in far-flung corners of the world with which US companies are not as familiar.

The global reach and massive user base of key technology companies such as Facebook, Google, and Twitter can certainly make monitoring and countering abuse and exploitation of their online platforms and services a challenging and complicated task. With great power comes great responsibility, however, and now that the big tech companies have found themselves masters of the world’s new public square, it is vital that they continually work to address anti-democratic manipulation of their platforms everywhere, not just in the United States.

Source: Open Democracy
