This is said to have been Mark Zuckerberg’s response during a security briefing in which he was told that Russian operatives had successfully infiltrated Facebook during the 2016 US Presidential election.
When the meeting was held in December 2016, no one else knew.
Not even the US Government.
How on Earth did they do it?
But how exactly does Russia use social media? Its methods vary considerably between direct and indirect use, but the overarching objective of Russian election meddling is the same: to sow seeds of chaos in the target country.
In the US in 2016, this was done by perpetuating the idea among Americans that the system was rigged against them and that threats to the institution of democracy lay all around them.
On Facebook, innumerable adverts were bought to spread disinformation and propaganda. In the wake of the election, Facebook estimated that over 126 million Americans had been exposed to Russian propaganda [1]. Twitter also deleted over 3,000 accounts suspected of being bots, the extensive use of which was a defining element of Russian interference. These bots were programmed to like and share disinformation posts and to follow the accounts spreading them. In a climate where voters are divided, it is easier to spread disinformation in pursuit of a specific outcome; in Russia’s case, the desired outcome was to boost the campaign of Donald Trump. As such, pro-Trump bots were far more prolific than pro-Clinton bots, posting four times as many tweets on average and around seven times as many during the final debate [2].
Indirect use of social media was built on persistent and opportunistic hacking. The turning point came when Russian hackers sent John Podesta, chair of the Clinton campaign, a phishing email disguised as a Google security notification requesting that he change his password. He complied, and the resulting breach saw thousands of emails passed to WikiLeaks; eventually, these would be shared extensively on social media by bots and Trump supporters alike. When combined with disinformation, the emails could be portrayed as ‘proof’ of claims that in reality had little or no legitimacy. We all remember how ardent Trump allies would constantly refer to “Clinton’s emails” as they religiously chanted “lock her up”. In this sense, the emails also served as a club with which Trump and his supporters could endlessly beat Hillary Clinton around the head, weakening her campaign, or at least being perceived to do so.
Russian interference across the Globe
However, Russia couldn’t just interfere in the US election straightaway. It had to build up to that. The Centre for Strategic & International Studies (CSIS) noted how Russia first began interfering in the politics of its neighbours and former components of the USSR. Georgia, Hungary and Ukraine have all been subject to malign Russian influence since the late 2000s, and perhaps since the 1990s in Ukraine’s case. CSIS argues that Ukraine has served as a testing ground for what Russia has done and continues to do the world over, reusing techniques where it can.
Although interference was prominent in Polish, German and French politics, one of the first times social media was utilised for meddling was in the UK during the 2014 Scottish Independence Referendum. A 2020 report by the Intelligence and Security Committee of the UK Parliament concluded that Russian influence in British domestic politics was commonplace, yet the Government did nothing to combat it. The report suggested that Russian bots were used to spread baseless claims that the referendum was skewed in favour of the pro-union side [3]. It also suggested that the Government turned a blind eye to evidence of Russian meddling during the 2016 EU Referendum.
What can countries do to protect against election interference?
The reality is that the problem isn’t going to get better. It’s only going to get worse as hostile states’ abilities to infiltrate democratic systems become ever more sophisticated.
So what is there to do? For this, we can look at one of the most recent examples of attempted election meddling: the 2020 US Presidential election. Here, Facebook made efforts to ensure that what happened in the 2016 election didn’t happen again. By altering its algorithm, Facebook shielded more users from disinformation campaigns linked to Russia, Iran and China; however, because these measures were not continued after the election, disinformation was again allowed to stoke furore, contributing to the storming of the US Capitol in January 2021.
Pascal Podvin has suggested that for social media to be a more secure environment during elections, the onus must fall on social media companies. A more proactive approach, one that seeks out suspected bots, halts them at the login stage and deactivates their accounts more quickly, would greatly limit the flow of disinformation [4]. Unfortunately, this is easier said than done owing to the complexity of operating multiple rule-dependent search techniques in layers. Podvin therefore suggests inverting the approach: rather than de-platforming bots, companies should only give a platform to accounts that are demonstrably human, treating accounts as “guilty until proven innocent”. This, alongside deleting the accounts of bots that slip through the net, may provide immeasurable assistance in the fight for election security.
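To make the two approaches concrete, here is a minimal sketch in Python. The thresholds, account fields and rule layers are purely illustrative assumptions for this article, not any platform’s real criteria or Podvin’s actual system: the first function stacks rule-dependent checks of the kind a proactive bot hunt might use, and the second inverts the logic so that only accounts verified as human are given a platform at all.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float   # average posting rate
    follower_ratio: float  # followers divided by accounts followed
    account_age_days: int
    verified_human: bool   # e.g. passed a human-verification challenge at login

def looks_automated(acct: Account) -> bool:
    """Layered rule-dependent checks: any tripped rule flags the account.

    Each rule is one 'layer'; real systems would stack many more,
    which is why Podvin notes this approach is hard to run at scale.
    """
    rules = [
        lambda a: a.posts_per_day > 500,                        # inhuman posting volume
        lambda a: a.follower_ratio < 0.01,                      # mass-following behaviour
        lambda a: a.account_age_days < 1 and a.posts_per_day > 50,  # brand-new and hyperactive
    ]
    return any(rule(acct) for rule in rules)

def allow_platform(acct: Account) -> bool:
    """Podvin's inversion ('guilty until proven innocent'):
    only verified-human accounts that trip no bot rule get a platform."""
    return acct.verified_human and not looks_automated(acct)
```

Under this sketch a hyperactive, unverified account is refused a platform by default, while an established, verified account passes; the design choice is that the burden of proof sits with the account, not the moderator.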