Putin’s bot army - part one: a bit about bots

In this first of a three-part series on how the pro-Kremlin youth group Nashi simulated popular support for Putin around the 2011 and 2012 elections, I briefly introduce what bots are and how they are used in a political context.

After the December 2011 Duma elections the Russian blogosphere saw unprecedented political activity that was widely hailed as marking the rebirth of Russian civil society, and online activists did indeed engage more with this set of protests than ever before [1]. Much of the activity, however, was in fact generated by a large number of ‘bots’. A series of emails leaked by the Russian branch of the Anonymous group revealed that the pro-Kremlin youth organization Nashi engaged in extensive online propaganda. In a recent edition of the Memory at War newsletter Julie Fedor and Galina Nikiporets-Takigawa asked: ‘how can we detect the Russian Internet’s digital “dead souls”?’ Bot technology is now sufficiently advanced for this question to be turned around: how is it that we can detect the ‘dead souls’ at all? I argue that we can locate them because Nashi focused on content creation and dissemination, while the technical details were outsourced to less ideologically motivated outsiders. These individuals were driven by personal gain rather than by any commitment to convincingly simulating mass online support for Putin.
Although Putin was re-elected, Nashi has since 2012 been sidelined, presumably at least in part due to the botched campaign. The new ‘All-Russia People’s Front’ may replace either Nashi or United Russia. None of the high-ranking Nashi figures exposed by the Anonymous leaks features prominently in the new organisation, although former Nashi leader Vasilii Iakemenko may be in the process of founding a new organisation, ‘sincerely yours’, intended to represent the ‘creative class’.

There are four principal options for completing repetitive tasks on a computer:
1) do it manually
2) buy software to simplify the task
3) write your own software to simplify the task
4) forget simplifying the task, and pay someone else to do it

Presumably the Kremlin would have liked to rely on option 1 – for people who supported the regime to express their opinions online, or for pro-Kremlin youth groups, such as Nashi, to control the online space. The email leaks make it clear that Putin lacked such voluntary support,[2] and that in the online campaign options 2, 3, and 4 were used to varying degrees. The options selected by Nashi will be discussed in part two.

A bit about bots

The term ‘bot’, derived from ‘robot’, refers to software applications or scripts designed to perform simple mechanical tasks online – tasks that may be valuable if performed in large quantities. Bots are virtually omnipresent online: Facebook and Google use them to personalize advertising, while the English-language Wikipedia currently runs 1618 bots to check citations, detect vandalism and personal attacks, and so on. Bot activity may also be tailored to exploit security weaknesses, either to generate revenue or to perform attacks by overloading a system’s resources. In recent years there has been an increase in politically motivated bot usage. During protests in Syria spambots were used to flood popular Twitter hashtags. Commenting on similar anti-opposition efforts in Tibet, the security blogger Brian Krebs observed that ‘Twitter bots are being used as a tool to suppress political dissent’, as a ‘flood of meaningless tweets’ is directed at protest hashtags. Perhaps the most widely known form of malicious activity is the so-called DDoS (Distributed Denial of Service) attack. This works by having a large number of computers send requests to a target server, overloading it and making it inaccessible to normal users. Such attacks can be coordinated either by a group of individuals or by a botnet – a network of computers infected with malware that allows internet criminals to control them remotely. The botnet controller, often called a ‘botmaster’, may use the network to unleash DDoS attacks.
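To make the idea of a ‘simple mechanical task’ concrete, below is a minimal sketch of the benign end of the spectrum: a Wikipedia-style citation-checking bot that tests whether cited links still resolve. This is an illustration only – the URL list is a placeholder, and real Wikipedia bots work through the MediaWiki API rather than a hard-coded list.

```python
import requests

# Placeholder list of cited URLs; a real bot would pull these
# from article markup via the MediaWiki API.
CITED_URLS = [
    "https://example.com/source-1",
    "https://example.com/source-2",
]

def check_citations(urls):
    """Return the URLs that fail to load (candidate dead links)."""
    dead = []
    for url in urls:
        try:
            resp = requests.head(url, timeout=10, allow_redirects=True)
            if resp.status_code >= 400:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
    return dead

if __name__ == "__main__":
    for url in check_citations(CITED_URLS):
        print("Possible dead link:", url)
```

The task is trivial for one link; the value, as with all bots, comes from running it over millions of links without human effort.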

Russia has become a hub for both spammers and botmasters. In 2012 the botmaster Georgii Avanesov was jailed for four years: his Bredolab botnet consisted of 30 million hijacked computers and sent out 3 billion spam mails a day. In September 2010 analysts noted that a third of spam messages originated from .ru domains, another study listed 7 of the world’s top 10 spammers as Russian, and in July 2012, when the ‘Grum’ botnet was taken down, world-wide spam levels declined by 50%. As of 2012, 36% of cyber-crimes were committed by ‘Russian-speaking hackers’, and the Russian sector of the ‘cyber-crime market’ earned $4.5 billion in 2011. Studies have shown that the Russian internet is expanding rapidly, that Russians spend more time on social networking sites than any other nation, and that Russia has the fastest growing number of Facebook and Twitter users – but this must be offset against a whopping 19.2% of Russian social media accounts and blogs being automated.

Buy software (or: bots as a service)

In Russia, on the eve of the December 2011 Duma elections, attacks were directed against political targets, primarily independent media and electoral monitoring sites. These included LiveJournal, Echo of Moscow, Novaia Gazeta, New Times, and Golos.org (Roberts and Etling). These attacks probably originated from two botmasters; who hired their services is anyone’s guess.

While a botnet might be hired to conduct attacks, there is another trend whereby a series of advanced, fully-automated scripts are made available as a service. So-called Search Engine Optimisation (SEO) companies aim to promote a company, site, or blog online. They target social media sites such as Twitter, and aim to game Google’s page-ranking system to raise their clients’ profiles. The services widely offered include ‘Twitter Management Services – Followers, Multiple Accounts, Posting & More’, along with ‘Facebook Management: Friends, Fans, Likes & More’. It is services such as these that have led to a massive upsurge in fake social networking accounts. It is easy to see how these services have political applications, be it inflating the number of ‘likes’ for campaign content, increasing the membership of a candidate’s group or fan page, or having users ‘attend’ a political event.

Write software (or: automated bots)

Bots come in varying levels of sophistication, from basic macros up to artificial intelligence in computer games: 2012 saw the first bots that succeeded in emulating human behaviour to the degree of being indistinguishable from human gamers. Virtually anyone can run simple automation online by using a script to perform a series of predesignated actions; in my own research I have saved hours of tedious work by using scripts for data collection. The simplest way to do this is with a macro, a form of script operated through an internet browser. The user completes the task once, the macro records the sequence, and it can then be set to repeat that sequence an indefinite number of times. iMacros, a browser add-on boasting millions of users, is specifically marketed as able to automatically fill in forms (such as account registration) and user details. Typically macros work by running through loops – a fundamental concept in computer programming – where one or more variables are changed each time the script is run, producing a similar but distinct result. Loops can be made very powerful by feeding them data organized according to a clear pattern: passwords, usernames, and content to be posted could, for instance, be stored in separate columns of an Excel sheet or a .csv file, then automatically entered into online forms, as the sketch below illustrates. Hence macros can be equipped with data to automatically register accounts online or post messages to forums. Such scripts are widely available on the Russian-language internet, including guides to iMacros with .csv extraction, and scripts for adding friends and posting spam on VKontakte. These scripts can often simply be copied and pasted, meaning anyone can access these methods.
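Here is a minimal sketch of that loop-plus-data pattern, written in Python rather than as a browser macro: each row of a .csv file supplies the variables for one iteration, and each pass through the loop produces a similar but distinct form submission. The form URL, filename, and column names are all invented for the example.

```python
import csv
import requests

# Hypothetical form endpoint -- a placeholder for illustration only.
FORM_URL = "https://example.com/post-message"

# Each CSV row drives one loop iteration: same action, different data.
with open("accounts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # expected columns: username, password, message
        response = requests.post(FORM_URL, data={
            "username": row["username"],
            "password": row["password"],
            "message": row["message"],
        }, timeout=10)
        print(row["username"], response.status_code)
```

Ten rows or ten thousand rows make no difference to the script, which is precisely what makes the technique attractive for mass account registration and posting.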

Sophisticated bots can interact with page content, e.g. by reacting to keywords in news feeds. Thus a bot on a social networking site might be programmed to automatically post a comment on any post mentioning ‘Putin’ or ‘Medvedev’. A more sophisticated version of this is the chatterbot. Some services, such as the Ask Jeeves search engine or Microsoft Word’s paperclip assistant, used chatterbot technology to create a friendly user interface. A chatterbot works by matching input against a list of keywords to guess which of a list of canned responses is most appropriate. Chatterbots are integral to some types of online scams. Robert Epstein documented how a chatterbot posing as a Russian girl on a dating site fooled him for a number of months. He became suspicious when the bot failed to respond to direct questions, and finally exposed it when it replied to a nonsense email with ‘a long letter about her mother’ – but the episode illustrates how sophisticated these bots can be.
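The keyword-matching core of such a chatterbot fits in a few lines. Below is a minimal sketch in Python; the keywords and canned replies are invented, and a production bot would add conversational state, spelling tolerance, and far more responses.

```python
# Keyword -> canned reply. All entries are invented examples.
RESPONSES = {
    "mother": "Tell me more about your family.",
    "putin": "Politics! What do you think of the elections?",
    "hello": "Hi there! How are you today?",
}
DEFAULT = "That is interesting. Go on."

def reply(message: str) -> str:
    """Return the canned response for the first keyword found in the message."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT  # no keyword matched -- the failure mode Epstein noticed

print(reply("Hello from Moscow!"))  # -> "Hi there! How are you today?"
```

Note the default branch: when no keyword matches, the bot falls back on a generic filler, which is exactly why direct questions and nonsense emails expose it.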

Measures implemented to curb bot activity include banning IP addresses associated with spammers, allowing only a certain number of posts or information requests per hour, requiring a ‘CAPTCHA’ – usually a challenge requiring the user to type visually distorted words – to be completed before accessing content, and so on. However, most scripts can be adapted to work around these measures: there are numerous walkthroughs of how to use a script to cycle through proxy servers, ensuring each post is traced to a different IP address (see the sketch below). Likewise, CAPTCHAs can frequently be solved using OCR (Optical Character Recognition) software. For instance, in 2008 Google’s, Microsoft’s, and Yahoo’s CAPTCHAs were cracked in quick succession; the ‘researcher’ who cracked Yahoo’s CAPTCHA explained that ‘an accuracy of 15% is enough when the attacker is able to run 100,000 tries per day.’ Again, there are numerous guides to CAPTCHA circumvention on Russian-language sites.
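Proxy cycling, too, amounts to only a few lines of code. The sketch below shows the general idea in Python: successive requests are routed through the next proxy in a list, so each one appears to originate from a different IP address. The proxy addresses are placeholders drawn from a documentation-only range.

```python
import itertools
import requests

# Placeholder proxies (203.0.113.0/24 is reserved for documentation).
PROXIES = [
    "http://203.0.113.1:8080",
    "http://203.0.113.2:8080",
    "http://203.0.113.3:8080",
]

# cycle() loops over the list endlessly, one proxy per request.
proxy_cycle = itertools.cycle(PROXIES)

def fetch_via_next_proxy(url: str) -> int:
    """Fetch a URL through the next proxy in rotation; return the status code."""
    proxy = next(proxy_cycle)
    resp = requests.get(url, proxies={"http": proxy, "https": proxy},
                        timeout=10)
    return resp.status_code
```

Combined with the CSV-driven loop above, this is enough to defeat naive per-IP rate limits, which is why defenders have had to move on to behavioural detection.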

Forget simplifying the task (or: pay for manual manipulation)

CAPTCHA entry is one example where fully-automated and labour-intensive human methods co-exist. A market in ‘data entry’, mainly run out of India, Bangladesh, and China, caters to those unwilling or unable to use scripts; as of 2011 the going rate for 1000 CAPTCHAs was between $0.80 and $1.20. Since 2008 scholars have monitored review spam on sites such as Amazon.com, showing fake reviews to be common (Jindal and Liu). Groups of reviewers cooperated to promote products, and these reviews proved hard to identify through analysis of linguistic patterns or by identifying IP addresses (Mukherjee et al.). A business in fake reviews has also emerged in Russia, as a visit to Freelancer.ru will confirm: there, a constant stream of ads seeks people to perform Russian-language ‘data entry’ online. These individuals undertake ‘crowdturfing’ campaigns, in which ‘a significant number of users obtain financial compensation in exchange for performing “tasks” that go against accepted user policies’. The task undertaken might be to ‘post a single (fake) positive restaurant review online’, with workers submitting evidence of their work, e.g. ‘screenshots of or URLs pointing to fake reviews’ (Wang, p. 2). Similarly, scholars have identified a trend towards manipulating YouTube video ratings and view counts (Benevenuto), online ballot stuffing, and account registrations, all to give the semblance of popularity to ‘fledgling websites and online games’ (Wang, pp. 1-4). It did not take long for political technologists to adapt these methods to their needs, as evidenced by this ad from freelancer.ru, seeking ‘specialists in creating PR campaigns in social networks. PR campaigns directed at political topics in one city.’

Part two will pick up the narrative by exploring how the email correspondence leaked by the Russian branch of Anonymous reveals the details of the Nashi-led pro-Putin campaign online.

Part three will present an approach to detecting bots.




[1] e.g. Fredheim, R., ‘Quantifying Polarisation in Coverage of the Russian Protests’ (forthcoming).
[2] At one stage Potupchik expressed fears the Kremlin might become ‘totally [okonchatel’no] marginalised on the internet’. Potupchik, K. (30.03.2011). 'Fwd: spravka o klinicheskikh idiotakh'. slivmail.com.

Benevenuto, F., Rodrigues, T., et al. (2009). 'Detecting spammers and content promoters in online video social networks', 620-627.
Jindal, N. and Liu, B. (2008). 'Opinion spam and analysis', 219-230.
Mukherjee, A., Liu, B., et al. (2012). 'Spotting Fake Reviewer Groups in Consumer Reviews'. Proceedings of the International World Wide Web Conference.
Roberts, H. and Etling, B. (08.12.2011). 'Coordinated DDoS Attack During Russian Duma Elections'. Internet & Democracy Blog.
Wang, G., Wilson, C., et al. (2011). 'Serf and Turf: Crowdturfing for Fun and Profit'. Proceedings of the International World Wide Web Conference.
