Putin's bot army - part two: Nashi's online campaign (and undesirable bots)

Part one of 'Putin's bot army' examined what bots are and how they have been used to simulate online interaction. In part two I explore in detail the email correspondence between Nashi activists to demonstrate how they conducted their pro-Putin campaign. Nashi were happy to pay people to simulate online engagement with pro-Putin content. However, it emerges from the correspondence that the greater technical expertise sat at the lower echelons of Nashi's organisational structure, resulting in a tension between superiors who wanted meticulous manual labour and employees who used automation techniques to take shortcuts when completing tasks.

Nashi’s use of bots
How is it that pro-Kremlin ‘dead-souls’ populated Russian social media and news outlets at the time of the 2011 and 2012 elections? Some insights come from a series of emails leaked in February 2012. The correspondence between Nashi press-secretary Kristina Potupchik, Nashi founder and then head of Rosmolodezh’ Vasilii Yakemenko, and other Nashi activists revealed the murky details of Nashi’s activity, much of which was online.
The authenticity of the correspondence has not been denied; when pressed Potupchik and Yakemenko have refused to comment on ‘illegal activity’. The emails are convincing, in part due to the sheer quantity of mundane correspondence: among the leaked emails are administrative and security messages from social networking sites, and Potupchik’s personal correspondence with (former) lover Roman Verbitskii. Furthermore, Potupchik’s mail includes draft material for her blog, and (unpublished) diary entries.

When the Nashi commissars speak of bots, they refer to the practice of one person controlling multiple online personalities, rather than an automated online presence. This may be illustrated by the use of paid commentators on online newspapers. In January 2011 Yakemenko called for the creation of a group of ‘people with balanced language, who write well, not idiots (debily), [who are] capable of maintaining a debate, of developing it. They will comment on our posts, on forums – basically slandering the opposition and praising Putin.... [Creating] the impression that the majority supports us’. It is very clear that Yakemenko wanted ‘live dead-souls’ who could make intelligent pro-Putin comments, not comments automatically generated by bots. By September 2011 a group of 10 human users controlled numerous virtual profiles; their salary was 20,000 to 30,000 roubles ($600-900) a month. The commentators were aided by bots that identified content to be commented on: an internal report boasted of the ‘creation of a commenting mechanism’ based on the presence of key-words. This ‘mechanism’ was likely a script that alerted commentators when new relevant articles were published, helping ensure that Nashi comments were among the first to be listed beneath articles. Nashi, then, did use bots, but mainly to aid human manipulators.
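
To make the idea concrete, below is a minimal sketch of the kind of keyword-based alert script such a ‘commenting mechanism’ may have amounted to: it polls a newspaper’s RSS feed and flags new items whose titles mention watched terms. The feed URL and keyword list are my own illustrative assumptions; the leaked report gives no implementation details.

```python
# A minimal sketch of a keyword-alert script of the kind described above.
# The feed URL and keywords are illustrative assumptions, not details taken
# from the leaked correspondence.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example-news-site.ru/rss"   # hypothetical newspaper feed
KEYWORDS = ["путин", "выборы"]                  # e.g. "Putin", "elections"
seen_links = set()

def poll_feed():
    """Fetch the RSS feed and return (title, link) pairs for items not seen before."""
    with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
        tree = ET.parse(resp)
    items = []
    for item in tree.iterfind(".//item"):
        title = (item.findtext("title") or "").strip()
        link = (item.findtext("link") or "").strip()
        if link and link not in seen_links:
            seen_links.add(link)
            items.append((title, link))
    return items

while True:
    for title, link in poll_feed():
        # Alert the human commentators as soon as a relevant article appears,
        # so that their comments are among the first listed beneath it.
        if any(kw in title.lower() for kw in KEYWORDS):
            print(f"New article to comment on: {title} -> {link}")
    time.sleep(60)
```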

In parallel to their attempt to sanitise Putin’s image online by dominating newspapers’ comments sections, Nashi – in a way reminiscent of cuckoos – sought to establish a platform from which they could disseminate their ideas. Again, there was little room for bots, as fake accounts could be used for spamming but never to promote the organisation’s real message: ‘on the internet the quality of links is much more important than their quantity. The positioning of information in online journals that have been seen to promote low-quality information ... immediately discredits this information’, noted Potupchik. Consequently, Potupchik organised the creation of a number of ‘friendly’ blogs to provide an alternative to paid blogs as a ‘launch-pad’ for pro-Putin stories. To this end, Nashi formulated instructions for how these blogs should go about gaining an audience: firstly, content should be funny. If possible, it should feature girls with ‘large breasts willing to pose in underwear’. Ideally, the content should have the potential to ‘go viral’.

Nashi’s preference for human manipulators may be explained by a strong focus on content creation. In March 2011 Potupchik feared losing 'the capital of trust ... we have earned over the last half year’. In January 2011 Nashi decided to move away from placing ready-made material in print media to paying journalists, editors, and bloggers to themselves write ‘shit (govno) about the opposition’. Simultaneously Nashi would seek out ‘creative’ individuals who could produce more attractive content. This is reflected in Nashi’s priorities later in 2011: in September and October there are three expenses (besides the Seliger camp) of one million roubles or more per month listed in the Nashi correspondence. Degtiarev, a director of RuTube and creator of viral video clips, was paid to create professional pro-Kremlin content. Tabak, a Nashi commissar who gained notoriety for designing the pro-Putin calendar featuring lightly-clad female students at Moscow State University, was responsible for staging public campaigns, the most notable of which was called ‘Medvedev Girls’. In this campaign female fans of Medvedev’s anti-alcoholism policies stripped at a rate proportionate to that at which Muscovites poured their beer into a bucket. The purpose of Tabak’s campaigns was to stage activities newspapers would choose to report on; consequently the success of his projects was measured by the number of articles published.

The third major expense, ‘Kambulov’, may refer to the Russian SEO expert Andrei Kambulov. Unfortunately there are very few emails from Kambulov in the dataset, but a number of forwarded messages allow us a glimpse into his activities. While there is no record of the technical details, which were presumably discussed via instant messenger services, it is clear Kambulov was Nashi’s primary provider of automated services (or: he made the bots). He was certainly responsible for establishing or acquiring sets of user accounts for Twitter, Facebook, VKontakte, and LiveJournal. Regarding LiveJournal, he at one stage reported that accounts were being set up at a rate of 300-500 per day, and that 16,000 accounts would be ready by the end of November 2011. The fact that the bots were not immediately available suggests Kambulov created the accounts himself rather than purchasing a ready-made package. The wide range (300-500) suggests there was variation in the rate at which accounts could be created, possibly due to limitations in the number of proxy servers, or the success rate for passing CAPTCHA tests. Further, it is clear he was charged with populating these accounts with friends, a back-story, etc., and, in the case of Twitter, setting them up to automatically re-tweet content posted by a list of Nashi commissars. His Twitter bots combined chatterbot and macro technology, allowing them to automatically comment on tweets mentioning relevant keywords or hashtags. Primarily Nashi needed bots to rig virtual elections and polls, but Kambulov also speaks of writing ‘software’ for Facebook that would ‘place likes, answer comments, post to pages’. It was to Kambulov that Nashi turned in October 2011 after LiveJournal began banning bots; Kambulov was able to use scripts to create more convincing accounts. This was done using content made by ‘rewriters’, in this case four women in St Petersburg who reworked mundane blog-posts to appear unique (to anti-bot technology searching for duplicated posts) at a rate of 250 posts per person per day. The rewriters thus provided cover: Nashi accounts could comment on political posts, while anyone checking for bots would be led to a seemingly genuine (and boring) LiveJournal page.
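
To illustrate what the rewriters were working around, below is a minimal sketch of the sort of duplicate-content check an anti-bot system might run: posts are broken into word n-gram ‘shingles’ and compared by Jaccard similarity, so verbatim copies are flagged while a reworked version of the same mundane text passes. The shingle size and threshold are illustrative assumptions, not details from LiveJournal or from the correspondence.

```python
# A minimal sketch of a duplicate-post check of the kind the rewriters were
# evading. Shingle size and threshold are illustrative assumptions.
def shingles(text: str, n: int = 3) -> set:
    """Split a post into overlapping word n-grams."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_duplicated(post: str, known_posts: list, threshold: float = 0.6) -> bool:
    """Flag a post that is near-identical to one already seen."""
    return any(jaccard(shingles(post), shingles(seen)) >= threshold
               for seen in known_posts)

# A verbatim copy is flagged; a rewritten version of the same mundane
# content slips through.
original = "Today I made soup and walked the dog in the park"
copy     = "Today I made soup and walked the dog in the park"
rewrite  = "Made some soup this morning, then took the dog out to the park"
print(looks_duplicated(copy, [original]))     # True
print(looks_duplicated(rewrite, [original]))  # False
```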

Kambulov’s services were sought in order to achieve more effectively the manual tasks Nashi were already attempting, in particular making Nashi content appear more popular than it was. These tasks are all quite simple, requiring no human-like behaviour. Throughout the dataset there are countless examples of Nashi manually rigging online polls, ‘liking’ YouTube content, etc. One Nashi report lists YouTube clips about Putin in November 2011, detailing that of the top ten search results for ‘Putin’ four clips were positive, three negative, and three neutral. This had been achieved by positively rating ‘our clips’ in order to ‘be ranked high among Putin queries’. In March 2011 such activity occurred on a much smaller scale, because it was manual. To give but one example (but see also this), a Novaia Gazeta virtual election poll was targeted: ‘of our candidates only Vladimir Putin and Kristina Potupchik are on the list. Let’s support them, not letting the opposition occupy the internet ... you can support our candidates multiple times by using different browsers’.

The call to ‘use different browsers’ refers to the need to disguise one’s IP address with a proxy. This is the only automation method regularly used by the higher Nashi echelons. In fact, the leaked correspondence reveals a striking lack of technical expertise within the Nashi hierarchy. Potupchik herself appears well aware of the basic types of automation available, but did not seem to believe full-scale automation could be made credible. This is reflected in the list of individuals in Nashi’s internet projects submitted to Medvedev’s administration for a virtual meeting with the president in October 2011: all representatives were involved in content creation or dissemination, and none in developing hard-to-detect automation techniques. Degtiarev presided over a team of at least 14, there were at least 9 commentators on newspaper articles, Tabak brought 52 activists, there were 97 Nashi bloggers, and a further 83 persons were associated with Nashi projects such as Otkrytii Internet [1] and Stop Kham. Of these, all except possibly the Otkrytii Internet members were paid by Nashi, meaning that even before the election more than 150 individuals were paid to promote Putin online. After the election this grew even further, as those involved in the Otkrytii Internet project were by February 2012 promoted to coordinate Twitter activity through Kambulov’s bots.

A problem with paying others to fulfil tasks is that they may prefer to use tools other than those intended. The correspondence makes it clear that the greater technical expertise existed at the lowest levels of the Nashi organisational structure. The few examples of bot activity mentioned were all solicited by middle-ranking activists seeking to use bots to simplify their tasks. For instance Nikolai Sidorkin, involved in the Otkrytii Internet project, contacted a certain Kokoulin to help him advertise the project on VKontakte. This correspondence lists the cost of working around CAPTCHA technology ($1 per 1,000 CAPTCHAs), the cost of accounts, as well as their operation. Another example comes from the Nashi-led ‘Stop Kham’ project, in which youth were encouraged to film their interventions against dangerous driving and post the videos on YouTube. At one stage six participants were each sent a list of (since deleted) YouTube accounts and asked to ‘get to work’. The accounts take the format username:password:secret question, where both the password and the secret question consist of randomly generated characters. There is no mention of macro technology, but given that the format of the files precisely matches that described above, I consider it very likely they were used both to create the accounts and to promote the videos.
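
Files in that layout are trivial to consume programmatically, which is consistent with their use by automated tools rather than by hand; a minimal parsing sketch follows. The sample entry is invented for illustration.

```python
# A minimal sketch of how account files in the format described above could
# be read. The sample line is invented; real files simply listed one account
# per line as username:password:secret question.
from typing import NamedTuple

class Account(NamedTuple):
    username: str
    password: str
    secret_question: str

def parse_account_line(line: str) -> Account:
    # Split on the first two colons only, in case the secret question
    # itself contains a colon.
    username, password, secret = line.strip().split(":", 2)
    return Account(username, password, secret)

sample = "ivan_petrov82:x7Gq9ZkL:qwhf83jdna"   # hypothetical entry
print(parse_account_line(sample))
```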

I included the examples above to demonstrate that many operators had a set of tools they knew to be more effective and sophisticated than those provided by Nashi. Automation, when implemented, tended to be used by individuals not to promote Nashi, but to defraud their superiors. A portion of the leaked correspondence, relating to a campaign in Yekaterinburg, includes exchanges between two lower-level actors hoping to con the self-styled internet hit-man Platon Mamatov by ‘acting as if we really have 10 people working for us’. Mamatov, who had been hired by a third party to coordinate the campaign, later went on to conduct similar online interventions elsewhere. Mamatov himself used a ‘chatterbot’ during an acrimonious LiveJournal dispute, demonstrating he had an advanced set of scripts at his disposal. A second clear example comes from January 2012. Nashi commissar Artem Lazarev was responsible for disseminating the content created by Tabak and Degtiarev on social media. One of his group leaders, Akhmed Akhmedov, who was listed as part of Tabak’s team and previously involved in the Medvedev Girls campaign, was responsible for placing content on VKontakte. In his early reports Akhmedov claimed to have placed content on the walls of thousands of VKontakte groups. His use of bots was sloppy: many different accounts were used, but some were used in unreasonably rapid succession. This suggests a script without login automation, working through a prepared list of group URLs and links to be posted. Lazarev was unimpressed by this work: ‘in the special groups’ reports there are large quantities of inaccuracies, inflated numbers [nakrutki] and possible falsifications’. He also noted that a number of videos had more ‘likes’ than views. It is unclear, however, whether Lazarev informed his superiors of this bot usage – it may also have been in his interest to show inflated numbers.
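
The give-away Lazarev could have spotted is easy to formalise: consecutive posts from different accounts appearing within seconds of one another are very unlikely to reflect someone logging in and out by hand. Below is a minimal sketch of such a check, using invented timestamps and an assumed 30-second threshold; part three looks at detecting Nashi’s bots in more detail.

```python
# A minimal sketch of flagging the pattern described above: posts from
# different accounts in unreasonably rapid succession. The sample data and
# the 30-second threshold are illustrative assumptions.
from datetime import datetime

# (timestamp, account) pairs as they might be scraped from group walls
posts = [
    (datetime(2012, 1, 15, 12, 0, 1), "acct_01"),
    (datetime(2012, 1, 15, 12, 0, 9), "acct_02"),
    (datetime(2012, 1, 15, 12, 0, 16), "acct_03"),
    (datetime(2012, 1, 15, 14, 30, 0), "acct_04"),
]

def flag_rapid_switches(posts, max_gap_seconds=30):
    """Return consecutive posts made by different accounts within max_gap_seconds."""
    suspicious = []
    ordered = sorted(posts)
    for (t1, a1), (t2, a2) in zip(ordered, ordered[1:]):
        if a1 != a2 and (t2 - t1).total_seconds() <= max_gap_seconds:
            suspicious.append((a1, a2, (t2 - t1).total_seconds()))
    return suspicious

for a1, a2, gap in flag_rapid_switches(posts):
    print(f"{a1} -> {a2} within {gap:.0f}s: unlikely to be manual")
```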

Part three investigates how easy it is to track down Nashi's bots using freely available software.

____________________________________
[1] The Otkrytii Internet project is interesting. It was clearly set up by Nashi towards the end of 2011. In January 2012 the project claimed to be a new ‘Digital party’. On their LiveJournal feed Otkrytii Internet claim to have no relation to United Russia or Nashi. For whatever reason the blog has been dormant since May 2012.
