*Introducing our new International Fellow:*

*By Gabriel Armas-Cardona*

Measuring the randomness of numbers is a well-developed
field and of vital importance for testing the validity of election data. Multiple
tests have been developed to check whether a data set is random, including the
last-digit test, and these tests can help determine whether election data has
been manipulated.

*The Last-Digit Test*

The last-digit test
involves looking at the final digit of each number and counting how
many 0s, 1s, 2s, …, and 9s there are. If the numbers are random, then each
digit should appear in the last position with equal frequency, 10%. If certain digits
appear much more or much less often than 10%, then that difference is
evidence that the numbers are not random and that they have been manipulated.
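As a sketch of the counting step, the test takes only a few lines of Python. The turnout figures below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical precinct turnout figures (illustrative only, not real data).
turnouts = [512, 640, 385, 1000, 497, 730, 255, 981, 604, 118]

# The last digit of a non-negative integer is the number modulo 10.
last_digits = [t % 10 for t in turnouts]
freq = Counter(last_digits)

# Share of each digit 0-9; for truly random counts, each share should
# approach 10% as the number of precincts grows.
shares = {d: freq.get(d, 0) / len(turnouts) for d in range(10)}
```

With only ten sample precincts the shares are lumpy, of course; the test only becomes meaningful over hundreds or thousands of counts.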

Applying the last-digit test to election data is a standard
method for determining whether the results have been manipulated. Special mention
must be made of Policy Forum Armenia's Special Report on the 2012 Parliamentary Election, in which Policy Forum
Armenia uses the last-digit test, among others, to demonstrate that the
official 2012 results were manipulated.

*Applying the Last-Digit Test to the 2013 Presidential Election Results*

Using the
last-digit test for the 2013 election involves examining the turnout results
from every precinct in the election (available at http://res.elections.am/images/doc/masnak18.02.13p_en.pdf). The turnout at each precinct is not a
random number; if a precinct has 1,000 eligible voters, one can expect a
turnout between 300 and 700. What is random is the last digit of the turnout, and Policy Forum Armenia's report lists the theoretical support for this test (see page 25). Looking at the last digit of the turnout at each precinct should produce an even distribution, with each
digit appearing 10% of the time. However, this does not hold entirely for small numbers because of Benford's Law. To compensate for this deviation, only turnout results of at least three digits, i.e. at least 100 voters, were analyzed. When the test is applied to the 2013 precinct data, we do not get an even
distribution (see graph 1).

Graph 1: Distribution
of the Last Digit of Reported Numbers for Precinct Turnout

Looking at the graph, anyone can see wide
variations from the expected result of 10% for each digit. In particular, we
find that '0' is overrepresented by 2%, '5' is overrepresented by 1% and '9' is
underrepresented by 2.4%. These differences hint at human manipulation, as
humans tend to prefer some numbers over others (see, for example, Bernd Beber and Alexandra Scacco, "What the Numbers Say: A Digit-Based Test for Election Fraud" (2012)). To test for manipulation mathematically, a chi-square analysis of
the actual results compared to the expected results of 10% per digit can show
whether the deviation is significant.
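The chi-square statistic itself is straightforward to compute. Here is a minimal sketch using made-up digit counts for 1,899 precincts (illustrative numbers, not the actual election figures):

```python
# Hypothetical counts of last digits 0-9 across 1,899 precincts
# (made-up numbers for illustration, not the real election data).
observed = [228, 190, 187, 185, 182, 209, 180, 184, 186, 168]
n = sum(observed)        # 1899 precincts
expected = n / 10.0      # 189.9 per digit under the uniform null hypothesis

# Pearson's chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# With 10 digit categories there are 9 degrees of freedom; the 5% critical
# value is 16.92, so a statistic above that rejects the uniform hypothesis.
significant = chi2 > 16.92
```

The same computation, applied to real precinct counts, is what yields the figures reported below.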

Conducting a chi-square analysis
comparing the actual outcome with the expected outcome of 10% per digit finds that it is
statistically improbable that the data are random (n=1899, chi-square value=23.4, p=.005; statistically significant). This implies that the data have been manipulated.

This test was repeated after dividing the data
between Yerevan and the rest of the country, with differing results. For
precincts within Yerevan, the chi-square value is low and it is plausible that the distribution results from randomness (n=467, chi-square value=9.85, p=.363; not statistically significant). For precincts outside Yerevan, the outcome is again statistically improbable,
implying manipulation (n=1432, chi-square value=19.7, p=.02; statistically significant).
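The reported p-values can be verified from the chi-square statistics alone. The sketch below recomputes them with a pure-Python survival function (a standard incomplete-gamma routine; `scipy.stats.chi2.sf` would do the same in one call):

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) of the chi-square distribution with df
    degrees of freedom, i.e. the regularized upper incomplete gamma
    function Q(df/2, x/2)."""
    a, x = df / 2.0, x / 2.0
    if x < a + 1.0:
        # Series expansion for the lower incomplete gamma; Q = 1 - P.
        term = total = 1.0 / a
        k = a
        while abs(term) > 1e-14 * abs(total):
            k += 1.0
            term *= x / k
            total += term
        return 1.0 - total * math.exp(-x + a * math.log(x) - math.lgamma(a))
    # Lentz continued fraction for Q(a, x), valid for x >= a + 1.
    tiny = 1e-300
    b = x + 1.0 - a
    c = 1.0 / tiny
    d = 1.0 / b
    h = d
    for i in range(1, 300):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        d = tiny if abs(d) < tiny else d
        c = b + an / c
        c = tiny if abs(c) < tiny else c
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < 1e-14:
            break
    return math.exp(-x + a * math.log(x) - math.lgamma(a)) * h

# The three reported results (9 degrees of freedom for 10 digit categories):
p_all = chi2_sf(23.4, 9)       # whole country, reported p = .005
p_yerevan = chi2_sf(9.85, 9)   # Yerevan only, reported p = .363
p_regions = chi2_sf(19.7, 9)   # outside Yerevan, reported p = .02
```

Each recomputed p-value agrees with the corresponding reported figure to rounding precision.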

*Conclusion*
Using the last-digit test, it is
statistically improbable that the data distribution is random, implying the
data have been manipulated. The last digit of the precinct turnout should have
an even distribution, but instead it has the statistically improbable
distribution shown in graph 1. This evidence of manipulation disappears when
looking only at Yerevan, but it reappears for precincts outside
Yerevan. This analysis does not prove that the official results are fake, but
it does show that it is improbable that the results occurred naturally and
likely that the turnout results were altered to some extent. Perhaps this first
look will encourage researchers to dig deeper and come up with further
findings.

## 11 comments:

Dear CRRC team,

I've tried to reproduce your calculations, and got different results. Here's the link to my spreadsheet. https://docs.google.com/spreadsheet/ccc?key=0AoKRwnlv59GSdEVWeUN6cXB3Mi1Eb2RuZTRUVWZpZkE&usp=sharing

Dear Ruben Muradyan,

Thank you for responding to our post. We’re very happy to have a dialogue with our readers regarding our work. We’re especially happy to have readers challenge our work and make sure that we keep a high caliber in our work.

I stepped through your work, and your methodology was correct. Your “Voters_turnout (no small numbers)” sheet correctly removes small voter turnouts that may skew the expected results, and column H correctly isolates the last digit of the voter turnout, which is then counted in your analysis.

The discrepancy between the results comes from the source of the data. We were working with the first data set that was available: the PDF linked in the blog post, time stamped with the results as of 8pm on 18 February. In contrast, you used either the final voting results or the results listed in the Armenian spreadsheet time stamped as of 6am on 19 February (those preliminary results have since been removed from the website). There are slight differences between the two. For example, precinct 1/12 has 985 votes in the final turnout, while the linked PDF has 984. The total difference between the two datasets is 2,834 votes, with some precincts gaining a few votes and a very few precincts losing votes. The difference is minor, but the last-digit test is sensitive to such changes.

Even with slightly different data, it’s possible that the conclusion will still hold. Policy Forum Armenia did a similar analysis and got a similar conclusion: the divergence from the expected result is not statistically significant in Yerevan or Gyumri, but the divergence is statistically significant in the regions of Armenia. It would be interesting if you did a chi-squared test to see if your final results could be explained randomly or whether the test suggests some alteration in the data as well.

Dear CRRC team,

Thank you very much for your response and clarifications. Let me divide my answer into two parts.

1. PFA research

PFA is a politically affiliated entity, and we have to be very critical of any research it provides, keeping possible bias in mind. The document you mention in your reply lacks information about the initial datasets and methodology, which raises a lot of questions. It lacks tables for its graphs, and the graphs have very distant gridlines, so it is impossible to read exact values off the results.

As for the Gaussian distribution of votes, IMHO it is not applicable to cases where free human will acts.

2. The dataset linked in your post and reply is a PRELIMINARY turnout report. It was created and published at ~21:00 on Feb 18, and, as you have figured out, it definitely contains some wrong data. The dataset I used contains the final, official result of the elections. I would like to draw your attention to the fact that the Beber & Scacco paper (and, I'm pretty sure, any other research) dealt with the final, not a preliminary, dataset.

Anyway, thank you very much for explanation.


Thank you for your constructive comments, I appreciate the dialogue.

PFA is undoubtedly a political organization with its own perspective. I also agree that their public statements are often overtly political. However, I appreciate the scientific approach they use in their reports. Their analysis of the 2012 parliamentary election is superb.

I did use the preliminary data, but that doesn't make it wrong. The test conducted shouldn't be affected by such a small change in votes. The exact percentages will change, but it's unlikely that the significance of the chi-square test (which was p=.005) will change noticeably. If different people apply this test to the preliminary and final data and reach the same conclusion, that adds to the significance.

Finally, more statistical tests have come out since this publication. Here are two more analyses in Russian and one in Armenian: http://romanik.livejournal.com/718556.html, http://abuzin.livejournal.com/114160.html, and http://husikghulyan.blogspot.com/2013/03/2013.html.

Perhaps they were good in 2012 (I hadn't followed and double-checked their work at that time), but their current (and preliminary) report is not scientific at all. I hope that they will provide all the necessary data (initial datasets, methodology, final tables, etc.) in its final version.

When it comes to Gaussian curves and the correlation between turnout and votes for the incumbent, these analyses mean nothing in terms of proving manipulation, mostly because an election is a process of free human will, not a mechanically repeating process where Gauss rules.

But they can HINT that there was manipulation. So if I were in RH's shoes, I would prepare standard complaints and file them at the most doubtful precincts, with a clear understanding that there may or may not have been manipulation.

You're absolutely right that these statistical analyses cannot prove manipulation. They also cannot reveal the rationale behind any possible manipulation, e.g. whether the manipulation is fraudulent or based on some alteration of the results (e.g. rounding vote counts or having reporting floors).

However, these tests can suggest that manipulation has occurred. As you point out, the strength of the result depends on how well the theoretical assumption fits reality. Precinct vote totals depend on the size of precincts and on voters' interest in voting, neither of which will fit a predictable curve. However, most of the literature suggests that the second and last digits of the voter turnout should follow certain patterns: Benford's law and an equal distribution, respectively. This theoretical support is the underpinning of the last-digit test, and, so far, there is general support for the test's effectiveness. As the field is researched further, it is quite possible that fundamental flaws will be found in the test and the test will have to be discarded.
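The claim that last digits of naturally occurring turnout counts should be roughly uniform can be checked by simulation. Below is a sketch under assumed parameters (not calibrated to the Armenian data), approximating each precinct's binomial turnout with a normal draw:

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the simulation is reproducible

# Simulate 10,000 hypothetical precincts: 300-2,000 eligible voters each,
# individual turnout probability between 0.4 and 0.8 (assumed parameters).
# The binomial turnout count is approximated by a normal draw with
# matching mean and variance.
digit_counts = Counter()
for _ in range(10000):
    eligible = random.randint(300, 2000)
    p = random.uniform(0.4, 0.8)
    mean = eligible * p
    sd = (eligible * p * (1 - p)) ** 0.5
    turnout = round(random.gauss(mean, sd))
    digit_counts[turnout % 10] += 1

# With a spread of many units at every precinct, the last digit comes out
# very close to uniform: each share should sit near 10%.
shares = [digit_counts[d] / 10000 for d in range(10)]
```

This is only a plausibility check for the uniform-null assumption, not evidence about any particular election.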

As for the political ramifications of the results, that is beyond our expertise.

That's why I've focused on the last-digit test. :)

One more addition to the preliminary vs. final results question. I've checked with the CEC, and found the following:

1. Preliminary turnout results are reported by phone

2. Voter counts for those results are calculated from estimates during election day, not from a precise manual count of voters

3. Final results are calculated from the count of ballots and the final protocols, with the signatures of all interested people (members of electoral commissions, and representatives)

So, perhaps, preliminary turnout results are useful for some kinds of research (like estimating the median voting time, or the overall turnout speed), but (because of human interaction during reporting) they are useless for last-digit analysis. Moreover, this process clearly explains the prevalence of "5" and "0" in your last-digit test.

Wow, that is interesting information. The PDF with the preliminary data says the data was "received by electronic manner," but if the data was received over the phone, that could easily introduce errors that impact this test and cast doubt on the conclusion. How did you hear that the different precincts reported the results by phone?

As for the second and third points, I agree with you that there is value in testing the final data, but I'll hold that there is value in testing the preliminary data too. The preliminary data is--for better or worse--more raw than the final data. Raw data, by definition, is not polished and might have more inaccuracies, but it can also be free of other things that can affect the data. This, of course, assumes the data are collected in ways that don't introduce systematic error, e.g. not over the phone.

I've asked a friend working there.

But I'm sure you can get this information by calling there and asking for it, saying that you are doing research at/for CRRC. If they do not respond by phone, they must reply to a written request within 10 or 14 days, if I remember the law correctly.
