What do corruption indices tell us?

Tatyana Deryugina
6 min read · Apr 5, 2024


This post is a little more academic than usual, but I think it should still be interesting to a broad audience. It’s motivated by persistent concerns about corruption in Ukraine and the widespread perception that corruption there is high, which prompted me to take a statistical dive into a prominent measure of corruption: the Corruption Perceptions Index (CPI), published by Transparency International.

Corruption, defined as “dishonest or fraudulent conduct by those in power”, is corrosive and surely counterproductive to growth, so measuring it and working to eliminate it is of paramount importance. Unfortunately, measuring “grand corruption” (corruption by those at higher levels of government, such as judges and elected officials) is incredibly hard. One can measure “petty corruption” (e.g., bribing a police officer to avoid a ticket) by surveying people about their actual experiences. But few people have direct experience with grand corruption, and their perceptions can paint a misleading picture of where corruption is headed. For example, perceptions of grand corruption could increase when government officials are arrested, simply because the corruption is brought to light. Similarly, greater media freedom could increase discussion of grand corruption in the popular press and thereby make people think that corruption has increased.

An alternative to asking ordinary people about their perceptions is to ask experts or relevant stakeholders who may be more likely to have direct experience or direct observation of grand corruption (e.g., businesspeople). Perhaps this gets you a more accurate measure per respondent, but the number of people whose assessments you’re relying on is necessarily much smaller. For example, some components of the CPI are based on the assessments of just two experts per country.

But today’s post is more quantitative than qualitative. I looked under the hood of the CPI to see how robust its rankings are. To summarize the CPI briefly, each country’s score is an average of scores from several underlying sources that rate corruption (you can download the full methodology at the bottom of this page). Since 2016, Transparency International has used 13 sources each year (in 2012–2015, it used 12). A country has to be covered by at least three sources to get a ranking. The total score ranges from 0 to 100, although, as the 2023 distribution below shows, no country is perfect.

Figure 1. CPI score distribution in 2023
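To make the aggregation concrete, here is a minimal sketch of the averaging step in Python. The table, column names, and numbers are hypothetical, not Transparency International’s actual data or code; the only real ingredients are the two rules just described: average whatever sources a country has (already placed on the common scale, more on that below), and require at least three of them.

```python
import numpy as np
import pandas as pd

# Hypothetical wide table: one row per country, one column per source (already rescaled).
# NaN means the source does not cover that country.
scores = pd.DataFrame(
    {"EIU": [37.0, np.nan, 55.0], "GICRR": [35.0, 47.0, 59.0], "WJP": [40.0, 52.0, np.nan]},
    index=["Country A", "Country B", "Country C"],
)

n_sources = scores.notna().sum(axis=1)                 # sources available for each country
cpi_score = scores.mean(axis=1).where(n_sources >= 3)  # average, but require at least 3 sources
print(cpi_score.round(1))
```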

Not all sources report on the same scale, of course, so Transparency International rescales the scores so that the standardized dataset has a mean of 45 and a standard deviation of 20. But the raw scales of some CPI components are very coarse. For example, the Economist Intelligence Unit (EIU) had just 5 unique values in 2023 (rescaled to 20, 37, 55, 72, and 90), while Global Insights Country Risk Ratings (GICRR) had 7 (rescaled to 10, 22, 35, 47, 59, 71, and 83). GICRR is also the most comprehensive database, covering all 180 countries included in the CPI (EIU covers 131). This made me wonder how much sway such coarse sources have over the ranking as a whole: moving up or down a notch on the coarse EIU scale surely makes a bigger difference than a one-notch change on a scale with 20 possible values.
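The rescaling itself is just a standardization followed by a shift and stretch. Here is a sketch of the idea (the actual methodology standardizes against baseline-year parameters and imputes some missing data, which I’m ignoring here); the raw scores are made up:

```python
import numpy as np

def rescale(raw):
    """Map a source's raw scores onto a common scale with mean 45 and standard deviation 20.

    Simplified sketch: the real CPI standardizes against baseline-year parameters
    and handles missing data, which is ignored here.
    """
    raw = np.asarray(raw, dtype=float)
    z = (raw - np.nanmean(raw)) / np.nanstd(raw)  # standardize to mean 0, sd 1
    return np.clip(45 + 20 * z, 0, 100)           # shift/stretch and keep within 0-100

# A raw scale with only five distinct values still has only five distinct values after rescaling
print(np.unique(rescale([1, 1, 2, 2, 3, 3, 4, 4, 5, 5]).round()))
```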

To probe the robustness of these rankings, I performed a set of simple counterfactual simulations: (1) take one country at a time and one CPI component at a time, and improve that country’s score on that component by one notch (e.g., moving from 20 to 37 on the EIU scale), and (2) recalculate its overall CPI score and rank. In principle, this yields 13 sources × 180 countries × 12 years, or more than 28,000 combinations, but of course not every country is covered by every source. Additionally, I skipped cases where a country already has the highest possible score.*
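In code, the simulation boils down to a nested loop. The following is a sketch rather than my actual script: `scores` is a country-by-source table of rescaled values (like the hypothetical one above), and `notches` maps each source to its possible rescaled values in ascending order. A negative rank change means the country moves up the ranking.

```python
import pandas as pd

def cpi_rank(scores):
    """Rank countries by their average score across available sources (rank 1 = least corrupt)."""
    return scores.mean(axis=1).rank(ascending=False, method="min")

def counterfactual_rank_changes(scores, notches):
    """Bump each (country, source) score up by one notch and record the change in the country's rank."""
    base_rank = cpi_rank(scores)
    results = []
    for src, values in notches.items():
        for country, current in scores[src].dropna().items():
            pos = values.index(current)
            if pos == len(values) - 1:   # already at the top of this source's scale: skip
                continue
            bumped = scores.copy()
            bumped.loc[country, src] = values[pos + 1]
            results.append({
                "country": country,
                "source": src,
                "rank_change": cpi_rank(bumped)[country] - base_rank[country],
            })
    return pd.DataFrame(results)

# Usage with the hypothetical table from above and the coarse rescaled scales quoted in the text
notches = {"EIU": [20, 37, 55, 72, 90], "GICRR": [10, 22, 35, 47, 59, 71, 83]}
changes = counterfactual_rank_changes(scores, notches)
```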

The overall distribution of change in the rankings across 10,935 valid counterfactual scenarios is shown below. In slightly more than a quarter of the cases, the country’s ranking doesn’t change at all, which is good. In more than half of the cases, it doesn’t change by more than 2, which in the grand scheme of things is also fine. But there is a clear thick tail: about 25% of the scenarios exhibit changes of 5 or more, and in 7% of the cases, the ranking changes by 10 or more (recall that 180 countries are covered by the CPI).
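Tabulating those shares is straightforward once the simulated changes are in hand; continuing with the hypothetical `changes` table from the sketch above:

```python
# Distribution of the absolute change in ranking across counterfactual scenarios
moves = changes["rank_change"].abs()
print((moves == 0).mean())    # share of scenarios with no change in rank
print((moves <= 2).mean())    # share changing by at most 2 places
print((moves >= 5).mean())    # share changing by 5 or more places
print((moves >= 10).mean())   # share changing by 10 or more places
```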

What’s driving the tail? As you may suspect, it is the components with few unique values that have been stretched to fit the 0–100 scale. Below is the distribution for the Economist Intelligence Unit counterfactuals: the country’s rank changes in almost every case, and the average change is -7 (i.e., the country moves up 7 places on average). GICRR (next figure) looks a little better than EIU, but still pretty bad: the average change in ranking is more than 5 places.

Figure 2. Counterfactuals for the Economist Intelligence Unit ratings
Figure 3. Counterfactuals for the Global Insights Country Risk Ratings
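Identifying which components drive the tail is a simple group-by on the same hypothetical `changes` table:

```python
# Average rank change by source (more negative = the source moves countries around more)
print(changes.groupby("source")["rank_change"].agg(["mean", "count"]).sort_values("mean"))
```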

My next step was to see how much this coarseness matters for Ukraine’s ranking. I took the EIU and the GICRR out of the index entirely and recalculated all countries’ rankings without them. In principle, this could help or hurt any given country. In Ukraine’s case, using only the remaining sources would have made its ranking slightly worse in 2013–2015. But starting in 2016, these components have dragged Ukraine’s CPI ranking down. In 2023, Ukraine ranked 104th (a huge improvement from 144th in 2012, by the way!) on the actual CPI scale. Without the EIU and the GICRR, it would have been ranked 91st. This is likely because these coarse indices are also much “stickier”: if a source has only a 5- or 7-point scale, it’s a much bigger deal to move a country up or down a notch on that scale.

Figure 4. Actual versus counterfactual rank of Ukraine if EIU and GICRR were not used in CPI calculations.
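The leave-out exercise is equally simple to sketch, again using the hypothetical `scores` table and `cpi_rank` function from above (the real calculation would also re-check that each country still has at least three remaining sources, which I omit here):

```python
# Recompute all ranks with the two coarse sources excluded
rank_with = cpi_rank(scores)
rank_without = cpi_rank(scores.drop(columns=["EIU", "GICRR"]))
shift = rank_without - rank_with   # negative = the country moves up once EIU and GICRR are dropped
print(shift.sort_values())
```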

An obvious takeaway from this exercise is that even though it’s tempting to use more data to get a more “comprehensive” picture of the situation, in this case these coarse components are probably harming rather than helping our ability to track how corruption evolves over time.

The last graph I will leave you with is the CPI ranking of Russia versus Ukraine. Many Westerners are used to grouping all Eastern European countries together in their minds, undoubtedly because of the USSR. But even if Russia and Ukraine were at some point part of the same country, their paths have diverged significantly since the fall of the USSR. Since 2012, Ukraine has made great strides in fighting corruption. Meanwhile, Russia has stagnated and has even moved backward several times over this period: since 2020, it has dropped from 129th to 141st, which is worse than its ranking in 2012. Defeating Russia, therefore, would not just be a victory for democracy and a peaceful world order, but an important step in the fight against corruption. By contrast, letting Russia’s influence grow through military victories would allow it to spread its dishonest practices further.

Figure 5. CPI ranking of Ukraine and Russia.

* A methodological note. I couldn’t perfectly replicate the CPI rankings from their components because the publicly available CPI components are rounded. I therefore compared my counterfactual rankings (which are also affected by the unavailability of unrounded components) to rankings I generated myself based on the rounded CPI components. In most cases, these rankings were identical to the original CPI rankings, so the conclusions of the exercise are highly unlikely to be affected by the unavailability of unrounded values.

Originally published at https://ukraineinsights.substack.com.
