Towards an aid quality index

Donors regularly make grandiose claims and promises, but measuring whether they live up to them requires clear indicators of aid quality.

A number of academics and institutions are working to create such indicators now that more development aid data is being made public.

“We’re at the start of the attempt to figure out what aspects of aid quality we can metricate and what makes sense depending on what we can get data on,” Karin Christiansen, director of Publish What You Fund (PWYF), told IRIN. “No one’s done this with the Paris data [aid effectiveness commitments made by development donors in the 2005 Paris Declaration] before, so it’s very exciting.”

Indices measuring different aspects of development aid already exist, but none yet focuses squarely on accountability, analysts say.

The Center for Global Development’s (CGD) Commitment to Development Index, created by mathematician David Roodman, is a good start, rating 22 countries on how much they help recipients build wealth, good government and security.

In 2009 Sweden, Denmark, the Netherlands, Norway and New Zealand ranked highest, while South Korea, Japan, Switzerland and Greece ranked lowest.

Hitherto, major donors have evaluated each other’s performance through the peer review system of the Development Assistance Committee (DAC), part of the Organisation for Economic Co-operation and Development. But analysts need to go further, says PWYF’s Karin Christiansen.

Independent index

The initial step in creating a comprehensive, independent index, said analysts at a recent aid transparency conference in Oxford, is to pin the analysis on promises donors have already made - in this case, the 2005 Paris Declaration and the 2008 Accra Agenda for Action, agreed in Ghana. In these documents, donors agreed to ensure developing countries exercised effective leadership of development policies; to base support on countries’ national development strategies; to harmonize their aid to reduce transaction costs; and to make it more transparent.

World Bank economist Stephen Knack, the Brookings Institution’s Homi Kharas, University of Birmingham’s Pranay Kumar Sinha, and New York University’s Claudia Williamson used these promises as a starting point in indices they are working on.

Kharas measures donors against four indices: maximizing aid impact; reducing the burden of aid on recipients; fostering [local] institutions; and transparency - with indicators including how much of a budget goes to administrative costs, how much aid is tied or untied, and how well aid is reported.
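An index of this kind is, at heart, a weighted average of normalized sub-indicators. The sketch below is a minimal illustration of that mechanic, not Kharas’s actual method: the donor names, indicator values and equal weights are all invented for the example.

```python
# Hypothetical sketch of a composite aid quality score built from
# sub-indicators, in the spirit of the four-dimension approach above.
# All donors, figures and weights are invented for illustration.

def normalize(values):
    """Rescale raw indicator values to the 0-1 range (min-max)."""
    lo, hi = min(values.values()), max(values.values())
    return {d: (v - lo) / (hi - lo) for d, v in values.items()}

# Invented raw indicators: administrative cost share (lower is better),
# share of untied aid and reporting completeness (higher is better).
admin_cost = {"Donor A": 0.12, "Donor B": 0.05, "Donor C": 0.09}
untied_share = {"Donor A": 0.70, "Donor B": 0.95, "Donor C": 0.60}
reporting = {"Donor A": 0.80, "Donor B": 0.65, "Donor C": 0.90}

def composite(weights):
    """Weighted average of normalized indicators; admin cost is inverted
    so that a higher composite score always means better performance."""
    n_admin = {d: 1 - v for d, v in normalize(admin_cost).items()}
    n_untied = normalize(untied_share)
    n_report = normalize(reporting)
    return {
        d: weights[0] * n_admin[d]
           + weights[1] * n_untied[d]
           + weights[2] * n_report[d]
        for d in admin_cost
    }

scores = composite((1/3, 1/3, 1/3))  # equal weights for the sketch
ranking = sorted(scores, key=scores.get, reverse=True)
```

With these made-up numbers, Donor B tops the ranking on the strength of its low overheads and untied aid, despite the weakest reporting - exactly the kind of trade-off a composite index has to aggregate.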

Preliminary results show that when it comes to reducing the burden of aid on recipients, the USA fares badly, coming 30th out of 32; the European Commission (EC) is fairly low at 23rd, and the UK better at seventh. But the USA ranks higher for transparency, at 12th, while the EC ranks 18th and the UK seventh again.

Stephen Knack’s preliminary research adds new categories to the mix, such as the degree to which aid is needs-based, and shines the spotlight on multilaterals as well as bilateral agencies.

Williamson, meanwhile, evaluates donors along best practice standards of transparency, overhead costs, harmonization and coordination, and delivery to effective channels, concluding that aid is still too fragmented, and overhead costs vary widely across different donors. Canada, France, the Netherlands, the UK and the USA rank highest on transparency, based on 2008 aid reporting figures; while UN agencies and the Global Fund rank among the lowest.

Taking stock

PWYF is currently taking stock of all of the above, as well as other accountability measures already out there, and will add some of its own to create a comprehensive aid quality index, ready for a test-launch at the end of 2010, according to Christiansen.

Variables PWYF will include are how donors report their aid to databases such as the DAC database; how accessible their aid information is; and whether donors are members of the International Aid Transparency Initiative (IATI). PWYF will also draw on data from Debt Relief International on recipient governments’ assessments of how transparent donors are about giving forward-planning information, and from the EU Aid Watch aid transparency survey, in which civil society organizations across the 27 EU states assess the transparency of their governments.

Stumbling blocks

Evaluating donors may become easier as more information becomes available, but the weighting given to each variable can skew the statistics, points out Sinha in his preliminary study entitled Can a Useful Aid Effectiveness Index be Developed Using the Paris Declaration Framework?
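Sinha’s point about weighting can be made concrete with a toy example. In the sketch below - invented donors, invented scores, not drawn from his study - the same two indicator values produce opposite rankings depending solely on which indicator the index weights more heavily.

```python
# Minimal illustration (invented numbers) of how the choice of weights
# can reverse a donor ranking built from identical underlying data.

# Two donors scored 0-1 on two Paris-style indicators.
indicators = {
    "Donor X": {"alignment": 0.9, "transparency": 0.4},
    "Donor Y": {"alignment": 0.5, "transparency": 0.8},
}

def rank(weights):
    """Rank donors by the weighted sum of their indicator scores."""
    score = {
        d: sum(weights[k] * v for k, v in vals.items())
        for d, vals in indicators.items()
    }
    return sorted(score, key=score.get, reverse=True)

# Weight alignment heavily and Donor X leads; weight transparency
# heavily and Donor Y leads - same data, opposite conclusions.
rank_a = rank({"alignment": 0.7, "transparency": 0.3})
rank_b = rank({"alignment": 0.3, "transparency": 0.7})
```

Nothing about the donors changed between the two runs; only the analyst’s weighting scheme did, which is why the choice of weights can skew the statistics an index reports.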

Furthermore, the lack of standardized definitions for many aspects of aid, including “effective aid” and “transparency”, continues to make it difficult to come up with accurate measurements of aid quality.

Choosing the right indicator to measure against is also tough, Christiansen points out. For instance, “setting up parallel institutions and processes” is often seen as duplication, but sometimes - for instance in fragile states - donors have to set up new institutions where existing ones are not working, she said.

In the end, for these indices to have the most impact, analysts should be given the biggest say in evaluating aid quality on the ground, Christiansen stresses.

“When it comes to measuring aid quality and transparency, donors respond when it is close to their experience, so the best impact we can have is when we measure it at the country level - then we will see donor behaviour start to change based on the results.”