
Evaluations are critical to understanding whether the efforts of humanitarian organizations are successful. Are they contributing to alleviating the suffering of the most vulnerable people in humanitarian crises? And are they doing so in the best possible way? These questions, which are only the tip of the iceberg in any evaluation, are key to learning how to improve interventions and to demonstrating the value of humanitarian action. They are also essential for accountability: both to the people and institutions that support humanitarian interventions, and to the crisis-affected people and communities who play a key, active role in responding to them.
An evaluation, which together with monitoring is a key component of managing humanitarian action, must provide credible and useful information for organizational learning. It should produce lessons learned and help incorporate them into future decision-making processes. However, achieving these objectives in the midst of humanitarian emergencies is particularly difficult. Where to start? By framing everything properly with sound criteria for evaluating humanitarian action.
First of all: What do we want to evaluate?
Over the last few decades, much progress has been made in systematizing evaluations of humanitarian action. The first advances were made in the 1990s, driven by the commitment and will to correct the mistakes and overcome the weaknesses of the humanitarian efforts of the time. Since then, and building on the initiative of the Development Assistance Committee of the Organisation for Economic Co-operation and Development (OECD-DAC), a series of criteria were proposed for the evaluation of international development cooperation and humanitarian aid. These criteria have continued to evolve, revised and refined with contributions from numerous organizations, such as ALNAP.
The use of broadly standard criteria for the evaluation of humanitarian action helps to systematize these exercises, identify common weaknesses and problems, and generate knowledge based on experience.
Effectiveness
Effectiveness is possibly the most common evaluation criterion. It measures the extent to which the intervention achieves its objectives and outcomes (both intermediate and final), in terms of the change it is expected to produce. It is also closely linked to the ability to achieve these objectives in a timely manner, given the urgent nature of emergencies. There are, however, several challenges in assessing this criterion. One is that humanitarian contexts are often highly changeable. Another is that, when community participation has not been given due importance, the objectives set may not really respond to communities' needs and priorities. We may fail to evaluate effectiveness meaningfully if we focus on measuring the achievement of irrelevant objectives.
Relevance
Relevance measures whether the intervention is doing the right things, and to what extent its design and objectives respond to the needs of the people affected by the crisis and the priorities of other actors. Relevance is a fundamental criterion, often assessed alongside effectiveness because of the close relationship between the two. Power differentials can bias its assessment if community participation is not integrated and if localization efforts are not made to empower local stakeholders in determining what should be considered relevant.
Efficiency
Efficiency measures the relationship between the results obtained and the resources used to achieve them. Its evaluation also allows us to know to what extent the resources are being used economically and appropriately. By resources we refer to material elements, economic funds, personnel and time. However, the social cost and the environmental impact of the actions must also be considered. The evaluation of this criterion sometimes requires analyzing whether alternative approaches to achieving an outcome have been studied, and whether or not the most efficient one has been chosen.
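The comparison of alternative approaches mentioned above can be illustrated with a minimal sketch. All figures, approach names and the cost-per-household metric here are invented for illustration; real efficiency analysis would also weigh quality, equity, social cost and environmental impact, not just unit cost.

```python
# Illustrative only: comparing the cost-efficiency of two hypothetical
# delivery approaches that pursue the same outcome. All numbers are invented.

approaches = {
    "direct distribution": {"cost_usd": 120_000, "households_reached": 1_500},
    "cash transfers":      {"cost_usd": 95_000,  "households_reached": 1_400},
}

# Cost per household reached, the simplest unit-cost view of efficiency.
results = {
    name: data["cost_usd"] / data["households_reached"]
    for name, data in approaches.items()
}

for name, cost in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f} per household")
```

A lower unit cost alone does not settle the question: as the text notes, efficiency must be read together with coverage, quality and equity.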
Impact
Impact assesses the extent to which the intervention produces, or is expected to produce, broad, transformative positive effects with long-term consequences. However, measuring impact is particularly difficult in humanitarian contexts, due to short implementation timeframes and the difficulty of establishing cause and effect relationships. For this reason, this criterion is sometimes beyond the scope of many evaluations of humanitarian action.
Coverage
Coverage is a relatively common criterion for evaluating humanitarian action, although it is not among the fundamental criteria proposed by the OECD-DAC. It is nonetheless a central aspect related to social justice and the commitment to leave no one behind. Assessing this criterion requires consideration of the geographic scope of the intervention, the prioritization of the most vulnerable people affected by the humanitarian crisis, and the proportionality of the response to their needs. It is also essential to understand that coverage should not be pursued at the expense of quality or equity.
Coherence
Coherence is a complex and often difficult criterion to understand or integrate into evaluations. It was added to the master list in 2019, and focuses on how an intervention aligns with other initiatives, policies and commitments at the local, national and international levels, and whether it generates synergies or conflicts with them. Within the coherence analysis, the consistency of the intervention with the commitments and programs of the same organization or sector is also assessed, as well as the extent to which it helps to complement the efforts of other actors. This criterion makes it possible, for example, to assess whether the intervention respects international agreements on human rights, gender equity and other cross-cutting priorities. Its analysis can be key in contexts where the lack of coordination among actors reduces the effectiveness of the response.
Sustainability and connectedness
Sustainability assesses whether the benefits of the intervention continue or are likely to continue over time. It is closely related to connectedness, which refers to the need to ensure that short-term humanitarian interventions are implemented with interconnected, longer-term issues in mind, such as development, resilience, disaster risk reduction and peacebuilding. All of this aligns nicely with the humanitarian-development-peace nexus.
That's not all: cross-cutting criteria for evaluating humanitarian action
Beyond the fundamental or most widespread evaluation criteria, there are many other dimensions to consider when assessing each of them. Which of these receive more attention depends, of course, on the objective of the evaluation, the nature of the humanitarian action being evaluated, and the identity and priorities of the organization in question.
These cross-cutting evaluation criteria may include inclusion, equity, diversity, gender, accountability to affected populations, participation, communication with communities, protection, or adaptive management, understood as the capacity to adapt to changes in the context, new information and the results obtained.
How to identify the key questions to evaluate these criteria?
The evaluation criteria are a great framework for work and reflection, but the key to a good evaluation lies in the questions that we will try to answer with it. The identification of these questions, adapted to the context and to the priorities and objectives of our evaluation, should always come first. Once the most important ones have been identified, however, they can be aligned with the standard evaluation criteria, which will allow us to identify additional questions of interest. As some guides suggest: "it is the questions that matter, not the criteria". Moreover, rarely can or should all evaluation criteria be used.
In many cases, it will be advisable to focus only on a handful of questions considered a priority because they are action-oriented and target precisely the area or aspect in which we want to improve performance. This may offer better results than trying to obtain a general overview which, because it is too broad, ends up being generic and not very useful.
Often, this will mean making a selection of priority questions that necessarily leaves others out. Useful prioritization filters include questions such as the following: Would the answer to this question have immediate application to our ongoing programs and projects? Can an evaluation (and only an evaluation) effectively answer this question? How fundamental is this question to our mandate or mission? What proportion of the most vulnerable target population could benefit from the answer to this question? Can the answer to this question substantially improve the quality of our intervention? And help us reduce its costs without sacrificing quality or equity?
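One simple way to operationalize this kind of prioritization is a scoring matrix over the filters listed above. The sketch below is purely illustrative: the factor names, the 0-3 scoring scale, the equal weighting and the candidate questions are all invented assumptions, not a prescribed method.

```python
# Illustrative only: ranking candidate evaluation questions by scoring
# each one against prioritization factors. Scores and weights are invented.

factors = [
    "immediate_application",  # usable in ongoing programs?
    "evaluation_needed",      # can only an evaluation answer it?
    "mission_fit",            # how central to our mandate?
    "population_benefit",     # share of vulnerable population benefiting
    "quality_gain",           # potential to improve intervention quality
    "cost_reduction",         # potential to cut costs without losing quality
]

# Each candidate question scored 0-3 on every factor by the evaluation team.
questions = {
    "Did the response reach the most vulnerable households?": [3, 2, 3, 3, 2, 1],
    "Was the logistics pipeline cost-effective?":             [2, 1, 2, 1, 1, 3],
}

ranked = sorted(questions.items(), key=lambda kv: sum(kv[1]), reverse=True)
for question, scores in ranked:
    print(f"{sum(scores):>2}  {question}")
```

In practice a team might weight the factors unequally (e.g., mission fit above cost reduction), but the point is the same: make the trade-offs behind the short list of questions explicit and discussable.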
External links
- ALNAP, 2023. Review of the OECD DAC criteria for evaluating humanitarian action: a mapping of literature, guidance and practice.
- OECD, 2023. Applying a human rights and gender equality lens to the OECD evaluation criteria.
- OECD, 2021. Applying evaluation criteria thoughtfully.
- OECD, 2019. Better criteria for better evaluation.
- ALNAP, 2016. Evaluation of humanitarian action (EHA) guide.
- ALNAP, 2006. Evaluating humanitarian action using the OECD-DAC criteria: An ALNAP guide for humanitarian agencies.