Monitoring, evaluation and learning
- Page updated on 18 April 2025

Although monitoring runs in parallel with project implementation, it is presented here alongside evaluation and learning. Together, monitoring, evaluation and learning encompass the processes of collecting, analyzing and using information. They help us learn about and report on the progress of activities, the achievement of results and the satisfaction of the population. They are also fundamental tools for making better decisions, learning, producing knowledge and sharing it.
What do we measure with monitoring?
On the one hand, monitoring consists of the continuous collection and analysis of information on the progress and execution of activities and on the progressive achievement of the results set as objectives. Monitoring allows us to check how the project logic is playing out in practice.
To achieve this, all output and outcome indicators to be measured must be identified at the start of the project and included in a complete monitoring plan. This plan establishes the tools, mechanisms and processes needed to measure them and compare them with the baseline situation. Tools such as questionnaires, digital applications, interview or focus group scripts, databases, dashboards for visualizing the information, and templates for data reporting may be necessary. The analysis of the information collected should allow us to readjust activities or work plans if it becomes evident that what was initially designed does not adequately contribute to the achievement of results.
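As a minimal sketch, a monitoring plan can be thought of as a table of indicators, each with its level, baseline, target, measurement frequency and collection tool. All names and figures below are illustrative, not from any real project:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One row of a hypothetical monitoring plan."""
    name: str          # what is measured
    level: str         # "output" or "outcome"
    baseline: float    # value at the start of the project
    target: float      # value we aim to reach
    frequency: str     # when it is measured
    tool: str          # how the data are collected

plan = [
    Indicator("People receiving family planning education",
              "output", baseline=0, target=500,
              frequency="monthly", tool="attendance sheets"),
    Indicator("% of infants exclusively breastfed",
              "outcome", baseline=40.0, target=60.0,
              frequency="baseline/endline", tool="household survey"),
]

def progress(ind: Indicator, current: float) -> float:
    """Share of the baseline-to-target distance covered so far."""
    return (current - ind.baseline) / (ind.target - ind.baseline)

# If 350 people have been reached against a target of 500:
print(f"{plan[0].name}: {progress(plan[0], 350):.0%} of target")
# → People receiving family planning education: 70% of target
```

Comparing each measurement against the baseline and target in this way is what lets the analysis flag activities that are not contributing to results as expected.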
On the other hand, monitoring should also serve to measure the satisfaction of the people whose needs we respond to with the projects or the compliance with humanitarian standards. For this purpose, data are collected and analyzed on satisfaction with the goods and services received, on the perceived quality of the interventions and their adequacy to the needs, or on possible complaints and claims related to discrimination, abuses or other inappropriate practices that pose risks or harm to them.
Proper information management will also allow us to continue to monitor the needs of the population and changes in the context beyond the initial analysis prior to the start of the project. This monitoring will be useful to help us update our risk analysis, adapt activities, correct procedures and ways of working, and even identify new priorities and areas of intervention for our actions or future projects.
Indicators: what to measure and how to measure it
What is a SMART indicator?
An indicator is a parameter that we use to measure a result of our project, whether it is an output, an outcome, or the long-term impact. In addition, a good indicator must be SMART (specific, measurable, achievable, relevant and time-bound):
- Specific means that it must have a concrete and clearly defined purpose. For example, "number of people receiving family planning education" is a specific indicator for a single output.
- Measurable means that it must have a usable, predefined way of being calculated. For example, an indicator such as "% of infants who have received only breast milk during the previous day and night" is perfectly measurable. However, it requires asking many questions about possible foods in a questionnaire, which can lead to errors. There are also indicators such as "coverage of essential health services", which are actually indices composed of many other indicators; therefore, not just any actor can measure them.
- Achievable means that the indicator and its target must be realistic. This is difficult if there are no standard references, or previous experience performing similar activities in similar contexts.
- Relevant means that the indicator must reflect the phenomenon we are trying to measure. For example, we often use the "number of beneficiaries reached", but it is really only relevant if it is always calculated in the same way and in relation to activities of the same nature. What is the point of comparing the number of people receiving treatment for a chronic disease with the number of people receiving a hygiene kit? In other words, an indicator may be relevant for one use but not for others.
- Time-bound means that the indicator should be measured at specific times, for example on a monthly basis or at the end of a given year.
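The measurability point above can be made concrete with the exclusive-breastfeeding indicator: each survey record lists the foods an infant received in the previous day and night, and the indicator is the share of records containing only breast milk. The records below are invented for illustration:

```python
# Hypothetical survey records: foods each infant received in the
# previous 24 hours (not data from any real survey).
records = [
    {"breast_milk"},
    {"breast_milk", "water"},       # water disqualifies exclusivity
    {"breast_milk"},
    {"formula", "breast_milk"},     # so does formula
]

def exclusive_breastfeeding_rate(records):
    """% of infants who received only breast milk in the last 24 h."""
    exclusive = sum(1 for foods in records if foods == {"breast_milk"})
    return 100 * exclusive / len(records)

print(f"{exclusive_breastfeeding_rate(records):.0f}%")  # → 50%
```

The calculation itself is trivial; the difficulty lies upstream, in asking about every possible food without omissions, which is where the errors mentioned above creep in.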
The use of indicators is often a collaborative effort.
In practice, it is very common to use pre-designed, more or less standard indicators from the Sphere Handbook, from humanitarian cluster guides, from donors, or from each organization. This means that it is not necessary to spend too much time designing new indicators, that SMART indicators that have already proven their worth can be used more or less systematically, and that the results are comparable over time, between actors and between geographical areas. However, there is no single standard set of indicators but many, and they will not always be relevant to our interventions or easy to measure.
In other cases, there are indicators that only the health system has the capacity to produce, because they are calculated with the aggregation of data from many health units, or because they require a population survey that the Ministry of Health must, at least, authorize. Therefore, it is common for organizations to support the local health information system, seeking synergies.
Measuring and using indicators is not an easy task.
Working with outcome indicators is particularly difficult. For an indicator to reflect a change or result achieved, it is not enough to count the outputs of an activity; the consequences of those outputs must be counted, which is in itself more difficult, and also put in relation to a reference population so that initial and final values can be compared. This not only complicates the calculations but also exposes them to enormous distortions if there is no accurate information on the reference population, a common problem in humanitarian contexts, where reference population data sometimes come from old censuses or mere estimates.
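A small sketch shows how sensitive such indicators are to the denominator. The same count of people reached yields very different coverage figures depending on which population estimate is used (all numbers are invented):

```python
def coverage(people_reached: int, reference_population: int) -> float:
    """Outcome indicator: % of the reference population reached."""
    return 100 * people_reached / reference_population

reached = 12_000  # hypothetical count of people reached

# The same numerator against two plausible denominators:
for source, population in [("old census", 40_000),
                           ("recent estimate", 60_000)]:
    print(f"{source}: {coverage(reached, population):.0f}% coverage")
# → old census: 30% coverage
# → recent estimate: 20% coverage
```

A ten-point swing in the reported result, driven entirely by the choice of population figure, is exactly the kind of distortion described above.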
It should be noted that more is not always better. Although it is often said that what is not measured does not exist, it is also true that when we try to collect more information we sometimes end up with more data that are less valid and reliable. Moreover, even when information is collected correctly, we are not always able to analyze it well enough to understand the effects of an intervention, or to use the results of that analysis to make better decisions.
Beyond the quantitative: qualitative methods and in-depth evaluations
Having a figure allows us to make comparisons, but it is often not enough to understand why or how that figure came about. The rigorous use of qualitative methodologies, such as individual interviews or focus group sessions, is therefore key to understanding the results of a survey on behaviors and practices, a health service user satisfaction indicator, or the barriers people encounter in accessing humanitarian assistance, for example. In addition, it is usually not possible to analyze a project's impact indicators through monitoring alone, as they refer to very distant results that can be greatly affected by many other variables, such as the socioeconomic situation of the population or the actions of other humanitarian organizations operating in the same geographical area.
In-depth evaluations and studies, either during the project or after its completion, allow us to understand many aspects that escape routine monitoring. Evaluations are usually conducted by people outside the organization who have not been involved in the design or implementation of the project, and often use combinations of various methods of information gathering and analysis to shed light on the relevance, efficiency, effectiveness, impact and sustainability of the project: what works, how and why?
The challenge of learning and extracting lessons
If obtaining information and analyzing it is difficult enough, using it to make things better is even more so. The goal should never be to have data, but to use it for accountability and to identify how to improve our future humanitarian interventions.
Humanitarian organizations often devote significant efforts to learning from their actions through the exchange of experiences among their professionals and the extraction of lessons learned. All this is reflected in guidelines and recommendations, although these are not always adequately based on scientific evidence obtained with sufficient rigor to be replicable or applicable in other contexts.
The use of scientific evidence in humanitarian action requires not only a strong investment in and commitment to operational research, but also the continuous updating and development of the skills of organizations' technical staff, aspects that are often relegated to the background behind so many other priorities that always seem greater and more urgent.
External links
- People in Need. Indikit: Guidance on SMART indicators for relief and development projects.