Monitoring, Evaluation, Accountability, & Learning (MEAL)

  • Monitoring, Evaluation, Accountability, and Learning is the data-gathering system of a development project. In this holistic system, implementation is tracked, deeper project impact is measured, stakeholders receive the data they need, insights are reflected on and put to use, and the project adapts based on this shared evidence (PM4NGOs 2020, 143; Stetson et al. 2007, 84).

  • Monitoring is the process of continually collecting and documenting data on day-to-day project implementation to determine whether set targets are being reached. It mostly involves tracking the indicators set out in the logical framework and determining the degree to which outputs are being achieved. For example, in a project built around capacity building for a certain population, classic monitoring data would include the number of trainings delivered or attendance at those trainings (see the illustrative sketch after this list). This data on the actual implementation of the project serves as the foundation for the rest of the MEAL system (PM4NGOs 2020, 143).

  • Evaluation involves gathering data about the project that is more in-depth than monitoring data. It looks at the impact of activities on the lives of beneficiaries and at the sustainability of the project, and can include surveying participants to understand how the project is affecting their lives (PM4NGOs 2020, 145; Stetson et al. 2007, 84).

  • Accountability is about making sure the insights gathered from the monitoring and evaluation system get to the right people. The project manager has a vital role in making sure what is being discovered through monitoring efforts is effectively transferred to all stakeholders, so every level of project staff can be held accountable for improving implementation practices as needed (PM4NGOs 2020, 145).

  • Our favorite word here at the ALTA toolkit! Learning is what makes the data from monitoring and evaluation actionable. Embedding learning in the overall monitoring and evaluation system of a project requires that stakeholders take time to reflect on how the project is advancing, identify the issues and delays that matter, and determine how to adapt accordingly (PM4NGOs 2020, 145).
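To make the monitoring piece concrete, here is a minimal sketch of how output indicators from a logical framework might be tracked against their targets. Everything in it, the `Indicator` structure, the indicator names, and the numbers, is a hypothetical example invented for illustration; it is not drawn from the PM4NGOs guide or from any particular M&E tool.

```python
# A minimal, illustrative sketch of indicator tracking against logframe
# targets. All names and numbers are hypothetical examples, not data from
# the PM4NGOs guide or the Wildlife Asia project.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str      # an output indicator from the logical framework
    target: float  # the target set for this indicator
    actual: float  # what monitoring data shows to date

    def progress(self) -> float:
        """Share of the target reached so far."""
        return self.actual / self.target

# Hypothetical capacity-building indicators, echoing the training example above.
indicators = [
    Indicator("Trainings delivered", target=24, actual=18),
    Indicator("Average attendance per training", target=30, actual=22),
]

for ind in indicators:
    print(f"{ind.name}: {ind.actual:g}/{ind.target:g} ({ind.progress():.0%} of target)")
```

In practice this tracking usually lives in a spreadsheet or an M&E platform rather than in code; the point is simply that each output indicator carries a target from the logframe and a running actual, and monitoring is the discipline of keeping the actuals current and comparing them against the targets.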

A Case Study in How to Implement MEAL in Practice: USAID’s Wildlife Asia Project

Source: Ziegler, Jessica. 2020. “USAID Wildlife Asia as a Case Study in Adaptive Rigour: Monitoring, Evaluation and Learning for Adaptive Management.”

Case Background and Approach to MEAL

The United States Agency for International Development (USAID) implemented a 5-year project in Southeast Asia and China to lower consumer demand for wildlife products, build capacity among regional governments to enforce the law, and foster governmental buy-in to stop wildlife trafficking.

USAID’s Wildlife Asia team sought to incorporate MEAL systems even before the project was implemented, using data to understand the root problems driving wildlife trafficking. The team then tested interventions on a small scale and used the monitoring data from those tests to inform how the implementation was scaled up (Ziegler 2020, 3).

The project’s own definition of MEAL for adaptive management deserves to be quoted: “…making informed decisions by drawing from the programme’s own quantitative and qualitative data, while also continually learning from others beyond the programme, and synthesising this learning into action through systematic reflection processes with critical stakeholders” (Ziegler 2020, 3).

I think the idea of turning learning into action is key here. It made me reflect on this question: if you simply complete the monitoring and evaluation plans but don’t use the data for your own learning, share that learning with your partners, and then act on it, what insights are you letting fall through the cracks that could have allowed you to have a deeper impact?

How did USAID Wildlife Asia act on their learning?

USAID’s monitoring data were regularly brought into team meetings for discussion and included in quarterly reports to USAID. Notably, each team on the project was given its own performance monitoring data and encouraged to reflect on what it revealed about how implementation was actually going. This helped staff feel more comfortable engaging with data, gave them time to learn from it, and kept data engagement from being siloed within the M&E team (Ziegler 2020, 8).

Additionally, the monitoring reports showed significant demand for ivory among Chinese tourists. Armed with this knowledge, the USAID Wildlife Asia team connected with stakeholders in the hospitality industry to reach foreign travelers and deliver behavior change communications in support of project goals (Ziegler 2020, 8).

The project also held “pause and reflect” workshops, in which each team set aside time to consider the project from a high level. For example, teams would review the logic of their results chains, revisit previously held assumptions, and decide what implementation practices should change based on their reflections (Ziegler 2020, 10).

What does this mean for your project?

One of the biggest reasons a project fails to achieve the transformational impact it seeks is that data from the MEAL system is never effectively used by stakeholders. Either the data from monitoring and evaluation efforts is understood too late in the project cycle to be acted upon, or it arrives on time but is never engaged with.

Here are some practical ways to avoid this pitfall in your own practice:

  • Start monitoring early, even before implementation. Use baseline surveys or interviews as jumping-off points for designing the critical path and implementation plan.

  • Bring up the data from monitoring and evaluation regularly in tactical team meetings, making sure that your team is comfortable engaging with feedback from the field.

  • Schedule time for your own reflection and learning based on project feedback, and lead your team to do the same through regularly scheduled off-sites or team review days.