MEL Approach, Principles and Standards


This document outlines the approach, principles and operational standards that guide CARE’s Monitoring, Evaluation and Learning (MEL) practices for projects and initiatives implemented around the world, which can also be applied to the work CARE does with and/or through partners. It combines key elements from longstanding policy documents such as the CI Program Principles and Program Standards, the CARE Evaluation Policy, and others, together with elements from the broader MEL debate in development and humanitarian work. As a result, it provides an updated view of how CARE defines and operationalizes MEL.


THE FOUNDATIONS OF CARE’S MEL APPROACH (October 2017)


The foundation of CARE’s approach to Monitoring, Evaluation and Learning is the recognition that we work in very dynamic and complex contexts, where lasting social change does not follow a linear timeline or a single pathway, where multiple stakeholders interact and influence each other as well as our interventions, and where there are constant adjustments in social, economic, structural, environmental or other dimensions that we must be critically aware of and adapt to (figure 1).
Figure 1: Lasting Change as defined by CARE
Under these circumstances, our organizational ability to demonstrate the impact of our work and explain how we contribute to lasting change lies in the ability of CARE’s projects and initiatives to put dynamic MEL systems and practices in place. That is, MEL systems that continuously generate comprehensive explanations and evidence on the way we think about a situation or problem and its underlying causes; on a process of desired social change[1]; on how CARE’s interventions contribute to that change; and on how other factors and critical preconditions take place in society in order for that change to happen. In summary, MEL systems and practices become critical to “unpack” the WHO, WHAT, HOW and WHY of social change:
  • WHO are the specific populations (women, girls, men and boys) ultimately experiencing lasting change, and who are the other actors facilitating that change?
  • WHAT changes are those populations experiencing?
  • HOW and WHY are those changes happening and what role does CARE and other actors play in facilitating those changes?

Applying this approach also implies that CARE’s MEL systems and practices put special emphasis on explaining social change and impact as a combination of our actions plus the influence of other critical factors that make a change process possible (contribution); only when appropriate do our MEL systems focus on explaining social change as fully attributed to CARE’s actions (attribution). Although explaining attribution can often be considered a more robust way to show evidence of impact influenced by a set of interventions, we strongly believe that CARE’s contribution to social change is influenced and enriched by multiple actors and contributing factors. Therefore, our potential to explain complex change and multiply impact is enriched when we are able to complement a rigorous causal analysis with an explanation of the many elements influencing change, and of the role different actors have in facilitating that change.
Important Note: CARE’s MEL approach includes the following definitions:
  • Long-term or ultimate outcomes - Impact: includes sustainable, significant and measurable changes in well-being, materialized in lasting changes in the poverty and social injustice conditions of a particular population. Changes at the impact level are influenced by the factors directly addressed by a project or initiative, as well as by other factors.
  • Immediate and intermediate outcomes - Outcome: includes changes in individual behaviours (e.g. individuals putting into practice new knowledge, attitudes or commitments) and changes that are structural or systemic (e.g. policy changes, new practices in service provision), which can be seen in different populations. Outcomes are often a result of what participants do on their own, influenced by the actions of a project or initiative or by other factors.
  • Output: includes the direct results of activities implemented by a project or initiative. Outputs may refer to: a) the results of training, such as the number of women trained in improved nutrition, farmers in improved agricultural techniques, etc.; b) capacity building, such as the number of extension staff trained, water systems built, committees established, etc.; c) service outputs, such as an increase in the number of program locations; d) service utilization, such as the number of people fed or the number of patients treated. Outputs are the products a project or initiative generates through the implementation of its activities.
  • Inputs: includes the set of resources needed by a project or initiative in order to deliver its commitments. These include the human and financial resources, physical facilities, equipment, materials, logistics, in-kind contributions and operational policies that enable services to be delivered.

In CARE, we acknowledge that different actors use different definitions; nevertheless, this does not affect the way the MEL approach, principles and standards are defined and can be applied.

[1] Social change understood as overcoming poverty, enjoying equitable opportunities for women and men, being part of inclusive development processes and being able to continuously transform in response to new hazards and opportunities.



MEL PRINCIPLES (October 2017)


The fundamental propositions behind our Monitoring, Evaluation and Learning systems and practices in CARE include:
MEL systems and practices should be conducive to Accountability, by generating solid and accessible evidence that clearly and transparently explains CARE’s reach (what we do, where we work and the people we reach) and tells CARE’s impact story (our contribution to impact and outcomes); and by deliberately engaging and involving multiple actors and incorporating the perspectives of women, men, girls and boys in decisions and actions throughout the life of a project or initiative. This includes accountability to participants, donors and other stakeholders.
MEL systems and practices should be conducive to Learning and potentially to Multiplying Impact, by generating and documenting evidence that strengthens the organizational memory and expertise, plus energizes learning dialogues and the identification of successful models and/or opportunities for scale-up.
MEL systems and practices should be conducive to Adaptation, by tracking, interpreting and summarizing key data related to changes in social, economic, structural, environmental or other dimensions that a project or initiative should be critically aware of and constantly adapt to.
MEL systems and practices should always balance purpose, methodological rigor and capacity, by identifying the most appropriate combination of methods to address: purpose (contribution / attribution), evidence needs and uses, resources, capacity, technology requirements and other factors[1].
MEL systems and practices should always consider ethical implications and be conducive to gender equality, by ensuring honesty, consent and integrity of all MEL practices and MEL methods selected; always respecting the security and dignity of the stakeholders with whom CARE works; incorporating gender and power elements when monitoring and evaluating; generating evidence disaggregated by sex, age and other relevant diversity, etc.

MEL systems and practices should be dynamic and lead to action, by intentionally planning and executing various iterations of monitoring, evaluation, accountability and learning moments throughout the cycle of a project or initiative, and have a formal connection with informed decisions and actions.
MEL systems and practices in projects and initiatives should contribute to CARE’s global evidencing efforts, by generating evidence and engaging on reflection around CARE’s collective reach and impact story, globally.
[1] In the current MEL debate, we may find an inclination towards assessing attribution, sometimes without adequate resources or without considering the sensitivity of applying certain methods in certain contexts. Being clear on methodological appropriateness helps solve challenges around this. For example, if an intervention seeks to generate evidence of impact and also to validate a model or innovation, the selection of evaluation methods will be influenced by CARE’s global program priorities, the nature of the intervention, the rigor required, donor requirements, the uses of the evidence, the capacities in place, the resources available, etc. Normally, this leads to a combination of quantitative and qualitative methods.


MEL STANDARDS (October 2017)


From a practical perspective, the definition of a MEL system for projects or initiatives in CARE should consider the following:

Design your MEL system based on a clear theory of change and evidence needs.
Have a clear definition of participants: direct/indirect participants and target/impact groups.
Define a meaningful and manageable set of quantitative and qualitative indicators and/or questions for impact, outcomes and outputs in each participant group, and the methods to track them.
Define the monitoring and evaluation moments and methods that best ensure robust and comparable tracking of outputs, outcomes and impact.
Ensure your evidence can be translated into learning and support the identification of potential for scale.
Make your evidence accessible, and ensure your MEL practices are participative and responsive to feedback.
Use your MEL system to continuously read the context and adapt to it.

Please see below for practical references and guidance for applying the standards:

What the standard is about...
Practical references and guidance for applying the standard
Design your MEL system based on a clear theory of change and evidence needs.

Projects and initiatives are normally designed based on a holistic analysis of context and stakeholders, plus a theory of change or a similar comprehensive explanation of the desired changes, the different pathways to the desired change, and causality. The core of your MEL system should be designed to continuously test the theory of change of the project or initiative, so that it is able to answer questions like the following:
  • What are the key outputs and activities the MEL system will track in order to inform whether the implementation of activities is on the right track and reaching the expected participants (direct and indirect participants)?
  • What are the key qualitative and quantitative changes (impact and outcomes) the MEL system will track in order to inform whether CARE is contributing to significant and lasting changes? Which pathways of change and causality relationships will we track? Who are the actors we will focus on when tracking those pathways (impact and target populations)?
  • What are the key risks and assumptions the MEL system will track and review during implementation in order to ensure the project or initiative is responsive to the context? How will unintended consequences or emerging changes be part of the continuous testing of the theory of change?
  • What are the gender, governance and resilience considerations the MEL system will track?
For guidance on how to design a theory of change, please see CARE's Guidelines for Designing and Managing Long-Term Programs (page 26).


Have a clear definition of participants: direct/indirect participants and target/impact groups.

Participants reached and impacted through CARE projects or initiatives include all individuals directly or indirectly affected by the problem the project or initiative seeks to address, benefiting from the changes the project or initiative is contributing to, and/or influenced by the strategies CARE uses to facilitate change. Tracking participants in a project or initiative, and generating evidence on the changes participants experience, requires clear definitions and representations of who these individuals are. Participants, in CARE, are categorized based on the following criteria:
Criteria 1 - REACH: The way an individual is directly or indirectly involved in activities and benefits of the project or initiative’s interventions: direct and indirect participants

Figure: Direct and indirect participants in projects
Figure: Direct and indirect participants in advocacy
Criteria 2 - IMPACT: The way an individual either experiences significant and lasting change facilitated by the project or initiative, or facilitates change for others: impact population/target population.
Figure: Impact and target groups


The two ways of defining participants respond to two different ways of explaining who we work with; therefore, it is not recommended to draw direct equivalences (e.g. direct participants are not necessarily equivalent to impact groups). In CARE, we use both, as they serve two different purposes when explaining our work:

  • Direct/indirect participants are used for reporting on the REACH of CARE’s work; this helps us determine the people directly and indirectly involved in CARE activities, whether or not they receive services/goods/resources from CARE or through a partner.
  • Impact groups/target groups are used when looking at the IMPACT of CARE’s work; this helps us distinguish the people experiencing change/impact (impact groups) from the people who facilitate and/or influence those changes (target groups).

Define a meaningful and manageable set of quantitative and qualitative indicators and/or questions for impact, outcomes and outputs in each participant group, and the methods to track them.

Make sure to incorporate at least one of CARE’s Global Impact and Outcome Indicators, together with any other supplementary indicators that are relevant or required (e.g. by the donor). Recommendation: try to avoid creating new indicators or indexes.
Quantitative indicators or questions will regularly help you demonstrate WHO the specific populations experiencing change are (e.g. women of reproductive age; policy makers) and WHAT changes they are experiencing (e.g. increased safe births; improvements in policy that guarantee women’s access to quality SRMH services). Qualitative indicators or questions will regularly help you demonstrate HOW and WHY those changes are happening, and what role CARE and other actors play in contributing to those changes (e.g. is the increase in safe births explained by CARE’s strategy and work with health centers and communities, or is it the product of changes in how decision makers recognize the importance of women accessing quality SRMH services and take action on their own?).
Based on the indicators and questions you select for tracking impact, outcomes and outputs, define the most appropriate combination of methodological approaches to track them.

Define the monitoring and evaluation moments and methods that best ensure robust and comparable tracking of outputs, outcomes and impact.

4.1 Monitoring outputs and participants

Define the moments, tools and resources used throughout the life of the project or initiative to track outputs from all the key activities being implemented (e.g. health staff from health services participating in training). While collecting and analyzing data at this level, the MEL system won’t generate explanations related to impact or outcomes, but it will regularly ask whether all the activities and outputs are the most appropriate and whether they are really laying the groundwork for the expected outcomes and impacts.

Important considerations when monitoring participants (direct/indirect or impact/target groups):
  • Participants are always individuals. Even if our projects or initiatives work with households, communities or institutions, these are always composed of individuals, who should therefore ultimately be monitored as individuals.
  • One individual can be reached by one or more projects or initiatives in a particular context. Monitoring actions should be aware of duplications with other projects or initiatives, and establish mechanisms to report data without double counting.
  • Participants’ data should normally be disaggregated by sex, age and potentially by disability or any key criteria related to the problem or vulnerability the project or initiative seeks to address. Estimations based on statistical references (e.g. a census) are not always the most accurate measure. If the disaggregation is made using estimates, the source of the ratio must be explained.
  • In projects or initiatives implemented over multiple years, the total participants in a particular year should be cumulative and single counted (existing and new participants). Even though it is important to know the incremental progress, participants’ information is not normally aggregated year by year.
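The single-counting and disaggregation rules above can be sketched in a few lines of code. This is a minimal illustration, not a CARE data standard: the record fields (participant_id, sex, year) and the function name are assumptions chosen for the example.

```python
# Hypothetical sketch: cumulative, single-counted participant totals,
# disaggregated by sex. Field names are illustrative assumptions.
from collections import Counter

def cumulative_participants(records, up_to_year):
    """Count unique participants reached up to and including a given year.
    A participant appearing in several years or several activities is
    counted once (no double counting)."""
    seen = {}
    for r in records:
        if r["year"] <= up_to_year:
            seen[r["participant_id"]] = r["sex"]  # one entry per individual
    return Counter(seen.values())

records = [
    {"participant_id": "P1", "sex": "F", "year": 2016},
    {"participant_id": "P1", "sex": "F", "year": 2017},  # same person, new year
    {"participant_id": "P2", "sex": "M", "year": 2017},
    {"participant_id": "P3", "sex": "F", "year": 2017},
]

print(cumulative_participants(records, 2017))  # Counter({'F': 2, 'M': 1})
```

Note that P1 contributes once to the 2017 cumulative total even though she was reached in both 2016 and 2017; this is the single-counting behavior the standard asks for.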

4.2 Monitoring of Outcomes

Define the moments, tools and resources used throughout the life of the project or initiative to track key behavioral changes in actors, or strategic elements that set the causal linkage between outputs, outcomes and impact. Outcome monitoring helps generate indicative information (qualitative and quantitative) on what’s changing and what’s not / what’s working and what’s not, as the project or initiative advances towards the expected outcomes. For example, what happens after health staff participate in training? Do their behaviors change? How do changes in behavior favor women’s access to SRMH services? Outcome monitoring can be a continuous action (e.g. performing participant observation or doing informal interviews constantly) or a periodic action (e.g. applying an annual questionnaire or survey). In all cases, outcome monitoring may or may not have the same level of representativeness as an evaluation; nevertheless, it provides important indications of progress and learning around the way the project is progressing towards its contribution to change, the appropriateness of the strategies used and the validity of the assumptions in the theory of change.

Important considerations when operationalizing outcome monitoring actions:
  • Are the volume of data, the frequency with which data is collected and the timing of monitoring actions the most useful for the project or initiative?
  • Do the monitoring actions (collecting, reporting or analyzing data) consider the availability of time and the predisposition of project staff and project participants?
  • Will all the data generated by monitoring actions be used and disseminated, and will it inform decisions on the implementation or the theory of change of the project or initiative? Note: if that is not the case, you may be collecting more data than you actually need.

4.3 Evaluation

Define the moments, tools and resources used throughout the life of the project or initiative to objectively assess its relevance and fulfillment of objectives, its efficiency, effectiveness, impact and sustainability, and/or its worth or significance (based on the OECD/DAC definitions). Evaluations in CARE projects and initiatives can be carried out for different purposes and take a variety of forms (see descriptions below). Nonetheless, all evaluations need to provide substantiated evidence of the changes that took place as a result of a project or initiative’s actions, and a plausible explanation of how CARE’s actions contributed to the materialization of those changes.
  • Formative evaluations: carried out during implementation of a project or initiative, intended to improve the project’s performance, informing necessary adjustments to project design, planning, resources, approaches and methodologies, and capturing lessons and promising practices that inform decision-making (e.g. real-time/mid-term evaluations of any project or initiative).
  • Summative or end-line evaluations: often carried out at the end of a project, intended to assess the extent to which expected outcomes have materialized, and to assess their significance or relevance (end-line evaluations).
  • Impact evaluations: carried out either during or after the implementation of a project or initiative, intended to demonstrate impact attributable, in a cause-and-effect manner, to an intervention. In impact evaluations, the focus shifts away from what CARE is doing to observing and tracking the changes that take place in the lives of the impact groups, and how these changes come about. Impact evaluation normally goes a step further than any other type of evaluation: it implies a deeper look at the participants and the changes they experience, plus collaborating with others in order to explain how those changes were facilitated by the project or initiative. As a result, it directs all its attention to testing the theory of change behind the project or initiative and demonstrating how CARE contributes to it.


Important considerations when operationalizing evaluations:
  • Evaluations should provide complete and comparable assessments of the before-after or with-without situation.
  • Evaluations should assess desired as well as unexpected outcomes.
  • Evaluations can be conducted or supported by qualified professionals who establish and maintain credibility in the evaluation context. However, CARE staff should be highly involved in the whole evaluative process from the very beginning, not only to guarantee ownership of the process but also to open opportunities to strengthen MEL capacities and to learn.
  • Evaluation results need to be processed and reported in multiple ways, addressing different stakeholder needs and purposes. Evaluation results should be accessible for learning and for encouraging the project and participants to rediscover, reinterpret, or revise their understandings, plans and behaviors.
For guidance on hiring an external evaluator:

Template to develop a ToR for an external evaluation (with examples):

Examples of identifying and addressing double counting

Evaluation Report Template

Ensure your evidence can be translated into learning and support the identification of potential for scale.

Monitoring and evaluation actions normally generate a great amount of data and evidence, and can therefore naturally contribute to a structured body of information and knowledge inside and outside the organization. However, data and evidence are only useful for learning and for multiplying impact when they are adequately organized, processed, analyzed, discussed and shared.
Important considerations to link monitoring and evaluation with learning:
  • Define a learning agenda from the very beginning of the project or initiative, around the following questions:
    • What is it that we want to learn from the implementation of this project or initiative?
    • Will the data or evidence to be captured by the MEL system support learning in general or advance critical learning on a particular issue?
    • Will it potentially generate evidence for multiplying impact?
    • Is the monitoring and evaluation data sufficient and relevant enough for that learning or will we need additional research in a particular area?
Note that we can’t learn every single aspect of our work. Prioritization in a learning agenda is critical.
  • Make sure your monitoring and evaluation data and evidence are well organized and hosted in a safe and accessible system or platform.
  • Open specific moments in the life of the project or initiative, to share and discuss findings in ways that are understandable and useful to various stakeholders - participants and partners, staff of various units within the CARE consortium, as well as donors.
  • Whenever possible, include external actors on monitoring or evaluation teams (e.g. project staff, representatives of other CARE projects or partner agencies, etc.).


Make your evidence accessible, and ensure your MEL practices are participative and responsive to feedback.

CARE’s commitment to accountability implies that projects and initiatives promote transparency in their actions, information and decisions, encourage participation from different stakeholders to shape their work, and deliberately open channels for feedback and take action based on feedback.
Important considerations when linking monitoring and evaluation with accountability:
  • Ensure your MEL actions balance the moments for data/evidence collection with moments for actors to provide feedback to CARE, and make sure to connect this feedback to the appropriate instances, so that feedback is always followed by action.
  • Define how and at which moments MEL staff, program managers and other CARE and non-CARE actors will engage and collaborate in all the different steps of generating and using data, analyzing and responding to feedback, and making decisions for adaptive management.
  • For humanitarian projects, ensure the MEL system embeds a complaint and feedback mechanism that is comprehensive and on par with the Core Humanitarian Standard.
  • Make sure the targeting strategy of the project or initiative and the definition of participants promote equity and address the needs of the most vulnerable groups.
  • Make sure key information generated by your MEL system is accurately reported and available to the whole organization. This can be done via the PIIRS system or the Reach and Impact Map.
  • Make sure your evaluations are well documented and publicly available in CARE’s Electronic Evaluation Library: http://www.careevaluations.org.

Use your MEL system to continuously read the context and adapt to it.

Adaptive approaches are increasingly and undeniably relevant for addressing complexity in the contexts in which we implement projects and initiatives. Our capacity to adapt covers many other areas of organizational culture, structures, processes and capacities that go beyond MEL alone. However, MEL systems can be highly instrumental for adaptation.

Important elements to consider when linking MEL to adaptive management:
  • Your MEL practices need to be agile and have the capacity to collect data, generate evidence, identify changes and generate recommendations more frequently.
  • The MEL system should include regular review points when monitoring and feedback data is assessed against the theory of change, so that adaptation can occur accordingly.
  • Your MEL system should dedicate considerable effort to rapid learning and very agile feedback, in order to inform changes.
  • Your MEL system needs to be flexible, adjusting indicators, methods, tools and resources based on potential changes of the overall design of the project or the initiative.
  • Your MEL system should be clearly linked with decision-making instances, in order to make sure that data and evidence signaling a need for adjustments are acted upon.