Monitoring is a process to periodically collect, analyse and use information to actively manage performance, maximise positive impacts and minimise the risk of adverse impacts.
It is an important part of effective management because it can provide early and ongoing information to help shape implementation in advance of evaluations.
Monitoring processes can track change and progress in different aspects: needs, the operating context, activities, and the results of activities, projects, programmes and policies.
Monitoring is best thought of as part of an integrated Monitoring and Evaluation (M&E) system that brings together various activities relating to gathering and using data.
Monitoring is typically:
Monitoring brings evaluative thinking into the periodic collection, analysis and use of information during implementation, as distinct from single discrete evaluation events or even several linked discrete evaluation events (such as a mid-term and final evaluation). Newer forms of evaluation, such as developmental evaluation and real-time evaluation, have blurred this distinction, as they involve ongoing collection, interpretation and use of evaluative data.
Monitoring systems often need to be integrated into the ongoing internal management functions of organisations. These include performance management, risk management, financial management, fundraising, and accountability reporting to donors or program participants. This integration can make monitoring a more complicated management endeavour than evaluation.
Monitoring systems often need to operate at levels beyond an individual project, such as the program, organisation, sector or country level. Monitoring systems also sometimes need to work across these boundaries, such as joint monitoring by two or more organisations or supporting partner organisations' systems, such as government systems. Working across systems, levels, and boundaries can make monitoring more complicated due to different understandings, cultures and time-frames.
Another distinction between monitoring and discrete evaluations is that monitoring uses information to manage performance actively and therefore includes deliberate and ongoing reflection to inform implementation decisions.
M&E systems need to consider and balance the information needs of different users. Therefore, it is essential to be clear on how various primary intended users will use monitoring information.
Primary intended users of monitoring information can include project participants, government department staff, project staff, senior management in implementing organisations and government departments, fundraising staff, donor organisations, politicians, and members of the public (as individuals or as part of community-based groups).
Some of the different uses of monitoring information include:
Monitoring of activities, outputs, and outcomes can be conducted at different levels and across multiple entities. For example, monitoring could focus on a single project or on a more extensive program or sector that includes numerous projects delivered by the same organisation or multiple organisations.
Similarly, conditions, progress towards goals, and contextual factors can be monitored within or across local areas, regions, or countries.
The following shows some examples of what might be monitored at different levels.
The BetterEvaluation blog Demonstrating outcomes and impact across different scales discusses how evidence of outcomes and impact can be better captured, integrated and reported on across different scales of work.
Often when people think about designing a monitoring system, they focus on choosing which indicators they will use. However, there are many other important tasks involved in the upfront framing of the monitoring system. It's important to understand:
Organisations often need to draw together information from multiple sources to help understand progress towards organisational or joint objectives or strategies. This kind of synthesis sometimes needs to be across organisations, at a sector, sub-national or country level. Thinking systemically about M&E extends beyond individual projects and activities.
Understanding the various stakeholders and their needs for monitoring information is essential in designing or refining a monitoring and evaluation system. Diving in without conscious and strategic design and proper resourcing can result in too much emphasis on certain aspects of monitoring at the expense of others.
For example, some organisations primarily see monitoring as an accountability and reporting function. This narrow focus neglects the use of monitoring information to inform management decisions and learn about how change comes about.
Another example is organisations that prioritise their own monitoring needs instead of looking for ways to use and strengthen their partners' monitoring systems. This approach overlooks the potential benefits of drawing on monitoring led by partner governments and communities, which can include:
A good monitoring and evaluation system involves integrating the monitoring function with the evaluation function. Working systematically also means that monitoring should integrate with other management functions, including making timely adjustments to implementation, strategies and plans at the various levels.
Systemic Thinking for Monitoring: Attending to Interrelationships, Perspectives, and Boundaries: This discussion note by Bob Williams and Heather Britt discusses attending to interrelationships, multiple perspectives, and boundaries – one of the three key principles underlying complexity-aware monitoring. This principle emphasises the importance of using systems concepts when monitoring, regardless of whether the monitoring method is drawn from the systems field or is a more traditional monitoring method.
Core concepts in developing monitoring and evaluation frameworks: This guide by Anne Markiewicz and Associates defines the parameters of routine monitoring and periodic evaluation that should take place over the life of an initiative or program, identifying data to be generated, aggregated and analysed on a routine basis for formative and summative evaluation processes that are used to inform organisational decision making and learning.
Linking Monitoring and Evaluation to Impact Evaluation: This guidance note from InterAction on Impact Evaluation outlines the relationship between regular monitoring and evaluation and impact evaluation, with particular reference to how M&E activities can support meaningful and valid impact evaluation.
There are many options as to who undertakes monitoring activities – sometimes, a combination of these is most appropriate.
In local monitoring, the implementing organisation monitors implementation, compliance, risks and results as part of managing implementation. Local monitoring can be participatory, with communities helping to decide what to monitor and helping to collect and analyse the information.
At a country, regional or head office level, staff from the same organisation will often monitor implementation, compliance, risks and results as part of managing a portfolio of interventions.
Staff from an intermediary organisation (for example, a UN agency or international NGO) will often monitor implementation, compliance, risks and results as part of overseeing a portfolio of interventions at a program, country, regional or global level.
Staff from the funding organisation (for example, a government, bilateral organisation or foundation) will often monitor implementation, compliance, risks and results as part of overseeing a portfolio of interventions at a program, country, regional or global level.
Regardless of who carries out the monitoring function, there are several overlapping aspects and considerations. For example, local teams may make changes in implementation based on monitoring data to improve performance. Monitoring from multiple projects may need to inform other monitoring systems. It might also need to be synthesised to give an overall picture of progress and, therefore, will need to integrate with the overarching monitoring system of the funding organisation. Monitoring may also need to meet the information needs of the funding organisation to comply with funding arrangements and potentially also meet the information needs of the government.
For example, the Australian Government might fund an organisation such as UNICEF to implement a part of a particular UNICEF program that operates in countries of high priority for the Australian Government. UNICEF may then provide grants to local organisations and technical assistance to the government of those countries to implement. The local organisation, the partner government, UNICEF and the funding organisation (the Australian Government) may all undertake monitoring activities. Within each of these organisations, the monitoring information may contribute to other monitoring systems at different levels. Often the various monitoring systems are not harmonised to integrate easily with each other. This already complicated situation is further exacerbated when the local organisation, UNICEF or the partner government receives additional funds from a different funding organisation (such as the EU), which has a different monitoring system again.
When to monitor is an important consideration in designing a fit-for-purpose monitoring system.
One of the characteristics of monitoring is that it is undertaken periodically during implementation. In this way, monitoring can provide real-time information to inform implementation. In contrast, future evaluations may have to try to reconstruct past situations or conditions. But how often and when should monitoring happen?
There is no simple answer to when the best time to monitor is. It is likely to involve balancing different time-frames for different users to ensure that information is timely enough to inform decision making.
The following are some helpful questions to consider in choosing when to monitor for which pieces of information in an M&E system (taken from Ramalingam et al., 2019):
Synchronizing monitoring with the pace of change in complexity: This USAID discussion note by Richard Hummelbrunner and Heather Britt argues for synchronizing monitoring with the pace of change as a key principle underlying complexity-aware monitoring.
Yemen: 2021 Humanitarian Response Plan Periodic Monitoring Report, January - June 2021 (Issued October 2021): This example from the UN Office for the Coordination of Humanitarian Affairs shows a six-monthly periodic monitoring report from the ongoing crisis in Yemen, documenting the changing circumstances, the activities and results undertaken, the current assessment of different governorates and funding needs.
As described above, there are a variety of purposes for monitoring information, many of which require systems of reflection.
Systems of reflection often consider successes and failures, the reasons behind them, and the specific actions or steps to be taken as a result. Reflection systems can also extend to reflecting on the overall project strategy and whether it needs revision based on new information.
Systems of reflection can include facilitated conversations, such as after-action reviews or retrospects, and strategy testing exercises. Monitoring visits (sometimes called site visits or field visits) and regular meetings of the core implementation team and potentially others to review a range of evidence can also be a part of systems of reflection.
Strategy testing: This paper by Debra Ladner describes an innovative approach to monitoring highly flexible aid programs: Strategy Testing. This approach was developed by The Asia Foundation and involves reviewing and adjusting the theory of change about every four months in light of monitoring information. It shares some examples, insights and reflections on the process.
Revised site-visit standards: A quality-assurance framework: In this journal article, Michael Quinn Patton proposes 12 quality standards for site visits.
The UNICEF guidance on field monitoring includes some practical steps for planning and using field monitoring.
After Action Review (method page): The After Action Review is a simple method for facilitating reflection on successes and failures and supporting learning. It works by bringing together a team to discuss a task, event, activity or project, in an open and honest fashion.
It can be helpful to decide whether the system being monitored is complicated or complex, as this informs the choice of monitoring approach.
Complicated systems involve multiple components, levels or organisations but are relatively stable and well-understood with the right expertise.
Complex systems involve many diverse components, which interact in adaptive and nonlinear ways that are fundamentally unpredictable. This means that there are ongoing changes in the understanding of how these systems work, how interventions might best work, and what monitoring is needed.
An intervention might be considered simple or complicated when there is a largely straightforward or well-understood relationship between the intervention and its results. Results-Based Management can be a useful approach for monitoring in these relatively stable and predictable contexts.
Results-Based Management was designed to shift the emphasis from monitoring activities to monitoring results and to use what was being learned to adjust activities to achieve better results. It generally answers three types of questions:
Contemporary monitoring systems increasingly incorporate systems thinking and complexity science to respond to situations where there is a great deal of uncertainty or where the situation is changing rapidly.
These approaches include:
USAID has developed the complexity-aware monitoring approach for monitoring programs that contain some complex aspects. Complexity-aware monitoring is appropriate for aspects of strategies, projects or activities where:
For more information on complexity-aware monitoring, see Heather Britt's Discussion Note: Complexity-aware monitoring (2013).
To some extent, all management needs to be adaptive; implementation does not simply involve enacting plans but also modifying them when circumstances or understandings change. However, 'adaptive management' goes beyond normal levels of adaptation. Adaptive management involves deliberately taking actions to learn and adapt as needed under conditions of ongoing uncertainty and complexity.
BetterEvaluation has developed a series of working papers on Monitoring and Evaluation for Adaptive Management. This working paper series explores how monitoring and evaluation can support good adaptive management of programs. While focused especially on international development, this series is relevant to wider areas of public good activity, especially in a time of global pandemic, uncertainty and an increasing need for adaptive management.
Working Paper #1 is an overview of Monitoring and Evaluation for adaptive management. Working Paper #2 explores the history, various definitions and forms of adaptive management, including Doing Development Differently (DDD), Thinking and Working Politically (TWP), Problem-Driven Iterative Adaptation (PDIA), and Collaborating, Learning and Adapting (CLA). It also explores what is needed for adaptive management to work.
For more information, see BetterEvaluation's adaptive management thematic page.
Making adaptive rigour work: Principles and practices for strengthening monitoring, evaluation and learning for adaptive management: This paper by Ben Ramalingam, Leni Wild and Anne L. Buffardi sets out three key elements of an 'adaptive rigour' approach: strengthening the quality of monitoring, evaluation and learning data and systems; ensuring appropriate investment in monitoring, evaluation and learning across the programme cycle; and strengthening capacities and incentives to ensure the effective use of evidence and learning as part of decision-making, leading ultimately to improved effectiveness. The short adaptive management Annex is an inventory that presents the three elements as a series of questions to be used by those designing, developing, implementing and improving monitoring, evaluation and learning systems for adaptive programmes.
Systems concepts in action: A practitioner's toolkit: This book, authored by Bob Williams and Richard Hummelbrunner, is focused on the practical use of systems ideas. It describes 19 commonly used systems approaches and outlines a range of tools that can be used to implement them.
Monitoring is difficult to do well. Some of the common challenges and pitfalls include the following:
Many M&E systems focus only on progress towards the achievement of the intended outcomes of specific projects. However, it is also important to make sure your data collection remains open to unintended results, including unexpected negative and positive outcomes and impacts. Wider positive outcomes beyond the project can also be important to monitor.
Many M&E systems focus on outcomes that do not reflect the true intent of the intervention. One example of this is focusing on the number of people reached by a particular program, when a more meaningful outcome might be whether the intended change has taken place.
A useful resource is The Donor Committee for Enterprise Development Standard for Results Measurement, which articulates outcomes in terms of change and includes a checklist for auditing monitoring systems.
Many M&E systems focus data collection on quantitative indicators. However, qualitative data, such as participant stories, observations or other forms of evidence, can often be more valid and useful.
A common pitfall in the reporting of monitoring activities is the presentation of quantitative results without context, which renders them meaningless. For example, to report that 456 people (50% women) were trained in a topic tells the reader nothing about the quality of the training, the results of the training, whether this was above or below performance expectations, the context in which it took place, or why it was important to do the training. Alternatively, narrative reporting can draw on quantitative and qualitative results to tell a meaningful performance story that explains the significance of the numbers in their context and the implications of these results.
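As an illustration only (hypothetical data, targets and field names, not drawn from any particular organisation's reporting template), the following Python sketch contrasts a bare count with a result record that carries enough context to support a short performance story.

```python
# Hypothetical sketch: a monitoring result that carries its own context,
# so a report can explain the significance of the numbers rather than just state them.
from dataclasses import dataclass

@dataclass
class TrainingResult:
    people_trained: int       # quantitative indicator
    percent_women: float
    target: int               # performance expectation the result is judged against (assumed)
    context: str              # why, where and when the training mattered
    quality_evidence: str     # qualitative evidence, e.g. observations or participant feedback

def performance_story(r: TrainingResult) -> str:
    """Combine the numbers with their context into a short narrative statement."""
    versus_target = "above" if r.people_trained >= r.target else "below"
    return (
        f"{r.people_trained} people ({r.percent_women:.0f}% women) were trained, "
        f"{versus_target} the target of {r.target}. {r.context} {r.quality_evidence}"
    )

# Invented figures for illustration only.
result = TrainingResult(
    people_trained=456,
    percent_women=50,
    target=400,
    context="The training targeted frontline health workers ahead of the rainy season.",
    quality_evidence="Post-training observations showed most participants applying the new protocol correctly.",
)
print(performance_story(result))
```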
Dr Alice Roughley and Dr Jess Dart have also prepared a helpful user guide for the Australian Government: Developing a performance story report. Although prepared for evaluation in the context of natural resource management, it is useful for monitoring and in other contexts. Chapter 6 is particularly helpful in guiding users in how to pull together different types of evidence to write a performance story.
Simply adding up indicators from smaller units to larger units or a whole organisation often does not produce meaningful performance information as those indicators will usually have come about in different contexts. For example, constructing 10km of road in remote Papua New Guinea cannot meaningfully be added to 100km of road construction in an urban setting in a different country.
Other forms of synthesis, and other boundaries beyond the organisation, might be more meaningful and useful. The BetterEvaluation blog on Demonstrating outcomes and impact across different scales includes different methods of synthesis.
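As a simple illustration of this point (invented figures, not real monitoring data), the sketch below shows how reporting results alongside their context preserves meaning that a single grand total would lose.

```python
# Hypothetical illustration: a naive grand total across very different contexts
# versus a simple context-aware summary. Figures are invented for illustration only.
road_results = [
    {"km_built": 10, "context": "remote highlands, Papua New Guinea"},
    {"km_built": 100, "context": "urban setting in a different country"},
]

# Naive aggregation: a single total that hides how different the two results are.
naive_total = sum(r["km_built"] for r in road_results)
print(f"Naive total: {naive_total} km of road built")

# Context-aware summary: keep each result attached to its context so synthesis stays meaningful.
for r in road_results:
    print(f"{r['km_built']} km of road built ({r['context']})")
```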
There are many ways in which data collection can cause harm to individuals, households or communities. When determining the ethical standards for a monitoring system, the risks, burdens and benefits of data collection methods need to be fully considered. It's important to ensure informed consent of participants is gained and to think through alternatives when data collection methods are not appropriate. For example, repeated and 'extractive' household surveys can cause inconvenience or discomfort to participants. Alternatives might include linking into existing M&E systems, such as census data collection, rather than running parallel activities.
While monitoring and evaluation are intrinsically linked, monitoring has historically not been as highly valued or resourced as evaluation. As a result, monitoring is not always recognised as an essential function within organisations. It is often delegated to specialised units which are divorced from management and budget decisions. Elevating the monitoring function in organisations usually needs strong leadership and culture change. Demonstrating the benefits of good monitoring practice, finding champions within the organisation to roll out good practice and advocating for monitoring, including the use of monitoring information, can be effective strategies to help bring about such change.
For further reading on some of these challenges, see:
Strategy Testing (ST) is a monitoring system developed by The Asia Foundation to track programs that are using a highly iterative, adaptive approach to address complex development problems.
Nikola Balvin, Knowledge Management Specialist at the UNICEF Office of Research – Innocenti, presents new resources on impact evaluation and discusses how they can be used to support managers who commission impact evaluations.
This initiative focuses on improving the monitoring function as part of a monitoring and evaluation (M&E) systems approach, in which monitoring and evaluation are planned, conducted and used as integrated evaluative activities.
Australian Department of Foreign Affairs and Trade. (2017). Monitoring and Evaluation Standards. Retrieved from: https://www.dfat.gov.au/about-us/publications/Pages/dfat-monitoring-and-evaluation-standards
Britt, H. (2013). Discussion note: Complexity aware monitoring. US Agency for International Development (USAID), Bureau for Policy, Planning and Learning. Retrieved from: https://usaidlearninglab.org/library/complexity-aware-monitoring-discussion-note-brief
Clark, L., & Apgar, J. M. (2019). Unpacking the Impact of International Development: Resource Guide 1. Introduction to Theory of Change. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-1-introduction-theory-change.html
Clark, L., & Apgar, J. M. (2019). Unpacking the Impact of International Development: Resource Guide 2. Seven Steps to a Theory of Change. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-2-seven-steps-theory-change.html
Clark, L., & Apgar, J.M. (2019). Unpacking the Impact of International Development: Resource Guide 4. Developing a MEL Approach. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-4-developing-mel-approach.html
Clark, L., & Small, E. (2019). Unpacking the Impact of International Development: Resource Guide 3. Introduction to Logframes. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-3-introduction-logframes.html
Dillon, N. (2019). Breaking the Mould: Alternative approaches to monitoring and evaluation. ALNAP Paper. London: ODI/ALNAP. Retrieved from: https://www.alnap.org/help-library/breaking-the-mould-alternative-approaches-to-monitoring-and-evaluation
Dillon, N., & Sundberg, A. (2019). Back to the Drawing Board: How to improve monitoring of outcomes. ALNAP Paper. London: ODI/ALNAP. Retrieved from: https://www.alnap.org/help-library/back-to-the-drawing-board-how-to-improve-monitoring-of-outcomes
The Donor Committee for Enterprise Development. (n.d.). DCED Standard for results measurement. Retrieved from: https://www.enterprise-development.org/measuring-results-the-dced-standard/
Guijt, I., & Woodhill, J. (2002). Managing for Impact in Rural Development, A Guide for Project M&E. IFAD. Retrieved from: https://www.ifad.org/documents/38714182/39723245/Section_2-3DEF.pdf/114b7daa-0949-412b-baeb-a7bd98294f1e
Hummelbrunner, R., & Britt, H. (2014). Synchronizing Monitoring with the Pace of Change in Complexity. US Agency for International Development (USAID), Bureau for Policy, Planning and Learning. Retrieved from: https://www.betterevaluation.org/en/resources/synchronizing-monitoring-pace-change-complexity
Mayne, J. (2007). Challenges and lessons in implementing results-based management. Evaluation, 13(1), 87-109. https://journals.sagepub.com/doi/abs/10.1177/1356389007073683?journalCode=evia
Mayne, J. (2004). Reporting on outcomes: Setting performance expectations and telling performance stories. Canadian Journal of Program Evaluation, 19(1), 31-60. Retrieved from: https://evaluationcanada.ca/secure/19-1-031.pdf
Patton, M. Q. (2017). Revised site-visit standards: A quality-assurance framework. In R. K. Nelson, & D. L. Roseland (Eds.), Conducting and Using Evaluative Site Visits. New Directions for Evaluation, 156, 83–102. https://onlinelibrary.wiley.com/doi/abs/10.1002/ev.20267
Peersman, G., Rogers, P., Guijt, I., Hearn, S., Pasanen, T., & Buffardi, A. (2016). 'When and how to develop an impact-oriented monitoring and evaluation system'. A Methods Lab publication. Overseas Development Institute. Retrieved from: https://www.betterevaluation.org/en/resource/discussion-paper/ML-impact-oriented-ME-system
Ramalingam, B., Wild, L., & Buffardi, A. (2019). Annex | Making adaptive rigour work: the adaptive rigour inventory – version 1.0. Overseas Development Institute. Retrieved from: https://odi.org/en/publications/making-adaptive-rigour-work-principles-and-practices-for-strengthening-mel-for-adaptive-management/
Ramalingam, B., Wild, L., & Buffardi, A. (2019). Making adaptive rigour work: Principles and practices for strengthening monitoring, evaluation and learning for adaptive management. Overseas Development Institute. Retrieved from: https://odi.org/en/publications/making-adaptive-rigour-work-principles-and-practices-for-strengthening-mel-for-adaptive-management/
Rogers, P. (2020). Real-time evaluation. Monitoring and Evaluation for Adaptive Management Working Paper Series, Number 4, September. Retrieved from: https://www.betterevaluation.org/en/resources/real-time-evaluation-working-paper-4
Rogers, P., & Macfarlan, A. (2020). An overview of monitoring and evaluation for adaptive management. Monitoring and Evaluation for Adaptive Management Working Paper Series, Number 1, September. Retrieved from: https://www.betterevaluation.org/resources/overview-monitoring-and-evaluation-adaptive-management-working-paper-1
Rogers, P., & Macfarlan, A. (2020). What is adaptive management and how does it work? Monitoring and Evaluation for Adaptive Management Working Paper Series, Number 2, September. Retrieved from: https://www.betterevaluation.org/en/resources/what-adaptive-management-and-how-does-it-work-working-paper-2
Sundberg, A. (2019). Beyond the Numbers: How qualitative approaches can improve monitoring of humanitarian action. ALNAP Paper. London: ODI/ALNAP. Retrieved from: https://www.alnap.org/help-library/beyond-the-numbers-how-qualitative-approaches-can-improve-monitoring-of-humanitarian
UNICEF. (2017). Results Based Management Handbook: Working Together for Children. Retrieved from: https://www.unicef.org/rosa/media/10356/file
Williams, B., & Britt, H. (2014). Systemic Thinking for Monitoring: Attending to Interrelationships, Perspectives, and Boundaries. US Agency for International Development (USAID), Bureau for Policy, Planning and Learning. Retrieved from: https://usaidlearninglab.org/sites/default/files/resource/files/systemic_monitoring_ipb_2014-09-25_final-ak_1.pdf
Williams, B., & Hummelbrunner, R. (2010). Systems concepts in action: A practitioner's toolkit. Stanford University Press. https://www.sup.org/books/cite/?id=18331