Capacity Building
The Monitoring, Evaluation, Learning, and Innovation (MELI) Unit aims to equip Bureau of Educational and Cultural Affairs (ECA) program teams, implementing partners, and other State Department stakeholders with the knowledge, skills, and tools they need to enhance public diplomacy programming and better serve our participants. Whether stakeholders are new to monitoring & evaluation (M&E) or experienced practitioners, the MELI Unit offers a wide range of capacity building initiatives designed to strengthen M&E skills and competencies.
We invite you to explore our initiatives and take advantage of the MELI Unit’s resources and training opportunities!
Community of Practice
The ECA MELI Unit hosts a monthly Monitoring & Evaluation Community of Practice for ECA staff, ECA implementing partners, U.S. embassy staff, and other interested stakeholders working in the public diplomacy sector. Members explore, share, and learn about best practices, challenges, lessons learned, and innovations in monitoring and evaluating ECA and public diplomacy programs. The Community of Practice meets the second Thursday of every month at 1:30 PM ET.
If you are interested in joining the ECA M&E Community of Practice, please email us at ECAevaluation@state.gov.
You can find select recordings of our Community of Practice below:
- ECA M&E Community of Practice – September 12, 2024
- ECA M&E Community of Practice – August 1, 2024
- ECA M&E Community of Practice – April 18, 2024
- ECA M&E Community of Practice – April 11, 2024
Monitoring & Evaluation Online Training Program
Welcome to the Monitoring, Evaluation, Learning, and Innovation (MELI) Unit’s monitoring & evaluation (M&E) online training program! This comprehensive training is designed to introduce participants to the essential principles, methodologies, and tools of M&E.
Level: Beginner/Intermediate
Prerequisites: No prior knowledge of M&E is required for these courses
Audience: Public diplomacy practitioners seeking to build foundational knowledge in M&E
Through this training, participants will:
- Develop a solid understanding of the key concepts, principles, and terminology of M&E.
- Acquire knowledge of different M&E tools and approaches.
- Learn practical techniques for implementing M&E.
- Understand the role of M&E in supporting evidence-informed decision making and program improvement.
The training program comprises a series of self-paced "mini courses," which can be taken either in the structured order listed below or ad hoc, depending on your learning needs. Each mini course includes an optional, anonymous assessment that lets you put your new knowledge into action, reinforcing your understanding of the topics covered. Each assessment consists of 10 questions and is scored, with 70% or better considered passing. You may take each assessment multiple times.
Concepts of Monitoring & Evaluation (and Learning!)
- Module 1: Performance Monitoring (Video)
- Module 2: Evaluation (Video)
- Module 3: Learning (Video)
- Module 4: Why do we care? (Video)
Strategic Planning
- Module 1: Theories of Change (Video)
- Module 2: Logic Models (Video)
- Module 3: Objectives & Indicators (Video)
- Module 4: Performance Monitoring Plans (Video)
Data Collection
Data Analysis & Reporting
- Module 1: Data Cleaning (Video)
- Module 2: Quantitative Data Analysis (Video)
- Module 3: Qualitative Data Coding (Video)
- Module 4: Data Visualization & Reporting (Video)
Program Learning
- Module 1: Internal Decision Making (Video)
- Module 2: Learning Agendas (Video)
- Module 3: External Sharing (Video)
Webinar Recordings
- Introduction to M&E (January 2020) (Video Webinar)
- Program Design and Logic Models (February 2020) (Video Webinar)
- Theories of Change and Logic Models (March 2021) (Video Webinar)
- Theories of Change and Logic Models – Part 2 (August 2023) (Video Webinar)
- Creating and Evaluating Surveys (May 2020) (Video Webinar)
- Creating and Evaluating Surveys – Part 2 (August 2021) (Video Webinar)
- Survey Data Cleaning and Analysis (June 2020) (Video Webinar)
- Qualitative Data Analysis (July 2020) (Video Webinar)
- Data Visualization (August 2020) (Video Webinar)
- Data Visualization – Part 2 (September 2021) (Video Webinar)
- Success Case Method (September 2020) (Video Webinar)
- Needs Assessments (October 2020) (Video Webinar)
- Journey Mapping (July 2021) (Video Webinar)
- Ethics and Informed Consent (October 2021) (Video Webinar)
- Facilitation (June 2022) (Video Webinar)
- Using Data for Learning (August 2022) (Video Webinar)
Other Resources
Monitoring and Evaluation Key Terms
Glossary of key monitoring & evaluation terminology. Developed by: ECA MELI Unit
Evaluation Flash Cards
A series of reference “cards” that summarize core evaluation concepts. Developed by: Otto Bremer Foundation
What are SMART Indicators?
A brief overview of how various organizations define SMART (e.g., Specific, Measurable, Achievable, Relevant, Time-bound) indicators. Developed by: ECA MELI Unit (drawing on multiple resources)
Creating a Results-Based M&E System
The focus of the handbook is a comprehensive ten-step model that will help guide you through the process of designing and building a results-based M&E system. These steps will begin with a “Readiness Assessment” and will take you through the design, management, and, importantly, the sustainability of your M&E system. The handbook will describe these steps in detail, the tasks needed to complete them, and the tools available to help you along the way (Preface, xii). Developed by: The World Bank
M&E of Virtual Exchange Programs
This report examines the extent to which ECA can achieve its programmatic outcomes through virtual exchanges (VE), and whether ECA needs new monitoring and evaluation (M&E) approaches to capture those effects. A literature review revealed that the primary outcomes of VE programs mirror those of traditional exchange programs, with emphasis on building cross-cultural competencies, language capacities, and personal/professional networks. These effects are likely less profound than those of in-person exchanges, but the existing literature provides no empirical basis for such comparisons. Information technology and logistics can challenge VE programs; at the same time, the programs add value in areas like building digital competencies. The report also proposes new optional indicators to help tease out the unique outcomes of VE programs. Developed by: ECA MELI Unit
Creating Surveys - Tips and Best Practices
This manual was created to help stakeholders design surveys that collect the information needed from program participants and other stakeholders to improve programming and demonstrate results. Well-designed surveys yield valid, reliable data, which helps highlight successes and determine if and where changes are needed. Developed by: ECA MELI Unit
Data Cleaning Guide
This guide serves as an introduction to survey data cleaning and analysis for individuals who are new to working with data or need a refresher. It provides tips and guidance on how to conduct basic cleaning and analysis of survey data. Developed by: ECA MELI Unit
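To give a sense of the kind of basic cleaning the guide covers, below is a minimal sketch in Python using pandas. The file name, column names, and recoding scheme are hypothetical illustrations, not taken from the guide itself.

```python
import pandas as pd

# Hypothetical export of post-program survey responses.
df = pd.read_csv("survey_responses.csv")

# Standardize column names: trim whitespace, lowercase, underscores.
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# Drop exact duplicate submissions (e.g., a double-clicked "Submit").
df = df.drop_duplicates()

# Recode a Likert-scale item from text to numbers for analysis.
likert = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}
df["skills_improved_score"] = df["skills_improved"].map(likert)

# Flag answers the recoding missed (typos, unexpected values) for manual review.
unmatched = df[df["skills_improved_score"].isna() & df["skills_improved"].notna()]
print(f"{len(unmatched)} responses need manual review")
```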
Survey Response Rates Paper
Nonresponse bias is defined as a correlation between one or more survey variables and the propensity to respond (Wolf 2016). It arises when those who respond to a survey differ in important ways from those who do not, so that results from respondents alone present a distorted picture of what one is trying to measure. Nonresponse bias is a consideration in all survey distribution methods, whether the survey is sent to the entire population in question (a census) or to a random sample. This paper examines common approaches for increasing response rates in order to combat nonresponse bias, including altering the survey invitation, opting for an alternative administration method, and offering incentives. Developed by: ECA MELI Unit
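To illustrate the mechanism the paper describes, here is a small simulation in Python: when a survey variable (here, satisfaction) is correlated with the propensity to respond, the respondent mean diverges from the true population mean. All numbers are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of 10,000 program alumni with satisfaction
# scores on a 1-5 scale.
satisfaction = rng.normal(loc=3.5, scale=0.8, size=10_000).clip(1, 5)

# Nonresponse bias mechanism: more-satisfied alumni are more likely to
# answer the survey (propensity is correlated with the survey variable).
propensity = 0.1 + 0.15 * (satisfaction - 1)  # ranges from ~0.1 to ~0.7
responded = rng.random(10_000) < propensity

print(f"Response rate:        {responded.mean():.1%}")
print(f"True population mean: {satisfaction.mean():.2f}")
print(f"Respondent mean:      {satisfaction[responded].mean():.2f}")
# The respondent mean comes out noticeably higher than the true mean:
# exactly the distorted picture that nonresponse bias produces.
```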