Our services
Build and run systems to support strategy, optimization, learning processes, and impact measurement
Provide technical advisory services to scope new systems or enhance existing systems
Our approach
While experienced in traditional MEL approaches, we believe that measurement, learning, and evaluation systems deliver the most value as a multi-disciplinary practice combining evaluative methods, design thinking, and computer science. We bring this into our work through the following practices:
- Value a robust theory of change to reflect on strategy, develop measurement systems, and communicate the intended impact
- Build flexible systems that accommodate program adaptation while minimizing disruption to measurement
- Commit to evaluative rigor while understanding real-world constraints, selecting the best-fit method based on available evidence, timelines, data, and resources
- Strive for lean data collection, measure what matters, and value collaboration time
- Prioritize measurement processes that generate insights when they are needed
- Optimize digital data and tools when they support a cost-effective, greener, and rapid feedback cycle
- Invest in compelling visuals and interfaces to capture attention and enhance engagement with insights
- Use multi-disciplinary teams to draw on multiple perspectives and enrich our measurement approach
Our portfolio
Measurement, learning, and evaluation system design and implementation
Flexible system design to enhance evidence-based strategy and program or product optimization.
Our approach enables us to design and run impactful measurement, learning, and evaluation systems that can accommodate everything from early-stage pilots to large (50+ grantee) multi-country programs, using lean, focused teams.
Our support spans theory of change development to designing impact assessments—using both experimental and non-experimental methods. Consistent across our portfolio is tailored, right-time advisory to partners and grantees to ensure insights are gathered and used.
To enhance learning, we have supported grantees to align with learning agendas and create early feedback loops for product/service validation through methods including A/B tests, dipstick surveys, and digital focus groups.
Below are a few recent examples of end-to-end measurement, learning, and evaluation support for large and dynamic programs.
- Strive Community: 4-year program with the Mastercard Center for Inclusive Growth to support small business growth through 4 levers—digital tools/data, financial services, markets, and upskilling. 50+ grantees/partners are supported on measurement, learning, and evaluation in 22 countries. Features of the system include a real-time dashboard; embedded evaluation; A/B testing; innovation fund impact synthesis; cost per reach; interactive evidence map; content library for strategy; and digital grantee reporting systems.
- European Space Agency Global Development program: 5-year program to mainstream earth observation data in development operations over 8 thematic areas. 40 grantees, covering 55 countries, are supported to gather data and insights to enhance their product offerings. Features of the system include flexible design to support agile product development; a no-code dashboard; Qualitative Comparative Analysis; and briefs on emerging opportunities.
- Next Generation Financial Services (NGFS) Learning Partnership: As the learning partner for Mastercard Foundation’s NGFS, we provided a menu of measurement, learning, and evaluation services to the portfolio—which included UNCDF, CGAP, Accion, GSMA, and Bankable Frontiers—and supported synthesis and amplification of the portfolio’s insights through multi-media products and events.
- UK Space Agency International Partnership Program (IPP): We managed measurement, learning, and evaluation for the 5-year, £152M program funding 180 grantees to develop products using space technology across 44 countries. Features of the system included cost-effectiveness analysis of portfolio products; mixed-methods evaluations; thematic reports on opportunities; and a digital library on space applications. An external evaluation found this work to be effective.
Evidence synthesis
Supporting evidence-based strategy, thought leadership, and responsible research through evidence maps and narrative evidence synthesis.
We use an array of visual and narrative-based methodologies for evidence synthesis tailored to use cases.
Recent examples:
- Evidence maps: These chart the landscape of impact evidence for a set of interventions plotted against outcomes. The interactive design lets users scan for evidence on different levers. These have been generated for digital financial services and digital small businesses.
- Living insights pages: Regularly updated pages that amplify portfolio learnings across the program’s levers.
- Learning snapshots: 16 snapshots capturing current insights on the client’s learning agenda, each highlighting “notable new learning” and calling attention to implications for research and investment.
Data and knowledge management
Curating and managing portfolio and ecosystem data to optimize program performance and support ecosystem influencing.
Using dashboards, libraries, and reporting systems, we curate, manage, and use program and ecosystem data and insights to inform new programs, adjust strategy, optimize products, and support influencing efforts.
Recent examples:
- Internal content library: An Airtable of 700+ publications on program levers. Content is tagged to support detailed search and filtering, is updated weekly, and is linked to our evidence synthesis processes.
- Online knowledge hub: An online knowledge hub to host portfolio and ecosystem learnings.
- Dashboards: We have built dashboards using Tableau, Power BI, Metabase, Grow, Looker Studio, and custom Python web applications. We have also built direct data integrations with partner systems such that the dashboard can show dynamically updated data for some indicators.
- Reporting: We design reports with a clear use case, and we optimize for digital systems.
Impact measurement
Providing clients with right-time insights to support evidence-based decisions
We use various methodologies and encourage a best-fit approach based on available evidence, timelines, data, and resources. We adhere to the highest technical standards for the methods used and are experienced in process, impact, and economic evaluations.
Recent examples:
- External evaluation of the 26M Ethics and Governance of AI Initiative, implemented by MIT and Harvard and supported by Luminate, Reid Hoffman, the Knight Foundation, and the Hewlett Foundation. A mixed-methods approach including retrospective theory of change building and outcome harvesting.
- Internal evaluations within the Strive Community: Embedded/developmental evaluation deployed in modules as data needs arose on specific topics. The embedded modular approach is leaner, more focused, and less resource-intensive, and it shortens the time from data collection to insights.
- Internal evaluations within ESA GDA: Mixed-methods evaluations throughout the 4-year program, foregrounding Qualitative Comparative Analysis.
- Internal evaluations within the UK Space Agency IPP: Mixed-methods evaluations including synthesis of 40+ projects’ data and evaluations, cost-effectiveness analysis of products/services, and economic return to the UK.
Measurement, learning, and evaluation advisory
We are experts at providing MLE Technical Assistance (TA) and translating requirements across diverse organizations.
Our work has supported governments, foundations, NGOs, research institutions, academia, social enterprises, and commercial companies. Our approach to TA has included:
- System discovery
- Tailored recommendations for integrating minimal measurement requirements
- Value proposition memos to support the business case for data gathering and analytics
- Development of guidance notes
- Onboarding and mentoring sessions