How are organizations approaching AI responsibly? How do development organizations evaluate the impact of AI?
This project, supported by Canada’s International Development Research Centre (IDRC) and the Foreign, Commonwealth & Development Office of the United Kingdom (FCDO), documents promising use cases of responsible artificial intelligence (AI) in international development. It draws on interviews with leaders across eight organizations working in health, education, agriculture, and human rights. Explore the chapter overviews below.
1) askNivi: A retrieval-based chatbot improving access to local healthcare
Digital health company Nivi’s chatbot, askNivi, provides relevant healthcare information to users and connects them with nearby trusted healthcare providers for continued care. askNivi is a retrieval-based bot that uses the interpretive and categorization power of large language models to serve users tailored content. Cautious about using generative AI for healthcare, the Nivi team has applied it in a way that does not expose users directly to generated content, or to the risks that come with it. They have deployed this feature responsibly, keeping humans in the loop and proceeding slowly.
Responsible AI: Do no harm. Pursue fairness for all. Keep humans in the loop. Go slowly.
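To make the retrieval-based pattern concrete, the sketch below shows the general idea under stated assumptions: a classifier (a large language model in production) maps each message to a known topic, and every reply is drawn from a library of pre-approved content, so no generated text reaches the user. The topics, keywords, and function names are hypothetical, not Nivi’s implementation.

```python
# Minimal sketch of a retrieval-based health chatbot. A classifier (an LLM
# in a real system; a keyword stub here) maps each message to a topic, and
# replies come only from pre-vetted content, never from generated text.

VETTED_CONTENT = {
    "family_planning": "Vetted guidance on family planning options...",
    "maternal_health": "Vetted guidance on antenatal care and danger signs...",
}

def classify_topic(message: str) -> str | None:
    """Stand-in for an LLM classifier: map a message to a known topic."""
    keywords = {"contraception": "family_planning", "pregnant": "maternal_health"}
    text = message.lower()
    for word, topic in keywords.items():
        if word in text:
            return topic
    return None  # unrecognized intent

def respond(message: str) -> str:
    topic = classify_topic(message)
    if topic is None:
        # Keep humans in the loop: unknown intents go to a human reviewer.
        return "Thanks for your message! A member of our team will follow up."
    return VETTED_CONTENT[topic]

print(respond("I think I might be pregnant, what should I do?"))
```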
2) HURIDOCS: An AI-optimized database lowering barriers to information for human rights defenders
Human rights partners Human Rights Information and Documentation Systems (HURIDOCS) and UPR Info developed a supervised machine learning tool that processes key human rights documents and automatically tags them with specific categories. The feature, built on top of HURIDOCS’s open-source database software Uwazi, has increased informational accuracy and reduced the workload of UPR Info staff, allowing the team to reallocate scarce resources to supporting UPR Info’s advocacy programs.
Responsible AI: Integrate manual evaluation. Experiment with AI on low-risk issues. Build an approach rooted in human rights.
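As a hedged illustration of this kind of supervised tagging, the sketch below trains a standard multi-label text classifier with scikit-learn. The toy documents, category labels, and model choice are assumptions for demonstration, not the actual model behind Uwazi.

```python
# Illustrative multi-label document tagger: supervised learning assigns
# human-rights categories to text. Predicted tags would still pass through
# manual evaluation, consistent with the chapter's lessons.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "Recommendation to guarantee freedom of expression for journalists",
    "Recommendation on the rights of the child and access to education",
    "Recommendation to protect journalists and freedom of the press",
    "Recommendation to expand education access for children",
]
labels = [["expression"], ["children", "education"],
          ["expression"], ["children", "education"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)  # binary indicator matrix, one column per tag

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression()))
model.fit(docs, y)

pred = model.predict(["New recommendation on children's education"])
print(mlb.inverse_transform(pred))  # suggested tags, pending human review
```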
3) PROMPTS: A messaging platform strengthening primary healthcare in Kenya
Kenyan healthcare nonprofit Jacaranda Health created PROMPTS, a digital health tool that has reached three million Kenyan mothers with SMS-based information, empowering them to seek and connect with care at the right time and place. Since launching the tool in 2017, Jacaranda has built and customized the AI that underpins PROMPTS to improve its efficiency, speed, and personalization for mothers at scale. The platform’s AI component has evolved from a natural language processing model into a sophisticated Swahili-speaking large language model that provides information tailored to user profiles and rapidly triages conversations to a human help desk agent if a risk is identified during the exchange.
Responsible AI: First, do no harm. Respect user data. Design for equity and local context. Keep humans in the loop. Share with the community of practice. Focus on sustainability.
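The triage behavior described above can be sketched as a simple routing rule: screen each incoming message for danger signs and escalate flagged messages to a human agent instead of answering automatically. The keyword screen below is a stand-in for PROMPTS’s actual risk model, and the danger signs listed are illustrative.

```python
# Minimal sketch of the triage pattern: every incoming message is screened
# for clinical danger signs, and anything flagged goes to a human help desk
# agent rather than being answered automatically.

DANGER_SIGNS = {"bleeding", "severe pain", "can't feel the baby", "fever"}

def is_high_risk(message: str) -> bool:
    """Stand-in for a trained risk model: flag known danger signs."""
    text = message.lower()
    return any(sign in text for sign in DANGER_SIGNS)

def triage(message: str) -> str:
    if is_high_risk(message):
        return "ESCALATE: route to human help desk agent immediately"
    return "AUTOMATE: reply with tailored, vetted health information"

for sms in ["I have heavy bleeding", "When should I start antenatal visits?"]:
    print(sms, "->", triage(sms))
```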
4) EIDU: An AI-powered learning platform using content personalization to maximize learning in Kenya
EIDU, a social digital learning technology company, integrated into its digital learning platform an AI personalization algorithm that suggests learning content for individual students. The tool, used to support traditional classroom learning, helps teachers select content based on the probability that a student will be able to complete it. It has enabled individual learners to progress at their own pace and in formats that fit their style of learning, ultimately maximizing students’ learning outcomes. Introducing the AI-based personalization algorithm has also increased digital literacy and helped scale up learning successes.
Responsible AI: Continuously evaluate uses of AI. Recognize the bias in AI. Establish reliability for users.
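One minimal way to implement probability-based content selection, assuming a model that already predicts each student’s chance of completing each item, is to pick the item whose predicted completion probability sits closest to a target success rate, so content is neither trivially easy nor discouragingly hard. The 0.7 target and the example probabilities below are assumptions, not EIDU’s algorithm.

```python
# Sketch of probability-based content selection: given a model's predicted
# probability that this student completes each item, pick the item nearest
# a target success rate.

def pick_next_item(predicted_completion: dict[str, float],
                   target: float = 0.7) -> str:
    """Return the item whose predicted completion probability is closest to target."""
    return min(predicted_completion,
               key=lambda item: abs(predicted_completion[item] - target))

# Hypothetical per-item predictions for one student.
predictions = {"counting-1": 0.95, "addition-2": 0.72, "subtraction-3": 0.40}
print(pick_next_item(predictions))  # -> "addition-2"
```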
5) Farmer.Chat: An AI-powered digital assistant supporting agricultural extension workers around the world
Farmer.Chat is a generative AI assistant developed by nonprofit Digital Green that provides agricultural extension workers with timely, locally tailored agronomic information. The assistant, operational in India, Kenya, and Nigeria, harnesses the capabilities of large language models to fill agronomic knowledge gaps among extension workers. Farmer.Chat has reduced the significant burden that underpaid and overworked extension workers face in their efforts to help smallholder farmers increase productivity and income.
Responsible AI: Create foundational values for the use of AI. Keep humans in the loop.
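A retrieval-augmented generation (RAG) scaffold is one common way to build this kind of assistant: retrieve locally curated agronomy passages relevant to a question, then ground the model’s answer in them. The corpus, the word-overlap retriever (a stand-in for embedding search), and the prompt below are illustrative assumptions, and the final LLM call is left as a stub.

```python
# Minimal RAG scaffold: retrieve locally curated guidance, then build a
# grounded prompt for an LLM. Corpus and prompt wording are illustrative.

CORPUS = [
    "Kenya: plant maize at the onset of the long rains, spacing 75cm x 25cm.",
    "India: for rice blast, avoid excess nitrogen and use resistant varieties.",
    "Nigeria: cassava mosaic disease is managed with clean planting material.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank passages by word overlap with the question (stand-in for embeddings)."""
    q = set(question.lower().split())
    return sorted(CORPUS, key=lambda p: -len(q & set(p.lower().split())))[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this local guidance:\n{context}\n\nQuestion: {question}"

print(build_prompt("How should I space maize in Kenya?"))
# In production, this prompt would be sent to an LLM, with answers
# checked by extension workers (humans in the loop).
```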
6) RobotsMali: An education project using large language models to encourage national language literacy in Mali
RobotsMali, a Malian nonprofit, used AI tools to create locally relevant school books in Bambara, one of Mali’s local languages. Though an estimated 80% of Mali’s population speaks Bambara, few Malians can read or write the language. RobotsMali created books, complete with AI-generated images, to help students learn to read and write the language. The team was able to improve Bambara literacy among primary and secondary students through in-person lessons with these materials. The project also improved efficiency and digital literacy among RobotsMali staff.
Responsible AI: Promote AI for Africa. Amplify generative AI with human ability. AI ethics are for everyone.
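The report does not specify RobotsMali’s tooling, but a hypothetical sketch of drafting a Bambara reading passage with a general-purpose LLM API (here OpenAI’s Python client, with the model choice assumed) might look like the following. Any generated draft would be reviewed and corrected by fluent Bambara speakers before reaching students.

```python
# Hypothetical sketch only: drafting Bambara learning material with a
# general-purpose LLM API. Not RobotsMali's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Write a short, simple reading passage in Bambara for primary school "
    "students about daily life in a Malian village, followed by an English gloss."
)
response = client.chat.completions.create(
    model="gpt-4o",  # model choice is an assumption
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content
print(draft)  # a draft only: review by fluent Bambara speakers is essential
```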
7) ACADIC: Predictive AI tools implementing new ways to track disease spread on the African continent
The Africa-Canada AI & Data Innovation Consortium (ACADIC), a global interdisciplinary consortium, researches and develops new ways to apply existing machine learning techniques to public health problems. The team developed a range of AI-powered tools, including a COVID-19 hotspot detection tool, an early warning system that notifies governments of potential surges, and public dashboards. Their work was used by national governments to inform COVID-19 policy and seen by millions of people globally.
Responsible AI: Create guiding values. Combat inherent biases to prevent misuse.
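As an illustrative example of an early-warning rule in this spirit, the sketch below flags a potential surge when the recent moving average of new cases runs well above the prior window. The window size and threshold are assumptions, not ACADIC’s published method.

```python
# Illustrative surge early-warning rule: alert when the last week of new
# cases averages well above the week before it.

def moving_average(series: list[int], window: int) -> float:
    return sum(series[-window:]) / window

def surge_alert(daily_cases: list[int], window: int = 7,
                ratio: float = 1.2) -> bool:
    """Alert if the last `window` days average 20%+ above the prior window."""
    if len(daily_cases) < 2 * window:
        return False  # not enough history to compare two windows
    recent = moving_average(daily_cases, window)
    prior = sum(daily_cases[-2 * window:-window]) / window
    return prior > 0 and recent / prior >= ratio

cases = [20, 22, 21, 25, 24, 26, 27, 30, 34, 38, 41, 47, 52, 60]
print(surge_alert(cases))  # True: recent week well above the prior week
```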
8) Plantix: A mobile app employing image recognition to improve farmers’ productivity in India
Plantix is an image recognition mobile application, built by its eponymous technology company, that helps smallholder farmers improve crop yields and, ultimately, increase their income and improve their livelihoods. The tool, intended for direct use by farmers, identifies crop diseases and pests from user-uploaded photos and suggests remedies. It has global reach, thanks in part to Plantix’s local research partner, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), which trains its network of smallholder farmers on the app and develops new training sets to improve the tool’s accuracy and scope.
Responsible AI: Create guiding values. Build community among users.
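A minimal sketch of the diagnose-and-advise flow under stated assumptions: an image classifier predicts a disease or pest from a photo, low-confidence cases are routed for expert review (and can seed new training data, echoing ICRISAT’s role), and confident diagnoses come back with a remedy. The stub classifier, labels, and remedies below are hypothetical.

```python
# Sketch of a confidence-thresholded diagnosis flow for crop photos.
# The classifier is a stub; a real app would run a trained CNN.

REMEDIES = {
    "tomato_early_blight": "Remove affected leaves; apply a recommended fungicide.",
    "maize_fall_armyworm": "Scout fields early; use approved biopesticides.",
}

def classify_image(image_path: str) -> tuple[str, float]:
    """Stand-in for a trained image model: returns (label, confidence)."""
    return "tomato_early_blight", 0.91  # hypothetical prediction

def diagnose(image_path: str, threshold: float = 0.8) -> str:
    label, confidence = classify_image(image_path)
    if confidence < threshold:
        # Low-confidence photos go to experts and can become training data.
        return "Uncertain diagnosis: forwarding the photo for expert review."
    return f"{label}: {REMEDIES[label]}"

print(diagnose("leaf_photo.jpg"))
```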
Conclusion
The work on responsible AI in practice reveals emerging strategies to help organizations navigate the power and vulnerabilities of opaque, probabilistic AI technologies. A key opportunity is to strengthen the dialogue and exchange between implementing organizations and the global bodies publishing general principles of responsible AI.
The work on impact in practice underscores that most organizations are confident they are driving impact, using A/B testing and other forms of measurement to refine their processes. Because AI adoption is recent, they can still compare today’s processes with those they used before AI; the key challenge is translating that confidence and qualitative data into quantitative, generalizable evidence.
Within these challenges lie opportunities. Resolving them requires greater community and connection across the value chain and within the broader AI development community. The actions of implementing organizations, such as contributing to digital public infrastructure (DPI), digital public goods (DPGs), and best practices, shape the emerging political economy of AI, especially as it intersects with the development sector.