Recently, I had a great conversation on the Social Norms Chat podcast about something I’ve become increasingly passionate about: how artificial intelligence can transform how we design and deliver social and behaviour change programmes. If you’re working in international development or any field where understanding human behaviour matters, this is worth paying attention to.
Let me be clear upfront: I’m not a technical person. I didn’t come to AI from computer science. I came to it from a very practical place: frustration with how much valuable knowledge sits unused in our sector, and excitement about finally having tools that could change that.
When I work with AI tools, I think about them in two distinct ways, and this framing has been incredibly helpful for teams I work with.
First, think of AI as a very powerful calculator. It excels at recognising patterns, segmenting data, linking concepts, and analysing vast amounts of information that would be impossible for humans to process effectively. This is genuinely transformative for international development work. Consider how many evaluations are produced each year that nobody really reads. How many project reports gather digital dust. We have enormous amounts of knowledge that we simply don’t engage with properly. AI can change that fundamental problem.
Second, think of AI as a capable coworker. This is where the creative applications live. When I need to develop communication products, design dashboards, or create visual representations of complex data, AI becomes an incredibly useful collaborator. It’s not about replacing human expertise but rather about extending what we can accomplish with the time and resources we have.
The key distinction here matters: one is about making sense of what already exists (the calculator function), and the other is about creating something new (the coworker function). Both are valuable, but they serve different purposes in our work.
One of the biggest changes I’ve seen is how AI allows us to move away from the traditional model of static reports. You know the pattern: months of work go into research, it gets delivered as a PDF, and then it sits on someone’s hard drive, rarely opened again.
Instead, AI enables us to create living, interactive insights. I’ve worked with teams to build custom chatbots trained on specific research data. Rather than reading a 50-page report, stakeholders can ask questions directly: “What did young people in Nairobi say about contraception access?” or “Show me the key barriers to handwashing in rural settings.” The bot can pull relevant quotes, synthesise themes, and present findings in exactly the format needed for that moment.
This fundamentally changes how knowledge gets used. It becomes accessible, searchable, and actionable in ways that traditional deliverables simply cannot match.
Let me share some concrete examples of how organisations are already using AI effectively in behaviour change work:
Farmer Chat (developed by Digital Green) creates personalised agricultural advice for smallholder farmers. The system synthesises enormous amounts of agricultural research, weather data, and local context to provide tailored guidance. This is AI working at scale to democratise expertise that was previously accessible only to a privileged few.
UNDP Panama used UrbanistAI, a generative AI tool that allows users to visualise and render urban design ideas in real time. By integrating this technology into their workshops, they made the participatory process more dynamic and inclusive, and fostered a sense of ownership over the results. The tool helped bridge the gap between ideas and reality by giving instant visual feedback and facilitating a more engaging dialogue between participants.
Jacaranda Health in Kenya is using AI to analyse behavioural data from their maternal health programmes. They’re identifying patterns in when and why women engage with services, which is informing more responsive programme design. The AI isn’t making decisions; it’s revealing insights that human experts then act upon.
Before you rush off to experiment with ChatGPT on your project data, we need to talk about data protection. This is non-negotiable.
Never, and I mean never, upload research data, evaluation findings, or any information about communities into free AI tools. These tools use inputs to train their models. You would essentially be giving away sensitive information about vulnerable populations.
For professional use, you need enterprise-grade tools with data-processing agreements that guarantee your inputs will not be used to train the underlying models.
Many large organisations now have AI governance frameworks in place. If yours doesn’t, please reach out to me so I can point you in the right direction. The data protection risks are real and serious.
One of the most common concerns I hear is: “But AI is biased. How can we use it responsibly in global development work?”
This is a valid concern, and here’s my response: yes, AI models carry biases, particularly Western biases, because they’re trained predominantly on Western data. But the solution isn’t to avoid AI entirely. It’s to remain the expert in the room.
AI should inform your thinking, not replace it. When I use AI to analyse qualitative data or develop intervention designs, I’m constantly checking its outputs against my knowledge of the context, the community, and the theoretical frameworks that should guide behaviour change work. The AI might surface patterns I’d missed, but I’m the one who determines whether those patterns are meaningful and appropriate.
I’m also encouraged by emerging companies like Lelapa AI in South Africa, which are building models specifically for African contexts, trained on African languages and data. Supporting these companies matters. We need more diverse voices in AI development.
If you’re feeling overwhelmed, here’s a practical starting point:
Pick one small, contained task in your current workload. Maybe it’s analysing feedback from a recent workshop, or creating a summary of key literature on a topic, or developing different versions of a communication message to test with audiences.
Try using AI for that one task. See what works and what doesn’t. Document your process. Share what you learn with colleagues. Then iterate.
I learnt by doing. When ChatGPT became publicly available, I created my first custom GPT within 10 days, not because I’m technically skilled, but because I was willing to experiment. Some things worked brilliantly. Others failed spectacularly. Both were valuable learning experiences.
Here’s where I think we’re heading, and where I hope we’ll get faster: imagine having large language models specifically trained for individual countries in the Global South. Models that understand local languages, customs, social norms, and contexts intimately.
These wouldn’t be general-purpose tools trying (and often failing) to understand everywhere. They’d be deeply contextual, trained on local research, local languages, and local knowledge systems. They could support programme design that’s genuinely responsive to community needs rather than imposed from outside.
We’re not there yet, but the technical capability exists. What we need is investment and commitment to building these tools in partnership with local researchers, institutions, and communities.
I want to end with something that’s been troubling me recently. I’ve been working on projects where other consultants submit AI-generated content that sounds professional but lacks substance. The grammar is perfect, the structure is clean, but the nuance, the so-what, and the contextual understanding are completely missing.
This is a trap we need to avoid. AI can help us write more clearly, translate between languages, or structure our thinking. But it cannot replace the expertise that comes from lived experience, deep contextual knowledge, and genuine understanding of the communities we work with.
If English isn’t your first language, I understand the temptation to use AI to make your writing sound more “professional”. But please, don’t let AI strip away what makes your contribution valuable: your unique perspective, your contextual expertise, and your authentic voice. These are far more important than grammatically perfect sentences.
Whatever you do with AI, document it. When I develop custom GPTs for projects, I include the full instructions as an annex in my final deliverables. When I use AI to analyse data, I note exactly what tools I used and how.
This isn’t just good practice, it’s essential for replication, learning, and accountability. AI tools are evolving rapidly. What worked six months ago might not work the same way today. Documentation ensures we can learn from both successes and failures.
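One lightweight way to make that documentation habitual is to log each AI-assisted step as a structured record the moment you do it. The sketch below is an assumption about what such a log could look like; the file name, field names, and example values are all illustrative, not a prescribed standard.

```python
import json
from datetime import date

def log_ai_use(path, tool, task, prompt_summary, human_review):
    """Append one AI-usage record to a JSON-lines audit file."""
    record = {
        "date": date.today().isoformat(),
        "tool": tool,                   # which tool and model was used
        "task": task,                   # what the AI was asked to do
        "prompt_summary": prompt_summary,
        "human_review": human_review,   # how the outputs were checked
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative entry for a qualitative analysis task.
log_ai_use(
    "ai_usage_log.jsonl",
    tool="ChatGPT",
    task="Thematic coding of workshop feedback",
    prompt_summary="Asked for recurring themes across 40 feedback forms",
    human_review="All themes verified against the raw transcripts",
)
```

A log like this can be attached as an annex to a deliverable, which supports exactly the replication and accountability goals described above.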
AI isn’t going away. It will become increasingly embedded in how we work. The question isn’t whether to engage with it, but how to engage thoughtfully, ethically, and effectively.
For those of us working in social and behaviour change and international development, AI offers genuine opportunities to work smarter, reach more people, and create more responsive programmes. But it requires us to be critical, careful, and committed to keeping human expertise at the centre.
The technology is powerful, but it’s just a tool. Our judgement, our values, our commitment to the communities we serve remain the most important elements of our work.
I have developed a detailed framework to help impact teams embed and work with AI. If you are interested in working with me on this, please reach out to me.
This blog post is based on my conversation on the Social Norms Chat podcast.
👉🏾 How skilled are you at designing for change? Start with the FREE assessment: https://lnkd.in/dK7YPKgR
👉🏾 Join my mailing list for exclusive insights and content! subscribe.osmanadvisoryservices.com