This week, we enrolled our 2 millionth mother in PROMPTS, our AI-enabled digital health service. The scale feels hard to comprehend. PROMPTS started out in three facilities in Kiambu County.
Now, mothers sign up to the service in 1,110+ public facilities across 20 Kenyan counties, and early operations in Ghana and Eswatini signal the start of a new global chapter.
We’ve matched this growing scale with evidence of impact, through rigorous research and the firsthand stories of the mothers we meet in the field. Mums like Mary, who shared feedback on her experiences of care with PROMPTS over 30 times because she believed in its potential to drive real improvements in her local clinic. Or women like Beth who, off the back of a referral message, decided to head to hospital that night instead of waiting until morning, saving her life and her baby’s.
Mary, Beth, and our 2 million other PROMPTS mums represent different demographics, geographies and stages of pregnancy with, accordingly, different, evolving needs. Our journey to scale is, therefore, characterized by a ‘test, adapt, and test again’ approach: learning directly from end users, using data to inform changes, and maintaining an open line of communication with our government partners. Here are three things we’ve learnt along the way.
1. Find a balance between impact and scale
There’s a tricky tension between driving towards impact and effectively scaling that impact. These two pathways can look very different. Over the years, we’ve learnt the value of ruthlessness: cutting what might initially deliver impact but isn’t practical for scale-up. We’ve found that an upfront awareness of context is critical to this, both in terms of its strengths (‘is my solution offering something better/cheaper/easier for mums than what they already have access to?’) and its limitations (‘is there sufficient infrastructure/demand/services to make my solution work at scale?’).
Two examples illustrate this tension. During COVID-19, we rolled out ‘virtual appointments’ on PROMPTS as a basic screening mechanism to soften the impact of reduced hospital access. It worked – briefly. Scaling it revealed that not only did many reported issues require in-person visits anyway, but also that, once clinics reopened, we were compromising demand for in-person appointments with a less effective alternative. On the other hand, contextual limitations have halted plans to scale maternal counseling support on PROMPTS. While the platform is set up to identify mums with mental health issues (e.g. postpartum depression), without a broad network of free counseling services in Kenya, we can’t (currently) ensure the right pathway of support.
2. ‘Active listening’ helps adapt solutions as they scale
One perpetual question is the timing of scale-up. On the one hand, it’s dangerous to let solutions languish in the research world. On the other, scaling too quickly might mean launching into a context that isn’t fully understood. ‘Active listening’ ensures a solution can continuously adapt to the evolving needs and context of a growing number of end users – mitigating drop-off and maintaining relevance.
An example of this is how we adapt PROMPTS messages. We know we create more value for mums if messages are relevant to their context and condition, reach them at the right time in pregnancy, and use interpretable language. So after each message we ask, ‘Did this answer your question?’ Last year we noticed that, for nutrition-related questions, 31% of mothers responded ‘no’, prompting us to train our AI to offer more specific answers (e.g. egg-related questions get a response specific to that food group vs. a generic ‘nutrition’ response).
Similarly, asking questions like ‘What do you think happens during delivery?’ helps gauge knowledge gaps in real-time and understand the terminology mothers relate to. Our next challenge is to build this level of specificity at scale, using the millions of questions and answers from historical messaging to generate more ‘personalized’ AI support. (Learn more about our active listening approach).
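The ask-log-retrain loop described above can be sketched as a simple feedback aggregator: tally ‘no’ responses by topic and flag any topic whose miss rate crosses a threshold as a candidate for more specific answers. This is a minimal illustration, not our production pipeline – the topic names, 30% threshold, and function names are all hypothetical.

```python
from collections import defaultdict

def flag_topics_for_retraining(feedback, threshold=0.30):
    """Given (topic, answered_yes) pairs from 'Did this answer your
    question?' prompts, return topics whose 'no' rate exceeds the
    threshold -- candidates for more specific AI answers.
    (Illustrative sketch only.)"""
    counts = defaultdict(lambda: [0, 0])  # topic -> [no_count, total]
    for topic, answered_yes in feedback:
        counts[topic][1] += 1
        if not answered_yes:
            counts[topic][0] += 1
    return {
        topic: no / total
        for topic, (no, total) in counts.items()
        if no / total > threshold
    }

# Hypothetical feedback mirroring the 31% 'no' rate on nutrition questions
feedback = (
    [("nutrition", False)] * 31 + [("nutrition", True)] * 69
    + [("danger_signs", True)] * 95 + [("danger_signs", False)] * 5
)
print(flag_topics_for_retraining(feedback))  # nutrition flagged at 0.31
```

In practice the flagged topics would feed a review queue where new, more specific answer templates are drafted and tested before the model is updated.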
3. Design, build, and iterate locally
There’s a common misconception that building complex tech-based systems requires a vast team and a large budget. Our AI-enabled PROMPTS helpdesk proves the opposite. PROMPTS was initially designed as a one-way information service, but we hadn’t anticipated mothers messaging back. We quickly saw potential for a wider use case: supporting mums reporting a risk and gathering client-side data to improve care quality. As incoming questions grew from three a week to 100 a day, we needed to rapidly develop software that could cater for this growing volume.
Our tech team of two at the time stepped in, customizing off-the-shelf models to create a machine learning infrastructure suitable for our context. Doing this in-house meant PROMPTS could be:
- Rapidly customizable to sustain impact across multiple use cases
- Simple, to ensure its maintenance by a small local team without the need for lots of external partners
- Cheap, to support more mums for the same cost, or less – and, in time, ensure its uptake within government budgets.
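To make the “small team, lightweight model” point concrete, here is a minimal text classifier of the kind a two-person tech team could build and maintain in-house: a naive Bayes model in pure standard-library Python that routes an incoming question to a topic. It is a hedged stand-in, not the actual PROMPTS stack, and the training examples and topic labels are invented for illustration.

```python
import math
from collections import Counter, defaultdict

class TinyClassifier:
    """Minimal multinomial naive Bayes text classifier -- a stand-in
    for the kind of lightweight, maintainable model a small team can
    own end-to-end. (Illustrative only; not the PROMPTS codebase.)"""

    def fit(self, texts, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        total = sum(self.label_counts.values())
        best, best_score = None, float("-inf")
        for label, count in self.label_counts.items():
            score = math.log(count / total)  # log prior
            n = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Laplace smoothing so unseen words don't zero out a label
                score += math.log(
                    (self.word_counts[label][word] + 1) / (n + len(self.vocab))
                )
            if score > best_score:
                best, best_score = label, score
        return best

# Tiny hypothetical training set: route questions to a topic
clf = TinyClassifier().fit(
    ["can i eat eggs while pregnant", "what foods help the baby grow",
     "i have heavy bleeding", "severe headache and blurred vision"],
    ["nutrition", "nutrition", "danger_sign", "danger_sign"],
)
print(clf.predict("is it safe to eat fish"))  # -> "nutrition"
```

A model like this is cheap to run, transparent to debug, and easy to retrain locally as new questions arrive – the properties the bullets above describe, even if a production system would use more capable off-the-shelf models.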
There’s no one-size-fits-all solution to scale-up.
But for any organization serving real people in complex contexts, the principles of rigorous, continuous testing and open conversations with end users ring true. As we chart towards new milestones through PROMPTS, we’d love to hear and learn from other organizations scaling solutions in this field. Get in touch: [email protected].