Jacaranda’s Principles for Responsible, Person-Centered AI

Ensuring we remain safe, responsible, and focused on the mothers we serve in the design and deployment of AI-led solutions.

As our programs have scaled, we are increasingly using AI to build reliable, cost-efficient services for millions of users. The field is moving rapidly, and as an organisation committed to the highest standards of development and medical ethics, we have established the following principles for responsible, person-centered AI and data science.

1. First, do no harm

We work in healthcare, and put the health of the client and community first. Our use of AI is intended to improve the quality and safety of the services that we provide our clients. This means that our threshold for risk is governed by the potential outcome for the client: will this tool help an individual get better care, faster?

2. Respect user data

We are rigorous about the privacy and safety of the women who rely on our services through their pregnancies. We take consent seriously, applying design approaches that make our double opt-in interpretable for different users, and being transparent about where data is used. Beyond phone numbers, we collect no personally identifiable information, and the data we do hold is stored on secure cloud servers with tight security measures to limit access and reduce the risk of exposure.
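As an illustration, a double opt-in can be modelled as a small state machine. The sketch below is hypothetical: the state names, keywords, and handle_reply() helper are illustrative assumptions, not our production flow, which runs on messaging tooling such as RapidPro.

```python
# Illustrative sketch only: a double opt-in consent flow modelled as a
# small state machine. State names and keywords are hypothetical.
from enum import Enum

class ConsentState(Enum):
    NONE = "none"            # never contacted or enrolled
    INVITED = "invited"      # first opt-in received, confirmation requested
    CONFIRMED = "confirmed"  # second confirmation received: fully opted in
    DECLINED = "declined"    # opted out: send nothing further

def handle_reply(state: ConsentState, reply: str) -> ConsentState:
    """Advance the consent state on an inbound reply (double opt-in)."""
    answer = reply.strip().upper()
    if answer in {"STOP", "NO"}:
        return ConsentState.DECLINED
    if answer == "YES" and state is ConsentState.NONE:
        return ConsentState.INVITED    # first YES: ask the user to confirm
    if answer == "YES" and state is ConsentState.INVITED:
        return ConsentState.CONFIRMED  # second YES: consent is complete
    return state  # unrecognised replies never change consent
```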

3. Design for equity and local context

Our models are trained on data that reflects the context in which they are deployed, to ensure the relevance and accessibility of the information mothers receive. We work towards embedding local context – whether language or local climate data – in the models we use. We see it as a responsibility to advance AI in areas that have been left behind, where health access and quality are typically poorer; for AI, that means investing in and building datasets that reflect the communities we work with. Our baseline for AI design in these contexts begins with the most vulnerable, and we ensure that those populations understand the technologies they are exposed to.

4. Keep humans-in-the-loop

Given the sensitivity of the information our models process, we rely on trained professionals to check and vet AI-generated responses before they are sent to mothers, and to connect personally with high-risk cases. We proactively seek user feedback and continually fine-tune our models with it, using human reviewers to teach the model safety nuances in different contexts and to establish guardrails.
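As a sketch of what keeping humans in the loop can look like in code, the routing logic below gates every AI draft behind human review and escalates high-risk cases. The Draft fields, the 0.8 threshold, and the queue names are illustrative assumptions, not our production system.

```python
# Illustrative sketch: no AI-generated draft is dispatched directly.
# Field names, the 0.8 threshold, and the queues are hypothetical.
from dataclasses import dataclass

@dataclass
class Draft:
    question: str      # the mother's inbound question
    answer: str        # the model's proposed reply
    risk_score: float  # model-estimated clinical urgency, 0.0-1.0

def route(draft: Draft, review_queue: list, escalation_queue: list) -> None:
    """Every draft passes through a trained professional before sending."""
    if draft.risk_score >= 0.8:
        escalation_queue.append(draft)  # high risk: a professional reaches out personally
    else:
        review_queue.append(draft)      # all other drafts are vetted before dispatch
```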

5. Share with community

From the outset, we’ve built on open source tools and technology, like RapidPro and XLM-RoBERTa. As we expand our capacity in AI, we see it as our responsibility to open source our own tested models, like UlizaLlama, to expand AI-driven support for underserved populations at a significantly lower cost. Our vision is that other implementers will be able to ‘plug and play’ our models in new contexts and languages, without having to build a new model or a vast training dataset.
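In practice, ‘plug and play’ means an implementer can pull a published model straight from a model hub. The sketch below assumes UlizaLlama is available on Hugging Face under the identifier shown; verify the current name, license, and hardware requirements on the hub before use.

```python
# Illustrative sketch: loading an open-sourced model for inference with
# Hugging Face transformers. The model identifier is an assumption;
# verify it (and the license) on huggingface.co before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jacaranda/UlizaLlama"  # assumed hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example query in Swahili: "What are the danger signs during pregnancy?"
prompt = "Je, ni dalili zipi za hatari wakati wa ujauzito?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```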

6. Focus on sustainability

Our aim is to make our models as affordable and cost-neutral as possible so that they can be scaled and sustained in constrained government health systems. We capitalize on existing, well-resourced AI technologies and fine-tune them for our context and use case – at a fraction of the cost and team size. In turn, we open source our customized, pre-tested models so that other organizations can cheaply and quickly import them into their own services and systems.
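One common way to fine-tune a large existing model at a fraction of the cost is parameter-efficient fine-tuning, for example LoRA via the peft library. The sketch below is a generic illustration of that technique with an assumed base model and illustrative hyperparameters, not a description of our actual training setup.

```python
# Illustrative sketch: parameter-efficient fine-tuning with LoRA (peft),
# which trains a tiny fraction of weights on top of a frozen base model.
# The base model name and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base
config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```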
