What We Learned Running Inclusive AI Innovation Sprints in Africa (And What We’d Do Differently Next Time)

Global Disability Innovation Hub
March 31, 2026
Kenya, Ghana, Uganda
Case Studies and Reports

A question ran through all of it: what factors shape sustained outcomes from innovation sprints in inclusive AI, and how do these programs strengthen inclusive technology ecosystems over time? This blog is our attempt to answer it honestly. 

Over the past year, we’ve been running a series of innovation sprints focused on inclusive AI for speech technologies in Africa — first in Ghana, then in Kenya, and now beginning in Uganda. 

The goal was simple to state but harder to achieve: supporting local innovators in building meaningful solutions for people living with speech impairments, using AI and speech technologies that actually reflect local languages, contexts, and lived realities. 

What we learned along the way wasn’t just about models, datasets, or apps. It was about how innovation actually unfolds when you put real people, real constraints, and real expectations into the room. 

Approach 

This reflection draws on the design and delivery of innovation sprints conducted in Ghana and Kenya, focused on inclusive AI and speech technologies. The programs brought together technologists, researchers, and people living with speech impairments to work collaboratively over structured sprint cycles lasting several months. 

Observations are based on program implementation, participant engagement, and follow-up interactions with teams and partners. Rather than evaluating success solely through short-term outputs such as the number of technologies developed, this reflection considers broader indicators including capability development, user engagement, partnership formation, and emerging career and learning pathways. 

Starting with Ghana: momentum, motivation, and limits 

In Ghana, the sprint was intentionally focused on application development. Participants were enthusiastic, technically capable, and deeply motivated by the social impact of the problem space. The energy was real. The demo day looked good. One team went on to form a company and continues to build in this space. 

The program was implemented through a single academic partner, which served as both host and delivery lead. This provided strong technical grounding and coordination, but it also meant that community access, lived-experience engagement, and practitioner networks had to be built largely within the sprint itself. 

Over the following months, something else became clear: many teams struggled to move beyond early concepts. User engagement was often limited or intermittent. Solutions were thoughtful, but largely incremental. After the sprint ended, most participants gradually disengaged. 

This wasn’t because people didn’t care or lacked talent. It reflected something more structural: building inclusive technology is harder than it looks when sustained community connection and real-world service contexts are not already embedded into the delivery model. 

Ghana gave us momentum — and it also gave us a baseline.  

Kenya: slowing down to go deeper 

By the time we reached Kenya, we carried those lessons with us. One key change was the delivery structure itself. 

In Kenya, the sprint was implemented through a partnership model: 

  • a community-based implementing partner with deep understanding of disability, strong trust within the disability community, and established relationships with professionals and service providers, and  
  • a separate academic partner providing research depth, institutional support, and learning continuity.  

This division of roles mattered. 

Alongside this, the sprint design changed in another important way: we stopped assuming everyone wanted to build an app. 

Participants explicitly told us they were more interested in: 

  • research questions  
  • improving models  
  • understanding ethical, linguistic, and safeguarding dimensions of speech technology  

So instead of forcing a single output, we introduced three parallel tracks: research, model development, and applications. 

This changed the texture of the work. 

There was less rush to demo something polished. More time was spent interrogating assumptions, sitting with uncomfortable constraints, and involving people living with speech impairments early in shaping what should be built — or whether something should be built at all. 

The outcomes weren’t polished. But they were more grounded. 

A deliberate shift in program design 

A key design choice in Kenya was to create structured opportunities for people living with speech impairments to engage candidly with teams and mentors. Dedicated sessions were held where participants could freely voice their feedback and challenge ideas without pressure to settle for partial solutions. 

This approach helped shift the dynamic from validation to accountability and reinforced the principle that co-creation is not a one-time consultation, but an ongoing dialogue that shapes the direction and quality of solutions. 

The uncomfortable truth about “user involvement” 

One of the most common phrases in inclusive innovation is “put users at the centre.” What we learned is that this phrase hides more than it reveals. 

In both Ghana and Kenya, people living with speech impairments were involved. The difference wasn’t whether they were present — it was when and how their input shaped decisions. 

When lived experience enters only as validation at the end, it tends to confirm what teams already want to build. 

When it enters early — supported by trusted community partners and embedded into the program structure — it changes direction, scope, and sometimes ambition. 

That shift is uncomfortable. It slows things down. But it produces work that holds up better outside the sprint environment.  

Innovation sprints are not neutral containers 

Another lesson surprised us: innovation sprints themselves are not neutral tools. Their structure quietly shapes outcomes. 

  • Short timelines privilege confident builders over careful learners  
  • Single-track programs flatten different kinds of contribution  
  • Demo-day incentives reward optics over durability  

Once we acknowledged this, we stopped asking: 

“How do we get better outputs?” 

and started asking: 

“What kinds of learning are we actually enabling?” 

That question led us to design sprints differently — with midpoints for recalibration, space for research, and delivery partnerships that combine academic strength with community-rooted implementation.  

Emerging Indicators of Impact 

While traditional innovation programs often measure success through the number of technologies developed or startups created, the outcomes observed in these sprints suggest a broader set of indicators. 

Early signals of impact have included participants pursuing internships, employment, or further study related to inclusive technology, as well as continued collaboration with disability organizations and research institutions. 

For example, in the Kenya cohort, two participants secured internships at Strathmore University, with one subsequently employed by iLab at Strathmore. 

These trajectories illustrate how the influence of the program can extend beyond the sprint itself, shaping professional pathways and strengthening the foundations of inclusive technology ecosystems over time.  

So what are innovation sprints really good for? 

After Ghana and Kenya, our answer is more modest, and more honest, than when we started. 

Innovation sprints are good at: 

  • building local capability  
  • surfacing real constraints early  
  • creating shared language between technologists and communities  
  • strengthening collaboration across sectors  
  • helping a small number of ideas find the conditions they need to grow  

They are not factories for startups. 
They are not guarantees of scale. 
And that’s okay. 

What matters is whether they leave behind stronger people, better questions, and clearer pathways than before.  

Implications for Program Design and Measurement 

The experience from these innovation sprints suggests that success should be defined not only by immediate outputs, but also by the conditions created for sustained engagement and growth. 

Innovation programs in inclusive technology may therefore benefit from tracking a combination of short-term outputs and longer-term indicators, including capability development, collaboration networks, and participant career progression. 

By adopting a broader view of impact, funders and program designers can better understand how initiatives like these contribute to building resilient and inclusive technology ecosystems that evolve over time. 

Closing reflection 

As we move into Uganda — and plan future sprints elsewhere — we’re holding onto that humility. The work is slower than hype suggests. But when it’s grounded, it lasts longer. 

And for people living with speech impairments, that durability matters far more than a perfect demo. 

That durability — in people, in partnerships, in questions worth pursuing — is how inclusive technology ecosystems are actually strengthened over time.