Innovation Without Inclusion Is Failure: Rethinking How We Fund Global Health

For more than 20 years, I’ve had the privilege, and responsibility, of working across over 100 countries, supporting and implementing more than 500 international development programmes. I’ve worked with governments, multilaterals, NGOs, researchers, and local organisations across the Global South, designing programmes in health, governance, sustainability, workforce development, and systems strengthening.

If there is one lesson that experience has reinforced again and again, it is this:

Innovation without inclusion does not fail accidentally; it fails by design.

Breakthroughs Don’t Automatically Translate Into Impact

Global health funding is often framed around breakthroughs: new technologies, new models, new discoveries. And to be clear: science matters. Research matters. Innovation matters.

But what I have seen repeatedly, from rural health facilities to national policy tables, is that breakthroughs alone do not improve lives.

Too often, funding systems reward novelty over relevance, speed over sustainability, and outputs over outcomes. Decisions about what problems matter, and how they should be solved, are frequently made far from the communities most affected by them.

The result is a familiar pattern:

  • Well-funded innovations that never reach scale
  • Pilots that end when funding cycles close
  • Technologies that exist, but are not trusted
  • Systems that look efficient on paper but feel alien on the ground

This isn’t a failure of science. It’s a failure of inclusion.

Communities Are Not Beneficiaries; They Are Co-Designers

Throughout my career, my work has been consistently centred on one principle: communities are not passive beneficiaries of development; they are active partners in it.

Across the programmes we designed and delivered, inclusion was not an add-on or a compliance requirement. It was embedded into how we worked:

  • Participatory programme design from the outset
  • Community-led needs assessments and solution testing
  • Beneficiary feedback and complaints mechanisms built into delivery
  • Safeguarding not as policy, but as practice
  • Continuous learning loops between communities, implementers, and policymakers

When communities are treated as subjects, programmes may deliver outputs. When communities are treated as partners, programmes deliver impact.

In global health, lived experience is not “soft data.” It is intelligence.

Funding Models Shape Power; Power Shapes Outcomes

Funding is never neutral. Who controls resources controls priorities, timelines, and definitions of success.

Over the years, I’ve seen how traditional funding models can unintentionally reinforce exclusion:

  • Short funding cycles that discourage long-term trust-building
  • Rigid logframes that leave little room for community adaptation
  • Reporting systems designed for donors, not for learning
  • Accountability flowing upward, but rarely downward to communities

When funding structures do not create space for local voice, they silence it.

And when community feedback is absent from decision-making, we end up optimising for systems, not people.

Inclusion Is a Governance Issue, Not a Technical One

Inclusive global health is often framed as a technical challenge: better data, better tools, better indicators.

In reality, it is a governance challenge.

True inclusion requires:

  • Shared decision-making power, not just consultation
  • Transparency about how priorities are set
  • Accountability mechanisms that communities can actually use
  • Willingness to fund capacity, not just delivery
  • Trust in local institutions, even when it feels uncomfortable

In the programmes I’ve supported, the most sustainable outcomes emerged when governments, communities, and implementers were aligned—not because they were forced to be, but because systems were designed to support collaboration.

Sustainability Is Built Through Ownership

Sustainability does not come from perfect project design. It comes from ownership.

When communities see themselves reflected in programmes—when their knowledge shapes solutions, when feedback leads to change, when safeguarding is real and visible—systems last beyond funding cycles.

I have seen health initiatives continue years after donors exited, not because they were expensive or technologically advanced, but because communities believed in them and governments trusted them.

That is what inclusion makes possible.

Rethinking the Future of Global Health Funding

If we are serious about impact, global health funding must evolve.

That means:

  • Funding inclusion as infrastructure, not overhead
  • Valuing lived experience alongside scientific expertise
  • Designing accountability systems that serve communities, not just funders
  • Supporting long-term partnerships instead of short-term projects
  • Measuring success by trust, access, and equity—not just scale

At InclusiveAIHub, this thinking shapes how we approach innovation, governance, and systems design. Whether working with technology, data, or policy, our starting point remains the same: people first, systems second.

A Final Reflection

After two decades in international development, I no longer believe the biggest challenge in global health is a lack of innovation.

It is a lack of intentional inclusion.

Innovation that does not shift power will not shift outcomes. Funding that does not centre communities will not create equity. And systems built without trust will always struggle to deliver impact.

Inclusion is not a moral add-on to innovation. It is the condition that makes innovation work.

GEORGE GOPAL OKELLO Programmes Director, #InclusiveAIHub

📌InclusiveAIHub is currently an independent initiative – donations support content creation, research, and operating costs.


Short-Term Gains, Long-Term Harm: A Development Lens on AI and Global Shifts

In my two decades working in international development, coordinating over 500 programmes in more than 100 countries, I’ve learned that human progress is never linear. Every era brings powerful innovations, followed by profound questions about equity, power, and who benefits from change.

This lesson now echoes with new urgency as the world experiences not just technological disruption through artificial intelligence (AI) but also tectonic shifts in global geopolitics, where the priorities of states and funders are being rebalanced away from international cooperation and social investment toward competitive and military positioning. The consequences of this twin transformation, technological and geopolitical, will shape the opportunities and vulnerabilities of the poor, marginalised, and overlooked far more than most policymakers realise.

The Development Lessons We Forget Too Soon

When I began my career, the international development sector was anchored around shared principles: capacity building, equitable systems, and long-term sustainable impact. Development successes were measured not by quick outputs but by whether communities could sustain progress long after external support ended.

I remember leading health systems strengthening initiatives across East Africa, the Middle East, and South East Asia. Some programmes produced impressive short-term metrics: faster reporting times, streamlined logistics, better dashboards. Yet a few years after donor funding concluded, critical systems faltered. Clinics struggled to maintain quality services, boards lacked the governance capacity to steer adaptations, and communities saw gains slip away.

What distinguished the programmes that lasted wasn’t flashy reporting or compliance checkboxes; it was whether local actors had ownership, governance structures were embedded, and there was investment in people, not just processes. Sustainability was never about speed; it was about resilience.

An AI “Tsunami” Meets a Shifting World Order

Today, the world stands at an inflection point not unlike those in development history, but the scale and speed are unprecedented.

At the 2026 World Economic Forum in Davos, the head of the International Monetary Fund warned that AI is hitting labour markets like a “tsunami,” with around 60% of jobs in advanced economies and 40% globally expected to be transformed, enhanced, or displaced by AI technologies in the coming years. This is not speculative; it is grounded in emerging research and early labour market disruptions, particularly for entry-level jobs and younger workers who are often the first to feel displacement effects.

What is less often discussed, however, is how these technological shifts intersect with wider geopolitical trends. As tensions rise between major powers, government spending is increasingly prioritised toward defence and strategic competition. In several contexts, international development aid budgets have stagnated or been reshaped toward security-related objectives, not broad social investment in health, education, and community resilience.

This reallocation of global resources compounds the risks of AI disruption, especially for those who have historically lacked power, voice, and access to decision-making processes.

What This Means for the Poor and Vulnerable

AI’s labour market impact is uneven. Even in wealthier countries, entry-level roles, a traditional stepping stone for youth and lower-skilled workers, are at risk of automation, making it harder for new graduates and marginalised populations to enter the workforce.

In low-income countries, the dynamics are different but no less perilous. Many of these economies already operate with fragile education systems, limited digital infrastructure, and minimal social protection mechanisms. If AI changes labour demand in advanced economies faster than capacity can be built elsewhere, we risk deepening global inequality, not just within countries, but between them.

Moreover, international development itself is an ecosystem that supports economic participation and social cohesion. As funding streams shrink or shift focus, the budgets and programmes that might help vulnerable populations adapt (through skills development, inclusive social protection, and equitable technology access) are jeopardised.

Whose Voices Are Heard, and Whose Are Not?

One of the sharpest lessons from my 20+ years of programme delivery is this: the people closest to the problem rarely shape the solutions.

Yet those who are most affected (informal workers in rural regions, young people without access to formal education, people living with disabilities, migrants outside mainstream systems) are the very communities most likely to be marginalised by both AI disruption and shifting global priorities.

AI conversations at elite global forums often centre on macro trends and high-level policy. Rarely do they include the lived experiences of those who lack access to AI tools, digital literacy, or secure employment to begin with. Their voices are not merely absent; they are structurally excluded from the mechanisms shaping decisions that will affect their futures.

What International Development Can Teach the AI Era

If there is a message from development practice that the world now needs more than ever, it is this:

Sustainable change requires careful governance, inclusive participation, local ownership, and resilient social systems.

In international development, we learned that:

  • Quick wins without community ownership don’t last.
  • Services that don’t embed accountability collapse.
  • Structural inequalities remain unless explicitly addressed.

The same principles apply to AI and the geopolitical context of the emerging global order. Fast adoption of AI without inclusive governance risks widening disparities. Investments that favour power projection over social welfare undermine the very foundations needed for equitable participation in the future economy.

A Call to Redefine What Progress Means

As AI reshapes economic landscapes, and as geopolitical priorities shift resources away from social investment, we need a broader definition of progress – one that values inclusive opportunity and equitable resilience as much as innovation and competitiveness.

For the poor and marginalised, AI will not be a neutral force. Without governance frameworks that prioritise their inclusion, their voices in policymaking, and their access to skills and resources, AI will be another mechanism that reinforces exclusion rather than dismantling it.

We have seen similar patterns before in development. Short-term gains can mask long-term harm if systems are not built with people at the centre, not just at the periphery. The decisions we make now – about AI, geopolitics, and investment priorities – will determine whether the global future is one of shared prosperity or widened inequality.

Allowing AI to develop without attention to inclusion, governance, and equity is not just a technological oversight. It is a choice about whose futures matter – and whose do not.

GEORGE GOPAL OKELLO Programmes Director, #InclusiveAIHub


