Short-Term Gains, Long-Term Harm: A Development Lens on AI Sustainability

After more than 20 years working in international development, coordinating and implementing over 500 programmes across more than 50 countries, I have learned one uncomfortable truth: short-term success can be dangerously misleading.

I have seen programmes celebrated in donor reports, applauded at conferences, and showcased as models of “impact”, only to quietly collapse a few years later. Clinics closed. Systems reverted. Communities were left no better off than before, sometimes worse.

“What looks like success in year one can become harm by year five if sustainability is not designed in from the start.”

Today, as artificial intelligence is rapidly embedded into development, health systems, humanitarian response, and governance, I am struck by how familiar this pattern feels. The technology may be new, but the risks are not.

Lessons from the Field: When Impact Didn’t Last

In my earlier years, I worked on large-scale health systems strengthening programmes across Europe, the Middle East, Southeast Asia, Latin America, and Africa. Many of these initiatives achieved impressive early results:

  • Increased service delivery
  • Improved reporting metrics
  • Faster operational performance

But over time, a pattern emerged.

Programmes that prioritised speed, visibility, and short-term outputs, often to satisfy funding cycles, struggled to survive once external support ended. In contrast, programmes that invested in governance, local capacity, accountability, and institutional ownership endured.

“Sustainability was never about how fast we moved. It was about whether the system could stand without us.”

This distinction is critical, and deeply relevant to how AI is being deployed today.

AI’s Familiar Mistake: Optimising for the Present

AI systems are often praised for their ability to deliver:

  • Faster decisions
  • Lower costs
  • Predictive insights
  • Operational efficiency

In development and humanitarian contexts, this can mean quicker needs assessments, automated targeting of beneficiaries, or streamlined logistics.

But efficiency alone does not equal sustainability.

When AI systems are optimised primarily for short-term performance, they risk:

  • Reinforcing existing inequalities
  • Excluding communities with weak data infrastructure
  • Prioritising visibility over vulnerability
  • Making decisions without meaningful human accountability

“In development, we learned that what gets measured gets funded, and what gets funded gets prioritised. AI simply accelerates this bias.”

When Systems Reward the Wrong Outcomes

I recall programmes that consistently underperformed, not because the need was lower, but because the data systems were weaker. They served remote, marginalised populations, where reporting was manual, connectivity unreliable, and governance structures under-resourced.

Meanwhile, programmes with sophisticated systems and polished dashboards attracted more funding, regardless of comparative need.

Now imagine AI trained on those same historical patterns.

AI does not merely reflect reality; it amplifies it.

“When AI interprets ‘less data’ as ‘less need,’ it quietly locks exclusion into the system.”

This is how short-term optimisation becomes long-term harm.

Sustainability Is a Governance Question, Not a Technical One

Over two decades, I learned that failing programmes rarely collapsed due to lack of innovation. They collapsed due to weak governance:

  • Unclear accountability
  • Poor oversight
  • Lack of ethical leadership
  • Minimal community ownership

AI faces the same risk.

No amount of technical sophistication can compensate for:

  • Absence of ethical guardrails
  • Tokenistic stakeholder engagement
  • Compliance without accountability
  • Innovation without responsibility

“Technology can scale solutions, but it can also scale neglect.”

Reframing AI Through a Development Lens

Development practice has taught us that sustainability requires:

  • Long-term thinking beyond funding cycles
  • Investment in institutions, not just tools
  • Governance embedded from the start
  • Accountability to those most affected

AI systems must be treated the same way.

This means asking harder questions:

  • Who benefits today, and who bears the cost tomorrow?
  • Who governs these systems when things go wrong?
  • Whose voices are missing from design and deployment?

“If AI is to serve development, it must inherit development’s hardest lessons, not repeat its oldest mistakes.”

A Final Reflection

Short-term gains are seductive. They always have been. They produce headlines, dashboards, and applause.

But after 20 years in the field, I have learned to look beyond the immediate results and ask a simpler question:

Will this still work when we are no longer there?

If AI cannot answer that question ethically, inclusively, and sustainably, then it risks becoming yet another well-intentioned intervention with unintended consequences.

“Sustainability is not an outcome. It is a design choice.”

And that choice must be made now.

Editorial Note – InclusiveAIHub

“AI sustainability cannot be achieved through speed and scale alone. It requires governance, inclusion, and long-term accountability, lessons international development learned the hard way.”

— George Gopal Okello, Programmes Director, InclusiveAIHub


