Publications

Ethics Isn’t What You Say—It’s What Your Systems Reveal!

Over the past 20 years, I’ve worked across more than 100 countries implementing international development programmes, supporting governments, NGOs, and institutions to strengthen systems in health, governance, and organisational delivery. Across that journey, one lesson has remained constant:

Ethics is not defined by what organisations say.
It is defined by what their systems do, especially when no one is watching.


The Gap Between Policy and Practice

Early in my career, I was deeply involved in coordinating audit and compliance processes across organisational systems, reviewing user access controls, password management, incident response procedures, and data governance practices across ERP platforms, HR systems, finance tools, and cloud environments.

On paper, most organisations looked strong:

  • Policies were documented
  • Compliance frameworks were in place
  • Codes of conduct were clearly written

But when we began conducting risk assessments, reviewing data flows, testing access permissions, and analysing decision-making processes, we often uncovered gaps:

  • Staff with unnecessary access to sensitive data
  • Weak enforcement of information governance policies
  • Incident response processes that existed, but were not operationalised
  • Data handling practices that did not fully meet GDPR standards

In other words, ethics had been declared, but not always proven.


Ethics Lives in Systems, Not Statements

As I moved into international development and health systems strengthening, the stakes became even higher.

Ethics was no longer just about compliance; it was about people’s lives, dignity, and access to opportunity.

In many of the programmes I supported across the Global South, we made a deliberate choice to embed ethics into the architecture of delivery:

  • Beneficiary feedback mechanisms that allowed communities to raise concerns safely
  • Complaints and redress systems that were accessible, confidential, and responsive
  • Safeguarding protocols integrated into programme operations—not treated as standalone policies
  • Participatory approaches ensuring communities were not passive recipients, but active partners

These were not symbolic gestures. They were operational systems designed to hold organisations accountable to the people they serve.

Because without feedback loops, there is no accountability.
And without accountability, ethics cannot be demonstrated.


Governance Is Where Ethics Becomes Real

My work in audit, compliance, and information governance, particularly around GDPR implementation, reinforced a critical point:

Ethics becomes real when it is governed.

This means:

  • Clear ownership of decisions
  • Transparent data handling practices
  • Defined escalation pathways when things go wrong
  • Continuous monitoring of risks and vulnerabilities
  • Documented processes that can withstand external scrutiny

I’ve supported organisations in reviewing everything from finance systems and databases to HR recruitment platforms: identifying vulnerabilities, recommending corrective actions, and strengthening controls to ensure the confidentiality, integrity, and availability of information.

But beyond the technical controls, the real question was always:

Can this system be trusted by the people it affects?


Inclusion Is the Test of Ethical Systems

Inclusion has been a central thread throughout my work.

Across hundreds of programmes, we saw that systems often fail not because they are inefficient, but because they are designed without the voices of those most affected.

When inclusion is missing:

  • Feedback mechanisms are underused or inaccessible
  • Complaints go unheard or unresolved
  • Data fails to reflect lived realities
  • Decisions unintentionally exclude vulnerable groups

When inclusion is embedded:

  • Systems become more responsive
  • Trust increases
  • Risks are identified earlier
  • Outcomes improve sustainably

In this sense, inclusion is not separate from ethics; it is the evidence of it.


The Illusion of Ethical Compliance

Today, many organisations speak confidently about ethics, whether in AI, data governance, or service delivery.

But there is a growing risk of what I would call ethical illusion:

  • Policies that exist but are not implemented
  • Frameworks that are designed but not tested
  • Commitments that are communicated but not measured

From my experience, the organisations that truly uphold ethics are not those with the most polished statements—but those with the most robust systems of accountability.

They invest in:

  • Regular audits
  • Independent reviews
  • Continuous risk assessments
  • Strong information governance
  • Mechanisms that allow people to challenge decisions

Because they understand that ethics must be demonstrated, not assumed.


From Compliance to Trust

Whether working on GDPR implementation, strengthening health systems, or designing inclusive programmes, my focus has always been on moving organisations beyond compliance and towards trust.

Compliance answers the question: Are we following the rules?
Ethics answers the question: Are we doing what is right?

And trust is built when the answer to both is consistently yes.


In every system I have worked on, from information platforms to global development programmes, the same truth applies:

Ethics is not a declaration.
It is a practice.
It is a system.
It is a set of decisions made visible through action.

The real test of any organisation is not what it says about ethics, but whether its systems can prove it.

Because in the end, ethics is not what you publish.
It is what people experience.

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub

📌InclusiveAIHub is currently an independent initiative – donations support content creation, research, and operating costs.


When the Tide Goes Out: What the EU AI Act Will Reveal About Your Hiring & Case Management Systems

Only when the tide goes out do you discover who’s been swimming naked. – Warren Buffett

For over two decades, I have worked across more than 100 countries implementing international development and health systems strengthening programmes—supporting governments, NGOs, and private sector partners to design governance frameworks, strengthen information systems, and build resilient organisational infrastructure.

Across these engagements—whether in public health delivery systems in East Africa, HR recruitment platforms for multinational development agencies, or ERP and case management tools for social impact organisations—I’ve often seen a familiar pattern:

When systems are performing efficiently, nobody asks difficult questions.

When services are being delivered on time, few pause to interrogate how decisions are being made.

And when recruitment pipelines appear to be functioning smoothly, almost no one asks whether fairness, accountability, or inclusion are actually embedded into the systems doing the screening.

In calm waters, risk hides easily.

But regulation is the tide.

And the EU Artificial Intelligence Act, which will come into near-full application on 2 August 2026, may be the most significant governance tide the AI ecosystem has experienced to date—particularly when read alongside the General Data Protection Regulation (GDPR).


The Systems That “Worked”—Until They Didn’t

Earlier in my career, I led and supported internal audits across organisational information systems—reviewing user access controls, password management procedures, incident response protocols, and vendor integrations across:

  • HR recruitment systems
  • Finance and procurement platforms
  • CRM and case management tools
  • Cloud-based collaboration environments

We conducted risk assessments across operational workflows—validating invoices, reviewing database permissions, analysing sales order processes, and identifying vulnerabilities across vendor-managed environments.

Many of these systems had been in place for years.

They were familiar. Trusted. Efficient.

But when examined through the lens of GDPR compliance—particularly around:

  • Data minimisation
  • Automated decision-making
  • Access governance
  • Transparency
  • Auditability

—we discovered gaps that had remained invisible during periods of organisational growth and stability.

Policies existed, but enforcement was inconsistent.

Documentation was partial.

Decision-making logic—especially in automated workflows—was not explainable.

In some cases, system administrators could not confidently articulate how individuals had been filtered out of service eligibility pipelines or talent recruitment shortlists.

Efficiency had masked opacity.


AI Recruitment Tools: The New Black Box

Today, many organisations—particularly in social impact, healthcare delivery, and public service provision—are deploying AI-powered vendors for:

  • CV screening
  • Candidate ranking
  • Case prioritisation
  • Service eligibility scoring
  • Risk prediction models

Often with limited:

  • Ethical oversight
  • Inclusion testing
  • Documentation of training data
  • Algorithmic impact assessments
  • Vendor transparency clauses

From a programme delivery perspective, these tools promise scale.

But from a governance perspective, they introduce:

  • Automated discrimination risk
  • Non-compliance with data protection law
  • Unexplainable decision pathways
  • Reputational vulnerability
  • Legal liability under emerging AI regulations

Under the EU AI Act, AI systems used in employment, recruitment, and access to essential services will be formally classified as high-risk systems.

This classification will trigger obligations including:

  • Risk management frameworks
  • Human oversight mechanisms
  • Technical documentation
  • Bias monitoring
  • Data governance standards
  • Incident reporting protocols

Organisations unable to demonstrate compliance may face regulatory scrutiny—not only for what their AI systems do, but for how they were procured, configured, and governed.


Procurement Without Governance Is Exposure

In several health systems strengthening initiatives I’ve supported, procurement processes for digital HR platforms or patient case management tools focused heavily on:

  • Cost efficiency
  • Implementation timelines
  • Vendor reputation

Rarely did Requests for Proposals require vendors to provide:

  • Algorithmic fairness audits
  • Data lineage documentation
  • Impact assessment reports
  • Inclusion testing protocols
  • Safeguarding risk analyses

In effect, organisations outsourced decision-making infrastructure without retaining meaningful governance authority.

This becomes especially concerning when AI systems are used to:

  • Screen job applicants
  • Prioritise patients for intervention
  • Allocate financial support
  • Flag safeguarding risks
  • Predict workforce attrition

Without robust information governance policies, incident response procedures, and audit trails, organisations may find themselves unable to defend automated decisions affecting vulnerable populations.
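One practical building block for defending such decisions is a structured, append-only decision log. The sketch below is purely illustrative (the field names, the hypothetical "cv-screening" system, and the hash-chaining scheme are assumptions, not a reference implementation); it shows how each automated decision could be recorded with its inputs, model version, and any human reviewer, chained by hashes so that later tampering is detectable.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_automated_decision(log, *, subject_id, system, decision,
                           inputs, model_version, reviewer=None):
    """Append a tamper-evident record of an automated decision.

    Each entry stores what was decided, on which inputs, by which model
    version, and whether a human reviewer was involved. The entry's hash
    includes the previous entry's hash, so edits break the chain.
    """
    prev_hash = log[-1]["hash"] if log else ""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "system": system,
        "decision": decision,
        "inputs": inputs,
        "model_version": model_version,
        "human_reviewer": reviewer,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

# Hypothetical example: an applicant filtered out by a screening model.
audit_log = []
log_automated_decision(
    audit_log,
    subject_id="applicant-1042",
    system="cv-screening",
    decision="rejected",
    inputs={"score": 0.31, "threshold": 0.5},
    model_version="v2.3",
    reviewer=None,
)
print(len(audit_log), audit_log[0]["decision"])
```

A log like this does not make a decision fair, but it makes the decision explainable after the fact, which is precisely what regulators and affected individuals will ask for.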


2 August 2026: When the Tide Recedes

The EU AI Act’s full application deadline will not introduce risk into your systems.

It will reveal it.

Just as GDPR exposed weaknesses in data handling practices that had gone unexamined for years, the AI Act will test whether organisations:

  • Conducted algorithmic risk assessments
  • Documented system decision logic
  • Maintained human oversight
  • Evaluated bias across demographic groups
  • Established complaints and redress mechanisms
  • Embedded safeguarding considerations

In development contexts—where recruitment and service allocation decisions can materially affect livelihoods, health outcomes, and access to opportunity—the stakes are particularly high.


From Compliance to Systems Strengthening

Across the programmes I have supported in the Global South, we consistently embedded:

  • Participatory feedback mechanisms
  • Beneficiary complaints pathways
  • Safeguarding protocols
  • Data protection policies
  • Incident response procedures

not as administrative afterthoughts—but as core design principles.

The same philosophy must now apply to AI systems.

Governance is not friction.

It is resilience.


Final Reflection

For organisations adopting AI in hiring or service delivery environments, the coming regulatory cycle offers a moment of truth.

The systems that seemed to function seamlessly may soon be asked to explain themselves—to auditors, regulators, or affected communities.

And when that moment arrives, the question will not be whether your AI worked efficiently.

It will be whether it worked fairly, transparently, and accountably.

Because when the tide goes out, efficiency is not what protects you.

Governance is.

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



AI Bias Audit Framework for Hiring Systems: Lessons from Two Decades of Ensuring Fairness and Compliance

Over the past 20 years, I’ve had the privilege of implementing over 500 international development programmes in more than 100 countries, spanning governance, health systems strengthening, sustainability, and HR system management. Throughout this journey, one principle has guided my work: systems only succeed when they are designed with accountability, inclusion, and trust at their core.

Early in my career, I coordinated audits and compliance across ERP, CRM, cloud platforms, finance systems, and collaboration tools. I led independent audits of business transactions, examining information security controls, data protection practices, and system access management against GDPR, ISO standards, and internal policies. I conducted risk assessments across finance, operations, and cloud infrastructure, validated sales orders and invoices, reviewed data handling processes, identified vulnerabilities, and recommended corrective actions.

What I quickly realised is that technology alone is never neutral. Whether it’s a cloud platform, HR system, or AI hiring tool, the rules embedded within these systems reflect the assumptions, priorities, and biases of their designers. If unchecked, these biases can propagate at scale.

This lesson became particularly critical when I started engaging with recruitment and HR systems in international development contexts. Hiring platforms, applicant tracking systems, and AI-driven recruitment tools promised efficiency, but they also introduced risks: subtle biases against women, people from marginalised communities, or candidates with non-traditional career paths. The tools we implemented had to be compliant, transparent, and fair, not just fast or convenient.

Building an AI Bias Audit Framework

Drawing from my experience in audit, compliance, and international programme implementation, I’ve approached AI hiring systems the way I would any critical governance system:

  1. Understand the Risk Landscape
    Before reviewing a system, I map potential points of bias. Who inputs data? Who interprets results? Which communities are underrepresented in historical hiring records? For ERP and CRM audits, this was equivalent to tracing user access controls and transaction workflows to spot vulnerabilities. For AI, it means understanding how models could reproduce systemic inequities.
  2. Examine Data Handling
    Just as I’ve audited finance and operational databases to ensure confidentiality, integrity, and availability, AI bias audits require careful scrutiny of training datasets. Are historical records reflecting fair representation? Are credentials or metrics inadvertently privileging certain groups over others?
  3. Assess Algorithmic Decisions
    In ERP or cloud audits, I tested whether processes enforced internal policies and governance standards. In AI hiring, I simulate candidate scenarios, analyse outputs, and measure disparities across demographics. The goal is not to reject automation but to ensure it augments human judgment without harming equity.
  4. Embed Human Oversight and Governance
    Across health systems strengthening and programme implementation, I’ve seen that technology works best when paired with strong governance. AI hiring systems require clear accountability: who monitors outcomes, who responds to flagged biases, and how candidates can contest decisions. This is analogous to incident response procedures I coordinated for enterprise systems: defined escalation pathways, service level agreements with vendors, and continuous monitoring.
  5. Iterate with Inclusion at the Core
    Finally, just as I embedded participatory approaches, beneficiary feedback, and safeguarding in development programmes, AI audits must include input from the very communities the technology affects. Inclusive design is not optional; it is the safeguard against systemic bias.

Why This Matters

Unchecked AI bias is not hypothetical. It can silently exclude talented individuals, reinforce inequities, and undermine trust — just as weak controls in finance or cloud systems can lead to operational failures or data breaches. My combined experience in compliance, risk management, and inclusive programme design has reinforced a simple truth: technology is only as ethical and effective as the governance frameworks around it.

By applying rigorous audit principles, embedding accountability, and centering inclusion, organisations can transform AI hiring tools from opaque, biased systems into engines of fair opportunity.

Final Thought

AI offers incredible potential to improve hiring and talent management, but only if we audit, govern, and humanise these systems. From my early days reviewing password management and incident response procedures, to leading global programmes with communities at their centre, the lesson is clear: innovation without inclusion and oversight is not progress — it is failure.

If we are to build AI systems that truly serve everyone, we must combine technical rigour with the human-centred principles I’ve carried through every project in the Global South: transparency, accountability, and equity.

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



Beyond CVs: How Employers Can Leverage AI to Spot Hidden Talent

Not too long ago, when computers were still rare, job applications were handwritten on paper and physically posted to companies. Recruiters would spend hours manually sorting through envelopes, reviewing each letter, and shortlisting candidates for interviews.

Then came the arrival of computers. Jobseekers began typing out and printing their applications – a small revolution at the time. But not everyone welcomed this change. Some employers actually insisted that applications must be handwritten, while others penalised those who dared to use a typewriter or computer, claiming it lacked “personal effort.”

Fast forward a few years, and technology reshaped the process entirely. Online job applications became the norm; no paper, no postage stamps, just a few clicks. Today, most hiring processes are fully digital. The irony is that what was once frowned upon – using technology to apply for a job – is now the standard practice.

History Is Repeating Itself, Only This Time with AI

Now, we’re witnessing the same pattern again, but this time, it’s with artificial intelligence. AI tools like ChatGPT, résumé builders, and automated cover letter generators have made it easier than ever to apply for jobs. What once took hours of writing, editing, and formatting can now be done in minutes.

But instead of celebrating this progress, some employers are reacting the same way companies did decades ago when typed applications first appeared. They view AI-assisted applications as dishonest or “lazy.” Some even penalise candidates if their application appears “too polished,” assuming it was generated by AI.

The irony is striking. A few decades ago, applicants were penalised for poor grammar or spelling. Now, some are penalised for writing too perfectly, because that might mean they had help from an AI tool.

The Point Isn’t the Tool – It’s the Person Behind It

AI isn’t replacing a candidate’s intelligence or integrity. It’s a tool – just like a computer, spell-checker, or online form was in its time. The goal of AI is to make work easier, save time, and optimise effort.

Employers who focus solely on whether a candidate used AI to refine their application are missing the bigger picture. The real question should be:

  • Does the candidate have the right skills for the job?
  • Do they demonstrate interest, curiosity, and initiative?
  • Can they bring value and creativity to the organisation?

AI can help a candidate express themselves more clearly or structure their thoughts better, but it can’t fake genuine motivation or practical experience. A strong candidate remains strong, regardless of whether they used AI for grammar, formatting, or phrasing.

Employers Must Evolve with Technology – Not Resist It

History shows that resisting technology only delays progress. Just as handwritten applications gave way to typed ones, and paper-based hiring gave way to online recruitment, AI is the next logical evolution in how people apply for jobs.

Forward-thinking employers are already adapting. Instead of penalising AI-assisted applications, they’re leveraging AI themselves to:

  • Screen candidates fairly and efficiently
  • Eliminate bias in recruitment processes
  • Enhance candidate experience through faster communication and feedback
  • Focus more on interviews and assessments that reveal real skills and potential

AI, when used responsibly, doesn’t reduce the quality of hiring – it improves it. It allows HR teams to spend less time on repetitive tasks and more time on human judgment, empathy, and connection.

The Future of Recruitment Is Human–AI Collaboration

As AI becomes more deeply integrated into workplaces, employers will have to accept that AI assistance is no longer cheating – it’s smart working.

We don’t reject calculators for making arithmetic faster, or word processors for fixing spelling errors. Likewise, AI should be seen as a supportive tool that enhances human capability, not replaces it.

The best employers of the future will be those who know how to balance human insight with technological efficiency. They will assess candidates based on skills, adaptability, and passion – not on whether they wrote every word of their cover letter unaided.

Because at the end of the day, AI can help you write a great application, but it can’t do the job for you. Performance, creativity, and empathy – those remain uniquely human.

Embracing Change, Ethically

As AI continues to reshape recruitment, both employers and jobseekers must adapt responsibly. Transparency, fairness, and inclusion must remain at the core. AI should be used to remove barriers, not create new ones.

The sooner organisations embrace AI as a partner in progress, not a threat, the faster we can build a hiring ecosystem that values both technology and humanity.

Because whether handwritten, typed, or AI-assisted, the true measure of any application has always been the same: Can this person make a difference?

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



AI for the Forgotten: Prioritising Neglected Diseases Through Ethics, Inclusion, and Global Collaboration

Summary Box – Key Takeaways

  • Neglected diseases continue to devastate vulnerable populations, yet they remain underfunded and under-researched.
  • AI offers unprecedented opportunities in drug discovery, diagnostics, outbreak prediction, and research equity—but only if applied ethically and inclusively.
  • Community participation, local capacity building, and equitable governance are essential to ensure AI serves those who need it most.
  • Inclusion is not a luxury—it is the foundation for global health justice.

Introduction: Seeing What Others Ignore

Over the last 20 years, I’ve had the privilege—and the challenge—of implementing more than 500 international development programmes across 100 countries. I’ve worked with ministries of health, NGOs, and local communities to strengthen health systems, build governance frameworks, support ESG initiatives, and empower the most vulnerable.

One lesson has remained constant: those who are left out of the conversation are often those who suffer the most. And nowhere is this more evident than in the world of neglected diseases—illnesses like Chagas, dengue, leishmaniasis, river blindness, and sleeping sickness.

These diseases don’t just harm individuals—they destroy livelihoods, perpetuate cycles of poverty, and disproportionately affect rural and low-income communities. And yet, for decades, they have been ignored by pharmaceutical research, global funding, and policy priorities.

Now, as AI reshapes global healthcare and biotechnology, the question is urgent: can this technology finally prioritise the forgotten?

The Structural Neglect Behind Neglected Diseases

Neglected tropical diseases exist at the intersection of inequality, poverty, and underinvestment. Sanitation is poor, health systems are under-resourced, and data is scarce. For decades, global health initiatives have struggled with this reality. Pharmaceutical companies often overlook these diseases because they are seen as commercially unprofitable, and researchers are hampered by gaps in funding and reliable data.

Over my career, I’ve seen how these structural barriers consistently limit impact. Communities are willing and able to participate, but the systems, tools, and governance frameworks have historically excluded them.

This is precisely where AI, if developed inclusively and ethically, can transform the landscape.

Where AI Can Make a Real Difference

Based on both my experience and emerging AI innovations, there are four areas where technology can meaningfully support neglected disease programmes:

1. Accelerating Drug Discovery

AI algorithms can rapidly analyse chemical libraries, predicting which compounds may work against parasites or viruses. What once took years of lab work can now be done in weeks. For diseases like Chagas or schistosomiasis, where drug development has stagnated for decades, AI could finally bring hope to millions.

I’ve witnessed similar transformations in health programmes where technology has shortened timelines and amplified impact—but only when local expertise is involved from the start.

2. Transforming Diagnostics

In rural and resource-limited settings, diagnostic access is a daily challenge. AI-powered image recognition tools can help frontline health workers detect parasites or infections using mobile microscopes or simple kits.

In my work training health workers across East Africa and South Asia, I’ve seen firsthand how empowering local teams with technology—not just directives from afar—dramatically improves outcomes.

3. Predicting and Preventing Outbreaks

AI models can integrate environmental and epidemiological data—rainfall, temperature, population movement—to forecast disease outbreaks. In my programmes, early-warning systems have historically saved lives when interventions were timely. AI has the potential to do this at scale, if deployed with strong governance and local collaboration.
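As a toy illustration of the scoring step such a forecasting model performs, the sketch below combines invented environmental features with hand-set weights in a logistic function. The feature names, weights, and seasonal values are all assumptions for demonstration; a real system would fit its parameters to historical surveillance data.

```python
import math

# Illustrative, hand-set weights -- NOT fitted to real data. A real
# model would learn these from historical outbreak and climate records.
WEIGHTS = {"rainfall_mm": 0.004, "temperature_c": 0.05, "movement_index": 0.3}
BIAS = -3.0

def outbreak_risk(features):
    """Combine environmental signals into a 0-1 risk score using a
    logistic function, mimicking a fitted classifier's scoring step."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Two hypothetical scenarios: a wet season with high population
# movement versus a dry season with lower movement.
wet_season = {"rainfall_mm": 400, "temperature_c": 28, "movement_index": 2.0}
dry_season = {"rainfall_mm": 50, "temperature_c": 24, "movement_index": 1.0}
print(round(outbreak_risk(wet_season), 2), round(outbreak_risk(dry_season), 2))
```

The value of such a score lies less in the arithmetic than in the governance around it: who sets the alert threshold, who validates it locally, and who acts when it fires.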

4. Enhancing Research Equity

Data inequities have long slowed progress in neglected disease research. AI tools, especially open-source platforms, can democratise access to insights, enabling researchers in Nairobi, Dhaka, or Lima to model interventions and collaborate internationally. From my perspective, this represents a rare opportunity to shift power toward those who have lived experience and contextual knowledge.

Ethical Challenges: Avoiding Exploitation

However, AI is not a magic bullet. Without inclusive design and governance, it risks repeating historical patterns of inequity. I’ve witnessed countless programmes where data was extracted from communities, yet decision-making and benefits remained concentrated elsewhere.

To avoid this, local researchers, policymakers, and communities must be central to AI initiatives from day one. Ethical AI for neglected diseases is not only about algorithms—it is about shared governance, transparency, and benefit.

The Role of InclusiveAIHub and the Way Forward

At InclusiveAIHub, our guiding principle is simple: AI must serve equity, not just efficiency.

Drawing from decades of international programme experience, I know that transformative impact comes from inclusion, capacity building, and sustainable partnerships. Applied to neglected diseases, this means:

  • Investing in open, accessible data for researchers worldwide
  • Supporting AI capacity building in low- and middle-income countries
  • Embedding communities in governance and design processes
  • Aligning innovation with ESG, sustainability, and ethical frameworks

When these elements come together, AI moves from a tool of technological privilege to a force for global health justice.

Prioritising the Forgotten

Neglected diseases present a profound challenge—but also a unique opportunity. AI can help us reach where traditional approaches have failed. But technology alone will never be enough.

The real promise lies in inclusive systems, where innovation is guided by lived experience, governed ethically, and applied equitably. Only then can AI help ensure that no disease—and no community—is too small to matter.

From my experience on the frontlines of development work, the principle is clear: the communities we often overlook are the ones AI must prioritise if it is to be a force for good.

“AI for neglected diseases is not just about smarter algorithms—it’s about building fairer systems that put vulnerable communities first.” – George Gopal Okello, Programmes Director, InclusiveAIHub

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



From Profit to Purpose: Why AI Companies Must Lead With Ethics, Not Just Innovation

Introduction: When Innovation Loses Its Moral Compass

Over the past two decades, I’ve had the privilege—and the responsibility—of working on more than 500 international development programmes across over 100 countries. I’ve sat in boardrooms discussing corporate governance frameworks, walked alongside communities affected by extractive industries, and supported organisations trying to balance profitability with social responsibility, sustainability, and ESG commitments.

Across sectors, one lesson has remained constant:

When innovation runs faster than accountability, the most vulnerable always pay the price.

Today, as artificial intelligence reshapes economies, institutions, and everyday life, I’m seeing familiar patterns emerge—only this time, at unprecedented speed and scale.

AI is being celebrated as a breakthrough technology. But too often, it is being driven by the same old logic: growth first, ethics later.

And that should concern all of us.

The Profit Trap I’ve Seen Before

In international development, I’ve watched well-intentioned projects fail because they prioritised outputs over outcomes, efficiency over equity, and short-term wins over long-term sustainability.

The AI industry risks repeating this mistake.

Many AI companies are racing to capture markets, attract investment, and demonstrate scale. Success is measured in valuations, user growth, and processing power—not in social impact, fairness, or trust.

I’ve seen this dynamic before in industries that later faced reputational collapse:

  • Financial services before the global financial crisis
  • Extractives before ESG became unavoidable
  • Pharma before access and equity entered the conversation

AI now stands at a similar crossroads.

Profit-driven innovation without ethical guardrails doesn’t just create risk—it creates harm.

Ethics Cannot Be an Afterthought

In governance and ESG work, we learned long ago that ethics bolted on after the fact rarely works. Real responsibility must be designed into systems from the start.

Yet in AI, ethics is often treated as a side project:

  • A policy document written after deployment
  • An advisory board without real authority
  • A “responsible AI” slide buried in investor decks

True ethical leadership looks very different.

It means asking difficult questions before systems are built:

  • Who might be excluded by this model?
  • Whose data is missing—and why?
  • Who is accountable when harm occurs?
  • Can affected communities challenge outcomes?

Ethics isn’t a constraint on innovation. It’s what makes innovation sustainable.

Inclusion: Not Charity, But Strategic Intelligence

During my years supporting governance reforms and CSR initiatives, I learned that exclusion is expensive.

Projects that ignored community voices collapsed. Policies designed without lived experience failed in practice. Systems built for “average users” broke down at the margins—and then failed entirely.

AI is no different.

When algorithms are trained on narrow datasets, developed by homogenous teams, and deployed without local context, they replicate the same structural inequalities we claim technology will solve.

In contrast, inclusive systems:

  • Perform better
  • Scale more responsibly
  • Earn public trust
  • Reduce legal and reputational risk
  • Unlock new markets and use cases

Inclusion isn’t philanthropy. It’s good governance.

AI Companies Are No Longer Just Tech Firms

One of the biggest shifts I’ve witnessed over 20 years is how corporations have evolved into social actors—whether they acknowledge it or not.

AI companies now influence:

  • Who gets credit
  • Who gets hired
  • Who receives healthcare
  • Who is surveilled
  • Who is excluded from opportunity

That comes with responsibilities traditionally associated with public institutions:

  • Transparency
  • Accountability
  • Equity
  • Sustainability

If AI firms want the freedom to innovate, they must also accept the obligation to protect human dignity.

From Profit Metrics to Purpose Metrics

In ESG work, progress only accelerated when organisations stopped treating impact as “nice to have” and started measuring it alongside financial performance.

AI needs the same shift.

Imagine an industry where success is measured not only by speed and scale, but by:

  • Reduced bias
  • Increased access
  • Community trust
  • Environmental footprint
  • Long-term social value

That future is possible—but only if leadership chooses it.

Doing good with AI is not a cost. It’s the smartest long-term investment a company can make.

A Choice We Still Control

AI is not neutral. It reflects the incentives, values, and blind spots of those who build it.

After two decades working at the intersection of governance, sustainability, and social impact, I’m convinced of this:

If AI companies choose profit without principle, they will repeat the failures of the past—only faster. If they choose ethics and inclusion, they can redefine what responsible innovation looks like.

The future of AI is still being written.

The question is whether we write it with conscience—or convenience.

Final Reflection

“If AI is to define the future, then the future must also be fair, ethical, and inclusive.” — George Gopal Okello, Programmes Director, InclusiveAIHub

At InclusiveAIHub, we believe ethical and inclusive AI is not optional. It is the foundation of trust, legitimacy, and sustainable progress.

The companies that understand this will not just lead markets. They will lead history.

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



Can AI in Pharma Truly Be Inclusive? Reflections From 20 Years on the Frontlines of Global Health Innovation

Summary

Theme: AI in Pharma & Global Health Equity
Focus: Bias, access, governance, and representation
Core Insight: AI will not fix global health inequities unless inclusion is intentionally designed into data, governance, and deployment—especially in low- and middle-income settings.

When “Innovation” Feels Familiar

After more than 20 years implementing and overseeing 500+ international development programmes across over 100 countries, I’ve learned to be cautious whenever a new technology is described as transformational.

I’ve heard it before.

Electronic health records were supposed to fix fragmented systems. Results-based financing was meant to drive accountability. Digital health platforms promised to reach the last mile.

Each brought progress—but each also exposed an uncomfortable truth: innovation often benefits those already well served.

Today, artificial intelligence sits at the centre of pharmaceutical and global health innovation. It is accelerating drug discovery, reshaping clinical trials, and improving diagnostics at a scale we could only imagine a decade ago.

But standing where I stand—between global policy, national health systems, and communities on the ground—I keep asking the same question:

Are we building a future that works for everyone, or just a faster version of the past?

The Data Problem I’ve Seen Repeatedly

AI is only as good as the data it learns from. And in global health, data has never been evenly distributed.

In many of the countries where I’ve worked—across Africa, Asia, and Latin America—health systems are under-resourced, records are fragmented, and entire populations remain under-documented. Not because they don’t exist, but because systems were never designed with them in mind.

Yet most pharmaceutical AI models are trained on datasets drawn largely from high-income countries, urban hospitals, and populations with consistent access to care.

That imbalance matters.

I’ve seen programmes struggle when tools developed elsewhere fail to account for:

  • Co-morbidities shaped by poverty and environment
  • Different genetic, nutritional, and disease profiles
  • Gaps in longitudinal health records
  • Cultural and linguistic realities affecting care-seeking behaviour

When AI is trained on a narrow slice of humanity, it doesn’t just underperform—it systematically excludes.

And exclusion in health is not theoretical. It costs lives.

Efficiency Isn’t Equity

Much of the excitement around AI in pharma focuses on speed and savings: faster trials, reduced costs, optimised pipelines. These gains are real—and important.

But after two decades working on governance, ESG, and health systems strengthening, I’ve learned that efficiency without equity creates fragility.

A system can be technically brilliant and socially brittle at the same time.

Inclusive innovation asks harder questions:

  • Who sets the research agenda?
  • Whose data is used—and whose is missing?
  • Who benefits first, and who waits?
  • Who carries the risk when systems fail?

In too many cases, AI is introduced into health systems without being shaped by them. Local researchers are consulted late. Communities are treated as data sources rather than partners. National regulators are expected to catch up after deployment.

That’s not innovation. That’s extraction—digitised.

Infrastructure Gaps Are Governance Gaps

Low- and middle-income countries are often described as “not ready” for AI. In my experience, that framing misses the point.

The issue is not readiness—it’s investment and inclusion.

I’ve worked with ministries and NGOs eager to adopt digital tools, only to be constrained by:

  • Unreliable connectivity
  • Limited data protection frameworks
  • Short-term donor funding cycles
  • Vendor-driven solutions with little local ownership

Yet these same contexts are where AI could have the greatest impact—supporting diagnosis in overstretched clinics, improving supply chains, and enabling earlier detection of disease.

Bridging this gap requires more than technology. It requires shared governance, long-term partnerships, and ESG commitments that treat inclusion as a core responsibility—not a pilot project.

What Inclusive AI in Pharma Actually Requires

From what I’ve seen work—across health, governance, and sustainability—truly inclusive AI in pharma must:

  • Start with representation, not retrofitting: build datasets that reflect global diversity from the outset.
  • Embed local expertise: researchers, regulators, and practitioners from LMICs must be co-designers, not end users.
  • Strengthen national systems: AI should reinforce local capacity, not bypass it.
  • Align with ESG principles: inclusion, accountability, and long-term social value must be measurable and enforced.
  • Be governed transparently: communities and countries must understand how decisions are made—and how harm is addressed.

Without these foundations, AI risks widening the very gaps it claims to close.

A Personal Reflection

I’ve watched too many well-intentioned innovations fail because they ignored context. I’ve also seen what happens when communities, governments, and partners are treated as equals in design—not afterthoughts in delivery.

AI in pharma holds extraordinary promise. But its success should not be measured by how quickly drugs are discovered.

It should be measured by who those drugs reach, who is protected, and who is no longer invisible.

InclusiveAIHub Perspective

At InclusiveAIHub, we believe that inclusive AI in pharma is not a moral add-on—it is a prerequisite for sustainable global health innovation.

Ethics, equity, and governance are not barriers to progress. They are what make progress durable.

AI can help reshape global health. But only if we are willing to redesign power, data, and decision-making along with it.

Because the future of medicine should not depend on where you are born—or whether your data was ever counted.

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



I’ve Seen Incredible NGO Impact Go Unnoticed for 20 Years — Here’s Why Storytelling Is Now Non-Negotiable

Over the past 20 years, I’ve had the privilege of working alongside hundreds of NGOs and community groups, supporting more than 500 international development programmes across over 100 countries.

I’ve seen extraordinary things happen.

I’ve seen community health workers save lives with almost no resources. I’ve seen women-led cooperatives lift entire households out of poverty. I’ve seen local organisations achieve results that global institutions struggle to replicate.

And I’ve also seen something deeply frustrating:

👉 Much of this impact never gets seen, understood, or valued outside a final report.

Not because the work isn’t powerful — but because the story is never fully told.

The Quiet Crisis: When Impact Stays Invisible

Across the development and NGO sector, there is a quiet crisis playing out.

Organisations are doing the work, but remaining invisible in the digital space where funding decisions, partnerships, and public trust are increasingly shaped.

Over and over again, I see updates that look like this:

“We held a workshop today.”
“We conducted a training session.”
“We distributed supplies.”

These statements are factual. They are safe. They are easy to report.

But they don’t answer the real question that donors, partners — and communities themselves — are asking:

So what changed?

Now compare that with:

“55 women farmers increased their crop yields by 40% after gaining access to digital tools and training.”

Same programme. Same activity. Completely different level of meaning.

One disappears into the feed. The other stops people scrolling.

Why NGOs Default to Activity-Based Storytelling

After two decades in this sector, I don’t believe NGOs struggle with storytelling because they don’t care. I believe they struggle because of structural habits and legitimate fears.

1. Reporting what feels “safe”

Activities are easy to count and verify. Outcomes take reflection, analysis, and sometimes confidence.

2. Fear of overselling or “marketing”

Many NGOs worry that telling strong stories looks like bragging. But ethical storytelling isn’t exaggeration — it’s accountability.

If something changed because of your work, saying so is not self-promotion. It’s transparency.

3. Limited capacity

Many small and mid-size NGOs I’ve worked with have:

  • no dedicated communications staff
  • limited digital skills
  • no simple systems to capture stories from the field

So powerful outcomes remain buried in:

  • donor reports
  • spreadsheets
  • monitoring frameworks

Rarely reaching the people who need to hear them.

Why Storytelling Now Determines Survival and Influence

The reality has changed.

Storytelling is no longer a “nice to have” — it directly affects funding, trust, and influence.

1. Donors fund what they can understand

Clear, evidence-based stories signal:

  • competence
  • credibility
  • responsible use of resources

2. Communities deserve to see their progress reflected

When people see their achievements represented with dignity, it builds ownership and trust — not dependency.

3. NGOs must shape their own narrative

If NGOs don’t tell their stories clearly, others will — often inaccurately.

4. Digital platforms and AI reward clarity

Algorithms prioritise:

  • specific outcomes
  • human stories
  • data with meaning
  • relevance

Silence doesn’t equal neutrality anymore. It equals invisibility.

From “We Did This” to “This Is What Changed”

Over the years, I’ve helped NGOs make simple but powerful shifts.

Instead of:

  • “We trained 30 youth.”
  • “We distributed hygiene kits.”
  • “We conducted a health outreach.”

Try:

  • “30 young people gained certified digital skills, improving their employability in a competitive job market.”
  • “850 displaced families now have essential hygiene supplies, reducing infection risk during a cholera outbreak.”
  • “Mobile clinics reached 1,200 rural residents — 70% women — providing malaria screening, blood pressure checks, and childhood immunisations.”

No exaggeration. Just clarity, context, and purpose.

Practical Lessons I’ve Learned the Hard Way

If I could distil 20 years into a few principles, they would be these:

1. Always ask the “So what?” question

After every activity: What changed? For whom? Why does it matter?

2. Capture micro-stories

Small quotes, before-and-after moments, lived experiences — these are gold.

3. Use simple metrics

You don’t need complex dashboards. Percentages, comparisons, and tangible outcomes go a long way.
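The arithmetic behind such metrics is simple. As a quick sketch in Python, using made-up baseline and endline figures rather than data from any real programme:

```python
def percent_change(before: float, after: float) -> float:
    """Percentage change from a baseline value to an endline value."""
    return (after - before) / before * 100

# Hypothetical monitoring data: average crop yield per farmer (kg per season)
baseline, endline = 500, 700
print(f"Yields increased by {percent_change(baseline, endline):.0f}%")
# prints "Yields increased by 40%"
```

A before-and-after comparison like this is often all the quantitative backbone an impact story needs.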

4. Use digital tools — including AI — responsibly

AI can help NGOs:

  • summarise reports
  • clarify messages
  • identify impact points
  • improve consistency

But it must always respect:

  • dignity
  • data protection
  • cultural context

5. Shift the mindset

Move from reporting what you did to explaining why it mattered.

When NGOs Tell Their Stories Well, Power Shifts

Organisations that embrace impact-driven storytelling don’t just look better — they become stronger.

They gain:

  • increased credibility
  • stronger donor relationships
  • greater policy influence
  • deeper community trust

As I often say:

“NGOs are not struggling because they lack impact. They are struggling because that impact is locked in reports instead of shared with the world.”

And:

“Impact is only as powerful as the story that carries it.”

Final Reflection

NGOs don’t need glossy marketing campaigns.

They need:

  • clarity
  • confidence
  • ethical, community-centred storytelling
  • impact framed in human terms

Your work is too important to remain invisible.

When NGOs translate impact into influence, they don’t just attract funding — they honour the communities they serve by making their progress visible, credible, and impossible to ignore.

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



Skills Over Schools: How AI Can Reveal Hidden Talent in Marginalised Communities

Summary Box — Key Insights

  • Talent exists everywhere, but traditional hiring often overlooks it.
  • AI can prioritise skills and lived experience over formal credentials.
  • Inclusive AI requires intentional design to avoid reinforcing bias.
  • When deployed responsibly, AI empowers communities and unlocks overlooked potential.

A Personal Journey: Seeing Talent Where Others Don’t

Over the past 20 years, I’ve led more than 500 international development programmes across over 100 countries, working alongside governments, NGOs, and local communities to design inclusive systems, support organisational leadership, and develop sustainable initiatives. One lesson has stayed with me: capability doesn’t always come with a certificate.

I’ve met young leaders in rural Uganda who manage community savings groups, women in coastal Kenya running informal fisheries, and youth in South Asia teaching themselves coding through mobile apps. Each had extraordinary talent, yet most formal systems—including hiring—simply overlooked them.

Traditional recruitment often rewards the privilege of schooling, not lived experience. AI, if used ethically and inclusively, offers an opportunity to change that narrative.

AI: From Automation to Opportunity

AI is often framed as a tool for efficiency—scanning CVs, ranking candidates, predicting success. But what if we flipped the script? What if AI could see what people can do, rather than where they went to school?

Over the years, I’ve witnessed AI-powered programmes identifying skills in unexpected places:

1. Spotting Skills Beyond Paperwork

AI can evaluate competencies like:

  • Problem-solving
  • Adaptability
  • Communication and collaboration
  • Creativity and analytical thinking
  • Technical learning capacity

I remember a young woman in Kibera running a local savings co-operative. She had never stepped into a classroom, yet she was financially literate, organised, trusted by her community, and skilled in conflict resolution. Through AI-based scenario assessments and mobile simulations, we were able to recognise her leadership and management skills, giving her opportunities she would never have accessed through traditional recruitment.

2. Levelling the Playing Field for Non-Traditional Learners

In a pilot programme in rural Uganda, youth completed mobile-based problem-solving and pattern-recognition tasks. Many had limited formal schooling, yet they demonstrated above-average spatial reasoning, rapid learning, and adaptability—skills highly valued in tech-enabled industries.

These are exactly the people traditional hiring systems miss. AI helped us translate their informal experience into recognised competencies, unlocking opportunities for communities long overlooked.

3. Making Hidden Talent Visible to Employers

One Nairobi-based NGO replaced degree requirements for junior data roles with AI-assisted logic and literacy assessments. The results were striking:

  • 68% of selected candidates came from previously excluded communities
  • Women excelled in roles traditionally dominated by men
  • Performance metrics improved within six months

This is the transformative potential of inclusive AI: it doesn’t replace human judgment—it illuminates it.

The Risk: AI Can Exclude if Built Poorly

Technology is not neutral. Poorly designed AI can amplify existing biases, favouring privileged, urban, or highly educated populations. At InclusiveAIHub, we insist on community-informed datasets and co-design principles.

“No AI tool should decide who deserves opportunity without the lived experiences of marginalised communities guiding its design.” — Amina Dodhia, Ethical AI Specialist

Fairness audits, transparent model reporting, and localised input are non-negotiable if AI is to become a force for equity.
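One widely used check behind such fairness audits is the selection-rate or “disparate impact” ratio: compare how often candidates from each group pass a screening step. The sketch below is a minimal illustration with hypothetical group labels and outcomes, not a description of any specific InclusiveAIHub tool:

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the pass rate per group from (group, selected) records."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, selected in records:
        total[group] += 1
        passed[group] += 1 if selected else 0
    return {g: passed[g] / total[g] for g in total}

def disparate_impact_ratio(records):
    """Ratio of the lowest group selection rate to the highest.
    A ratio well below 1.0 (commonly below 0.8, the 'four-fifths rule')
    flags the screening step for closer review."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group label, was the candidate shortlisted?)
outcomes = [
    ("urban", True), ("urban", True), ("urban", True), ("urban", False),
    ("rural", True), ("rural", False), ("rural", False), ("rural", False),
]

print(f"Disparate impact ratio: {disparate_impact_ratio(outcomes):.2f}")
# 0.25 / 0.75 ≈ 0.33, well under 0.8 — a signal to review the step
```

A number like this is only a starting point: it tells you where to look, while community input and transparent model reporting tell you why the gap exists.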

Practical Ways to Use AI for Hidden Talent

  1. Replace CVs with Skills-Based Assessments: short games, problem-solving simulations, or scenario-based exercises reveal true capability.
  2. Translate Field Experience into Recognised Competencies.
  3. Offer Community-Friendly Credentials: digital badges, micro-certifications, or skill passports allow participants to showcase capabilities without formal schooling.
  4. Match People to Opportunities, Don’t Filter Them Out: AI should recommend, guide, and highlight talent—not enforce exclusion.

Real-World Impact: From Fisherman to Data Officer

During a programme visit to Homa Bay, a small lakeside town in western Kenya, a young fisherman named Mbuta caught my attention. Despite working informally, his spatial awareness, meticulous organisation, and pattern recognition were extraordinary.

Through an AI-powered skills assessment, Mbuta’s potential as a data validation officer was recognised. Within nine months, he moved from manual labour to a formal role in a local conservation NGO—a life-changing transition enabled by technology and inclusive design.

“Mbuta is proof that capability can come from anywhere. It’s not about schooling—it’s about seeing people fully.” — Programme Manager, InclusiveAIHub

Skills Over Schools: The Future of Opportunity

Across governments, corporations, and NGOs, a shift is underway: degrees no longer predict performance; exclusion limits potential.

AI gives us a chance to rewrite the rules. It can recognise hidden brilliance, empower marginalised communities, and expand opportunity—but only if it is designed inclusively, ethically, and collaboratively.

“At InclusiveAIHub, we believe that talent is universal but opportunity is not. AI allows us to redesign hiring systems to recognise people for what they can do—not where they went to school. Our mission is simple: unlock human potential in places the world overlooks.” — George Gopal Okello, Programmes Director, InclusiveAIHub

Final Thought: Skills are the true currency of opportunity. With responsible AI, invisible talent can finally be seen—and the world becomes a fairer place for all.

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub



Beyond CV Scanning: Lessons from Global Development on Humanising AI in Recruitment

Summary Box: Key Takeaways

  • Topic: Ethical AI in recruitment
  • Focus: Enhancing empathy, fairness, and inclusion in hiring
  • Key Insight: AI should augment human judgment, not replace it, ensuring diverse talent is recognised and valued.
  • Editorial Quote: “AI in recruitment must complement human judgment, not undermine it. When designed ethically, it can make hiring processes fairer, more inclusive, and more empathetic.” — George Gopal Okello, Programmes Director, InclusiveAIHub

From Global Development to Fair Hiring: My Journey

Over the past two decades, I’ve had the privilege of leading more than 500 international development programmes in over 100 countries, working with governments, local communities, and organisations to strengthen governance, sustainability, and talent systems.

One lesson has remained constant: people are at the heart of every system that succeeds. Whether building public health initiatives in rural Kenya, governance programmes in Eastern Europe, or ESG frameworks in Asia, the human element has always determined impact. Technology can enhance reach, speed, and efficiency—but it cannot replace empathy, judgment, or inclusion.

This perspective shapes how I view AI in recruitment. Too often, organisations treat algorithms like CV-scanning robots, automating decision-making without asking: Who gets left out? Who is unfairly favoured? Whose potential are we missing?

AI doesn’t have to be that way.

From Automation to Augmentation

Traditional recruitment AI is designed for efficiency: filtering candidates, ranking CVs, and predicting performance based on historical data. But history is not neutral—it reflects past biases and structural inequities. The result? Candidates with unconventional paths, non-linear careers, or diverse experiences are often overlooked.

Drawing from my experience designing inclusive programmes across continents, I know that diversity and unconventional experience drive innovation and resilience. Ethical AI in recruitment can highlight these overlooked talents.

By anonymising applications, flagging biased language in job descriptions, and highlighting overlooked competencies—like adaptability, cross-cultural experience, and emotional intelligence—AI can expand recruiters’ perspectives, giving them a fuller picture of each candidate.
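As a simple illustration of the anonymisation step, the sketch below strips identity signals from an application record before it reaches a reviewer or ranking model; the record shape and field names are hypothetical:

```python
# Fields that can reveal identity, background, or schooling prestige.
# These field names are hypothetical, chosen for illustration only.
IDENTITY_FIELDS = {"name", "photo_url", "date_of_birth", "address", "university"}

def anonymise(application: dict) -> dict:
    """Return a copy of the application with identity signals removed,
    keeping only skill-relevant fields for reviewers or ranking models."""
    return {k: v for k, v in application.items() if k not in IDENTITY_FIELDS}

applicant = {
    "name": "A. Candidate",
    "university": "Example University",
    "skills": ["data analysis", "community organising"],
    "work_samples": 3,
}

print(anonymise(applicant))
# Only the skill-relevant fields ('skills', 'work_samples') remain.
```

In practice the redaction list would be agreed with candidates and regulators, and audited alongside the model itself, rather than hard-coded by one team.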

“In international development, I learned that systems succeed when they value the whole person, not just a paper credential. Recruitment AI should do the same.” — George Gopal Okello

Building Empathy Into the Candidate Experience

Humanising recruitment goes beyond fairness—it’s about reducing stress and enhancing respect for candidates. Job seekers often face impersonal, opaque, and exhausting processes. AI can help bridge this gap by:

  • Providing timely, consistent feedback
  • Guiding candidates through application steps
  • Helping hiring managers understand the emotional impact of their communication

When designed inclusively, AI doesn’t replace the human touch—it amplifies it, ensuring every candidate feels seen, valued, and respected.

Challenges and Considerations

Ethical AI isn’t a magic solution. Algorithms are only as unbiased as the data they learn from. Local context, cultural nuances, and individual circumstances must be factored into design and deployment.

In my years of leading cross-cultural programmes, I’ve seen the cost of ignoring local realities: interventions fail, communities disengage, and trust erodes. Recruitment AI must avoid the same trap: technology must serve the people, not the process.

“AI requires careful design, ongoing evaluation, and alignment with human values to truly serve candidates and organisations alike.” — George Gopal Okello

The Future: Human-Centred, AI-Augmented Hiring

The real opportunity lies not in replacing humans but in empowering them to hire more fairly, inclusively, and empathetically. Organisations that implement ethical AI practices in recruitment will see benefits beyond fairness:

  • Stronger trust from candidates and employees
  • Broader access to talent from diverse backgrounds
  • Reduced bias and improved retention
  • Enhanced organisational reputation

AI should unlock potential, not block it. Just as I’ve designed programmes that centre local communities and stakeholder voices for lasting impact, recruitment AI must centre people first.

“When done right, AI can be a force for inclusion, empathy, and fairness. It’s not about automating decisions—it’s about amplifying humanity.” — George Gopal Okello, InclusiveAIHub

GEORGE GOPAL OKELLO Programmes Director, InclusiveAIHub


