As developers, we’re often taught to treat fairness like a math problem.

Balance the dataset.

Reduce bias.

Optimize the outcome.

But here’s a question I’ve been wrestling with:

Is fairness universal — or is it cultural?


Growing up, I saw fairness differently

I’m from Nigeria.

Where I grew up, fairness didn’t mean “treat everyone exactly the same.”

It often meant “consider people’s different realities.”

Think about it:

If two students show up late to class, one stuck in traffic and the other helping their parents at the market, do we punish them both the same way?

Same rule, different context.

And maybe fairness means acknowledging that.


AI doesn’t always do that

Most AI systems we build today encode a Western interpretation of fairness:

🧮 Group fairness

⚖️ Individual fairness

📊 Statistical parity

These are important, but they’re not always enough.
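For anyone who hasn’t worked with these metrics, here’s a minimal sketch of what the simplest one, statistical parity, actually measures. The records, field names ("group", "approved"), and toy numbers are my own illustrative assumptions, not drawn from any real system.

```python
# Minimal sketch of statistical parity, standard library only.
# The data, field names ("group", "approved"), and groups "A"/"B"
# are illustrative assumptions, not from any real system.

def approval_rate(records, group):
    """Share of applicants in `group` whose application was approved."""
    in_group = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in in_group) / len(in_group)

def statistical_parity_gap(records, group_a, group_b):
    """Difference in approval rates between two groups.
    A gap near 0 is what 'statistical parity' asks for."""
    return approval_rate(records, group_a) - approval_rate(records, group_b)

# Toy data: each record is one loan decision.
records = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

gap = statistical_parity_gap(records, "A", "B")
print(f"Statistical parity gap: {gap:.2f}")  # 0.67 - 0.33 = 0.33
```

Notice what this number can’t tell you: it says nothing about why one group’s approval rate is lower, which is exactly where local context disappears from view.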

Take this real case:

A credit scoring algorithm in Kenya failed to recognize community-based lending traditions like rotating savings and credit associations (ROSCAs).

As a result, reliable borrowers were marked “high-risk” because the system didn’t understand local context.

Fair model?

Accurate data?

Maybe.

Fair outcome?

Not really.


Developers, we need to ask harder questions

If you’re working on AI, especially models that affect people’s lives, I urge you to consider:

  • Whose values are we embedding into our models?
  • Are we treating fairness as a checklist or a conversation?
  • Can our systems adapt to different cultural realities, not just datasets?

As I prepare for a PhD, I’m committed to asking these questions and to building models that reflect local values, not just global assumptions.

Because true fairness in AI might not come from the top down, but from the Global South outward.


💬 What’s your take?

Have you ever worked on an AI project where fairness was tricky to define?

Do you think AI models should adapt to different cultures, or aim for universal rules?

Let’s talk in the comments 👇🏽