Algorithms and the Poor: How Technology Fails the Vulnerable

By Realistiqthinker

Where This Reflection Begins…

I have sat with people who had nothing.

Not “nothing” in the casual way we use the word. I mean nothing in the real sense: no reliable income, no safety net, and no powerful person to call when things fall apart.

I have watched them navigate systems that seem designed to exhaust them. Step by step, form by form, they are pushed toward silence. Over time, their dignity begins to erode. Institutions stop seeing them as human beings and start treating them as problems to be managed.

Recently, I thought about them again.

This time, it was because of an algorithm.

The Rise of Quiet Decision-Makers

Somewhere, silently and efficiently, an algorithm was deciding which families deserved welfare support, and which did not.

No one elected it.
No one debated it in parliament.
No one asked the communities it judged a single question.

Yet, it decided. Quickly. Confidently. And often, wrongly.

At first glance, this might seem like progress. However, beneath the surface, something deeper is unfolding.

The Promise We Were Sold

When artificial intelligence entered public systems (welfare, healthcare, education, housing), we were told a comforting story.

Algorithms, we heard, are objective. They remove human bias. They treat everyone equally. They are faster, cheaper, and more consistent.

It sounded reasonable.
In fact, it sounded progressive.

But it was wrong.

And more importantly, the people who paid the price were not the ones building these systems or presenting them at conferences.

It was the poor. The marginalized. The already vulnerable.

Garbage In, Injustice Out

To understand the problem, we need to look at how these systems actually work.

Artificial intelligence learns from historical data. It studies patterns from the past and uses them to make decisions about the future.

At first, that sounds neutral.

But it is not.

Our history is not neutral. It carries centuries of inequality, discrimination, and exclusion. Therefore, when an algorithm learns from that history, it also learns its injustices.

For example, hiring systems trained on past data begin to associate certain zip codes, names, or schools with “success.” In reality, they are simply reproducing the biases that shaped those patterns.

The algorithm does not know it is being unfair. It is only being consistent with an unfair past.
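
To make this concrete, here is a deliberately simplified sketch in Python. Everything in it is invented for illustration: the zip codes, the hiring records, and the frequency-based scorer, which stands in for whatever model a real screening system might use. The point is only this: the model never sees an applicant's qualifications. It scores purely by historical outcomes, so an unfair past becomes an unfair prediction.

```python
# A minimal sketch of how a screening model can inherit bias.
# All data here is hypothetical; "zip_code" stands in for any
# proxy feature correlated with historical exclusion.

from collections import defaultdict

# Invented historical hiring records: (zip_code, was_hired).
# Past decisions, not merit, determine the labels.
history = [
    ("10001", True), ("10001", True), ("10001", False),
    ("60620", False), ("60620", False), ("60620", True),
]

# "Training": learn the historical hire rate per zip code.
hires = defaultdict(int)
totals = defaultdict(int)
for zip_code, was_hired in history:
    totals[zip_code] += 1
    hires[zip_code] += was_hired

def score(zip_code: str) -> float:
    """Predicted 'success' is just the past hire rate for that zip code."""
    return hires[zip_code] / totals[zip_code] if totals[zip_code] else 0.0

# Two equally qualified applicants, distinguished only by address.
for applicant_zip in ("10001", "60620"):
    decision = "advance" if score(applicant_zip) >= 0.5 else "filter out"
    print(applicant_zip, round(score(applicant_zip), 2), decision)
```

Run this and the applicant from "10001" advances while the applicant from "60620" is filtered out, even though the code was never told anything about either person's ability. Nothing in it is malicious. It faithfully reproduces its training data, and that is precisely the problem.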

As a result, a qualified young man from a poor neighborhood may never get a chance. He applies. He is capable. He is ready.

Yet the system filters him out in milliseconds.

No explanation.
No conversation.
No appeal.

Just silence.

What I Witnessed in Pakistan

During fieldwork in Pakistan, I saw this reality up close.

Development programs increasingly relied on data-driven tools to identify who was “most vulnerable.” On paper, this looked efficient. In practice, it was neither efficient nor fair.

Many people simply did not fit into the system.

Widows without documentation became invisible. Communities with complex forms of poverty went unrecognized. Their suffering was real, yet the system could not measure it.

The algorithm said they did not qualify.

However, the human being standing in front of me told a completely different story.

This is not a technical glitch.

It is a moral failure, hidden behind the language of efficiency.

The Distance Between Builders and Lives

Now consider this: who designs these systems?

If you walk into a technology company, you will find brilliant minds. Highly skilled engineers. Talented data scientists.

But rarely will you find someone who has lived the reality these systems aim to manage.

Rarely will you meet someone who has sat with a hungry family. Or listened to a community that has lost trust in institutions over generations.

This is not about blaming individuals.

Instead, it is about recognizing a structural gap.

The people building these systems and the people affected by them live in completely different worlds. That distance creates systems that are technically advanced—but humanly blind.

The Accountability Vacuum

When a human makes a bad decision, there is at least a chance for accountability.

You can ask questions.
You can appeal.
You can demand answers.

But when an algorithm makes a decision, things change.

Responsibility becomes unclear.

Is it the engineer? The manager? The government? The data scientist?

In reality, it often becomes no one.

The decision disappears into complexity, and the person affected is left without recourse.

For the wealthy, this may feel like inconvenience.

For the poor, it can mean losing everything: housing, support, even dignity.

This Is Not About Technology

Let us be clear.

This is not an argument against technology.

Technology has done a great deal of good. It has connected communities, amplified voices, and exposed injustices that once remained hidden.

However, we must stop pretending that artificial intelligence is neutral.

Every algorithm reflects values.
Every system distributes power.
Every decision creates winners and losers.

So, the real question is not whether AI should exist.

The real question is this:

Who is ensuring that it is just?

What Needs to Change

First, the people most affected must have a voice. Not as an afterthought, but as participants in design and decision-making.

Second, systems must be explainable. People deserve to understand why a decision affects their lives, and they must have a real way to challenge it.

Third, proximity matters.

Those who build these systems must step outside their environments. They must listen—not as observers, but as fellow human beings.

Finally, ethics cannot remain on the sidelines.

Questions of fairness, justice, and accountability do not belong only to engineers. They belong to all of us.

A Principle We Cannot Ignore

There is a simple idea in development work:

Nothing about us without us.

It sounds obvious. Yet it is rarely practiced.

As artificial intelligence becomes part of everyday life, this principle becomes urgent.

Because today, systems decide who receives help, who gets watched, and who gets left behind.

A Final Thought

The poor do not need faster systems of exclusion.

They need systems that see them, fully and humanly.

If technology cannot do that, then it is not intelligent.

It is only fast.

About the Author

Realistiqthinker is an independent thinker and writer with a background in philosophical and ethical studies, theological ethics, and international development. He holds a Certified Monitoring and Evaluation Professional qualification and has completed studies in Artificial Intelligence. His fieldwork experience spans community development contexts in Pakistan and East Africa. He writes at the intersection of philosophy, human dignity, social justice and emerging technology — asking the questions that our increasingly automated world urgently needs to face.
