04 · Responsible AI

Human-in-the-loop in Public Sector AI

AI has the potential to radically improve public services. It can also cause damage that is difficult to repair. The difference lies in who makes the decisions—and whether citizens have a right to appeal.

Topic: AI · Ethics · Public Sector · By: Sofis Solutions · Reading time: ~5 min

The temptation is strong: if a model can classify files better than a civil servant, why not let it decide? If it can detect fraud faster, why not hand over the reins? The answer is not technical; it is institutional: decisions that affect citizens’ rights cannot be delegated to a model. They can be assisted, accelerated, and improved—but not delegated.

Why the public sector is different

When a commercial system makes a mistake, there is a complaint, a refund, or a change of provider. When the government makes a mistake, a right is violated: a pension denied, a scholarship lost, or a person placed on a watchlist without their knowledge.

Public decision-making has three distinctive features: it must be traceable (it is possible to establish who decided what, and on what basis), explainable (the reasoning can be stated in terms the citizen understands), and appealable (the person affected can challenge the decision before someone with the power to reverse it).

An opaque model, trained on data that may reproduce historical biases, meets none of these three criteria on its own.

The model we advocate: human-in-the-loop

At Sofis, we design AI solutions for government under a single, simple guideline:

“AI makes the recommendation, the official makes the decision, and the citizen can appeal.”

AI does the heavy lifting: it categorizes, makes suggestions, sets priorities, identifies patterns, and summarizes lengthy files. This frees up the employee’s time to do what a model cannot: apply judgment, provide context, listen to the person on the other end, and see what isn’t in the data.

The final decision, when it concerns rights, is made by a human being and is documented.
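The pattern above can be sketched in a few lines of code. This is a minimal illustration, not a real API: the `Recommendation` and `CaseDecision` names, fields, and sample values are our own hypothetical choices.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """What the model produces: an input to the decision, never the decision."""
    case_id: str
    suggested_outcome: str   # e.g. "approve", "deny"
    confidence: float        # model score, shown to the official as context
    rationale: str           # human-readable summary of the model's signals

@dataclass
class CaseDecision:
    """What gets recorded: a named official's decision, with the model's suggestion kept alongside."""
    case_id: str
    outcome: str
    decided_by: str          # always a named official, never "model"
    reason: str              # the official's own justification
    model_suggestion: str    # retained for traceability and appeals
    decided_at: str

def decide_case(rec: Recommendation, official: str, outcome: str, reason: str) -> CaseDecision:
    # The function signature forces a human identity and a human reason;
    # the model's suggestion is stored but never copied into `outcome` automatically.
    return CaseDecision(
        case_id=rec.case_id,
        outcome=outcome,
        decided_by=official,
        reason=reason,
        model_suggestion=rec.suggested_outcome,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

rec = Recommendation("EXP-2024-001", "deny", 0.87,
                     "Income above threshold in 2 of 3 documents")
decision = decide_case(rec, official="j.perez", outcome="approve",
                       reason="Third document shows an exemption the model lacked context for")
```

The design choice worth noting: the recommendation never becomes the decision directly. The official's identity and reasoning are recorded alongside the model's suggestion, and that pairing is what later makes an appeal reviewable.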

Traceability, explainability, appealability

Three principles guide every design: traceability (every recommendation and every final decision is logged, with who decided and on what basis), explainability (the system can state the factors behind a recommendation in plain language), and appealability (the citizen is informed of the decision and has a real channel to challenge it).

Where automation is possible

Not every government task requires human involvement. There are tasks, typically administrative rather than substantive, where responsible automation speeds up the process without compromising rights.

The key is to distinguish between administrative decisions (yes, automate with oversight) and substantive decisions regarding rights (no, keep humans in the loop).
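That distinction can be made explicit in a system's routing layer. The sketch below is illustrative only: the task names and the two categories are our assumptions, not an official taxonomy.

```python
# Hypothetical task categories for illustration.
ADMINISTRATIVE = {"acknowledge_receipt", "classify_document", "schedule_appointment"}
RIGHTS_AFFECTING = {"deny_benefit", "revoke_license", "flag_for_investigation"}

def route(task: str) -> str:
    """Decide how a task is handled: automate the administrative, keep humans on rights."""
    if task in RIGHTS_AFFECTING:
        return "human_review"          # substantive decision: human in the loop
    if task in ADMINISTRATIVE:
        return "automated_with_audit"  # automate, but log everything for oversight
    return "human_review"              # unknown task: default to caution
```

Defaulting unknown tasks to human review reflects the article's point: the safe failure mode is a human looking at the case, not an algorithm deciding unsupervised.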

In practice

Sofis has an AI Management Policy that governs the use of AI both internally and in client projects. Any system that incorporates AI for sensitive decisions is designed with traceability, explainability, and appealability in mind from the specification stage, not afterward.
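As an illustration of what designing these properties in "from the specification stage" might mean in practice, here is one possible shape for an audit record. All field names and values are hypothetical assumptions, not Sofis's actual schema.

```python
import json

# Hypothetical audit record: one entry per sensitive decision.
audit_record = {
    "case_id": "EXP-2024-001",
    "model_version": "classifier-v3.2",   # traceability: which model produced the suggestion
    "inputs_hash": "sha256:<hash>",       # traceability: which inputs it saw
    "recommendation": "deny",
    "explanation": [                      # explainability: factors behind the suggestion
        "income_over_threshold",
        "missing_form_B",
    ],
    "final_decision": "approve",          # the human decision, recorded separately
    "decided_by": "j.perez",
    "appeal_instructions_sent": True,     # appealability: citizen told how to contest
}

print(json.dumps(audit_record, indent=2))
```

Keeping the model's suggestion and the human's final decision as separate fields is what lets an auditor, or an appeals body, see where the two diverged and why.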

Why does this matter?

Because AI is going to rapidly permeate the government, and because the line between “assisting public servants” and “replacing them” can be crossed without anyone noticing—until a citizen finds themselves on the wrong side of an algorithm.

Placing people at the center of the process does not hinder innovation; rather, it is the prerequisite for ensuring that innovation in the public sector is compatible with human rights. And at Sofis, we believe that this compatibility is non-negotiable.
