The temptation is strong: if a model can classify files better than a civil servant, why not let it decide? If it can detect fraud faster, why not hand over the reins? The answer is not technical; it is institutional: decisions that affect citizens’ rights cannot be delegated to a model. They can be assisted, accelerated, and improved—but not delegated.
Why public decisions are different
When a commercial system makes a mistake, there is a complaint, a refund, or a change of provider. When the government makes a mistake, a right is violated: a pension denied, a scholarship lost, or a person placed on a watchlist without their knowledge.
Public decision-making has three distinctive features:
- Democratic legitimacy: Citizens must be able to understand how and why a decision was made.
- Accountability: There must be an identifiable person in charge, not an algorithm.
- Right to appeal: Any decision that affects rights must be subject to review.
An opaque model, trained on data that may reproduce historical biases, does not meet any of the three criteria on its own.
The model we advocate: human-in-the-loop
At Sofis, we design AI solutions for government under a single, simple guideline:
AI does the heavy lifting: it categorizes, makes suggestions, sets priorities, identifies patterns, and summarizes lengthy files. This frees up the employee’s time to do what a model cannot: apply judgment, provide context, listen to the person on the other end, and see what isn’t in the data.
The final decision, when it concerns rights, is made by a human being and is documented.
Traceability, explainability, appealability
Three principles guide every design:
- Traceability: Every suggestion the model makes is recorded along with the input data that generated it.
- Explainability: The official must be able to understand why the model made the suggestion it did, at least in functional terms.
- Right to appeal: Citizens affected by an AI-assisted decision have the right to request a human review—and the system must facilitate this.
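To make the three principles concrete, here is a minimal sketch of what a decision record can look like in code. This is an illustration, not Sofis's actual implementation: the names `DecisionRecord`, `finalize`, and `request_human_review` are hypothetical. The idea is that the model's suggestion, the data behind it, and its explanation are stored (traceability and explainability), a decision becomes final only when a named official signs it (accountability), and the citizen can always trigger a re-review (appealability):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One AI-assisted decision, kept for audit and appeal."""
    case_id: str
    model_suggestion: str            # what the model proposed
    input_snapshot: dict             # traceability: data the suggestion was based on
    explanation: str                 # explainability: functional reason for the suggestion
    final_decision: Optional[str] = None
    decided_by: Optional[str] = None       # accountability: an identifiable official
    decided_at: Optional[datetime] = None
    appeal_requested: bool = False

    def finalize(self, decision: str, official_id: str) -> None:
        """A decision on rights is only final once a named human signs it."""
        self.final_decision = decision
        self.decided_by = official_id
        self.decided_at = datetime.now(timezone.utc)

    def request_human_review(self) -> None:
        """Appealability: the citizen can always trigger a human re-review."""
        self.appeal_requested = True

# Hypothetical case: the model suggests denial, a human confirms, the citizen appeals.
record = DecisionRecord(
    case_id="EXP-2024-0042",
    model_suggestion="deny",
    input_snapshot={"income": 31200, "household_size": 4},
    explanation="Declared income exceeds the programme threshold.",
)
record.finalize(decision="deny", official_id="official-17")
record.request_human_review()
```

The point of the structure is that nothing here depends on the model: whatever the AI component is, the record of who decided, on what basis, and whether a review was requested survives independently of it.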
Where automation is possible
Not every government task requires human involvement. There are tasks where responsible automation speeds up the process without compromising rights:
- Verification of compliance with objective requirements.
- Automatic document renewals when there are no changes.
- Scheduling, notifications, reminders.
- Initial sorting of files for human prioritization.
- Pattern detection, which is then investigated by a human.
The key is to distinguish between administrative decisions (yes, automate with oversight) and substantive decisions regarding rights (no, keep humans in the loop).
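That distinction can be expressed as a simple routing rule. The sketch below is hypothetical (the `TaskKind` categories and `route` function are illustrative, not a Sofis API): every task type from the list above routes to automation with oversight, and anything that decides on substantive rights routes to a human:

```python
from enum import Enum, auto

class TaskKind(Enum):
    OBJECTIVE_CHECK = auto()      # verifying compliance with objective requirements
    DOCUMENT_RENEWAL = auto()     # automatic renewal when nothing has changed
    NOTIFICATION = auto()         # scheduling, notifications, reminders
    TRIAGE = auto()               # initial sorting of files for human prioritization
    PATTERN_DETECTION = auto()    # flagging patterns for human investigation
    RIGHTS_DECISION = auto()      # granting, denying, or sanctioning: affects rights

def route(kind: TaskKind) -> str:
    """Administrative tasks are automated (with oversight); rights stay human."""
    if kind is TaskKind.RIGHTS_DECISION:
        return "human_in_the_loop"
    return "automate_with_oversight"
```

Note that triage and pattern detection are automatable precisely because their output is an input to a human, not a decision in itself; the moment a task's output directly changes someone's legal situation, it falls on the human side of the rule.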
In practice
Sofis has an AI Management Policy that governs the use of AI both internally and in client projects. Any system that incorporates AI for sensitive decisions is designed with traceability, explainability, and appealability in mind from the specification stage, not afterward.
Why does this matter?
Because AI is going to rapidly permeate the government, and because the line between “assisting public servants” and “replacing them” can be crossed without anyone noticing—until a citizen finds themselves on the wrong side of an algorithm.
Placing people at the center of the process does not hinder innovation; rather, it is the prerequisite for ensuring that innovation in the public sector is compatible with human rights. And at Sofis, we believe that this compatibility is non-negotiable.