To solve the challenges of organizations and communities through intelligent, secure, sustainable, and people-centered solutions, so they generate real value in their social and productive contexts.
Vision
To be the company chosen by organizations seeking to innovate with quality, purpose, and trust in the intelligent era.
Sofis Solutions was founded in 2005 in the city of Montevideo, Uruguay.
Since its inception, the main driver has been, and remains, quality. This applies to its processes, its products, and its relationships with the environment.
The internationalization of the company was one of its founding objectives. In a first stage, it expanded from Uruguay, and in a second stage, it opened offices in other Latin American countries. It currently has offices in Montevideo, Panama, El Salvador, and Ecuador.
CMMI-DEV-3
National Quality Award
ISO 9001:2015 - Quality Management System
ISO 37001:2016 - Anti-Bribery Management System
ISO 14001:2015 - Environmental Management System

Sofis Solutions integrates environmental, social, and governance (ESG) principles into its management and operations, driving sustainability through digital transformation. Its strategic approach prioritizes energy efficiency, digital inclusion, and transparency in digital governance, contributing to the responsible development of organizations.
Digital Patrols, Ecuadorian Bovine Information System, Easy Budget UY, Digital Portfolio, SIGES Teachers App, SIGES Parents App.
It is an initiative of the Intelligent Solutions Division of Sofis Solutions that promotes the adoption of artificial intelligence as a key driver of efficiency and effectiveness in the intelligent era.
It integrates both administrative and operational processes, promoting an organizational evolution where technology amplifies knowledge, optimizes decision-making, and generates value in a sustainable and inclusive way.
In this interview, we speak with the Sofis Solutions Software Engineering Group, a team that has been actively working on the evolution of its development practices in response to the incorporation of generative artificial intelligence. With a trajectory marked by continuous improvement and the application of reference frameworks such as CMMI-DEV, the group shares its vision of how software engineering is transformed when AI moves from being an experimental tool to becoming an integrated component of the process.
Throughout the interview, we address the impact of AI on the software life cycle, the tools developed by the organization, governance and decision-making challenges, and how metrics and quality practices are being adapted in a context of AI-augmented software engineering. The goal is to offer a clear and practical view of how to combine technological innovation with process discipline and a focus on value.
Generative AI has had a cross-cutting impact throughout the entire software development life cycle, mainly by accelerating activities and changing the way teams work and make decisions. Today it acts as a permanent assistant that accompanies the process from the initial stages through product maintenance.
For example, in requirements analysis and definition, it facilitates moving from natural language to preliminary technical artifacts, allowing teams to reach a shared understanding of the problem more quickly. In design and architecture, it helps explore alternatives and identify early risks, functioning as support for the team’s judgment rather than as a source of automatic decisions.
During implementation, the impact on productivity is highly visible: AI enables the generation of base code, supports the understanding of legacy systems, and assists with refactoring tasks.
At Sofis Solutions, we have developed our own tools aimed at integrating generative AI in a practical and governed way into the development life cycle.
On the one hand, ReqGen, our intelligent tool for requirements discovery, analysis, and specification, supports the early stages of the life cycle by facilitating the generation and refinement of requirements from natural language. Its focus is on improving clarity, traceability, and consistency of early artifacts, reducing ambiguities from the outset.
On the other hand, Sofian acts as a cross-cutting assistant for engineering teams. It supports analysis, development, and maintenance tasks, helping to understand code, propose improvements, and accelerate problem resolution, always under the supervision and judgment of the team.
Both tools reflect our vision of AI as an enabler: it does not replace people, but rather enhances the teams’ ability to work better, with higher quality and a focus on value.
The application of CMMI-DEV practices provides a framework of order, consistency, and governance that is key when software development is accelerated by generative AI. AI makes it possible to do more and faster, but CMMI ensures that this “faster” does not come at the expense of quality, traceability, or product sustainability.
In an AI-assisted process, CMMI helps maintain clarity in requirements management, change control, and alignment with business objectives, preventing automatically generated artifacts from becoming noise or technical debt. It also reinforces the definition of roles and responsibilities, which is essential when AI becomes another actor within the engineering process.
In addition, CMMI strengthens measurement and analysis, enabling an objective evaluation of how AI impacts productivity, quality, and risk. This supports data-driven decision-making rather than relying solely on perceptions. Ultimately, the combination of CMMI-DEV with generative AI allows the use of these technologies to be scaled in a controlled, repeatable, and reliable manner, while maintaining a focus on engineering excellence and value delivery.
Governance and decision-making become central when AI is an active part of the development process, and this is where CMMI provides distinctive value. AI introduces speed and generative capacity, but also new risks: implicit decisions, lack of traceability, or excessive dependence on automatic outputs. CMMI helps frame this usage within clear rules.
From a governance perspective, CMMI establishes which decisions can be assisted by AI and which must remain under the explicit responsibility of people. This allows AI to be used as technical support without diluting accountability or the professional responsibility of the engineering team.
In terms of decision-making, CMMI reinforces the systematic use of objective information. In an AI context, this implies not only measuring the product and the process, but also observing how AI impacts timelines, defects, rework, or technical debt. Decisions move away from intuition or technological enthusiasm and toward evidence-based reasoning.
Finally, CMMI promotes a progressive and controlled adoption of AI. Instead of incorporating tools in an isolated or reactive way, they are integrated into defined processes, with evaluation criteria and continuous improvement. In this way, AI becomes a governed strategic asset rather than a risk factor for software quality or sustainability.
The incorporation of generative AI has expanded and refined the process metrics model rather than replacing it. With CMMI as a reference framework, traditional metrics remain valid, but are now complemented with new perspectives aimed at understanding the real effect of AI on process performance.
On the one hand, classic measurements of productivity, quality, and compliance are maintained, but there is now a need to distinguish which part of the outcome is AI-assisted and how that impacts time, rework, or defects. This helps avoid misleading interpretations, such as assuming improvements without objective evidence.
On the other hand, new metrics emerge related to the responsible use of AI, such as the stability of generated artifacts, the level of human intervention required, or the frequency of subsequent corrections. These measurements help decide where AI provides the greatest value and where its use should be limited.
In this context, the metrics model ceases to be merely a control mechanism and becomes a tool for organizational learning. It enables the adjustment of practices, improvement of AI governance, and the support of data-driven decisions aligned with the continuous improvement promoted by CMMI.
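As a concrete illustration of this kind of split measurement, the sketch below is a minimal example, not the organization's actual tooling: all field names and values are assumptions. It tags work items as AI-assisted or manual and compares rework and defect indicators between the two groups, which is one way to separate perceived gains from evidence.

```python
from dataclasses import dataclass

# Illustrative sketch only: tagging work items by origin so classic
# productivity and quality metrics can be compared for AI-assisted
# versus manual work. Field names are hypothetical.

@dataclass
class WorkItem:
    item_id: str
    ai_assisted: bool        # whether generative AI contributed to the artifact
    effort_hours: float      # total effort, including review and rework
    rework_hours: float      # effort spent correcting the artifact after delivery
    defects_found: int       # defects traced back to this item

def split_metrics(items: list[WorkItem]) -> dict[str, dict[str, float]]:
    """Compare rework ratio and defect density for AI-assisted vs. manual work."""
    result = {}
    groups = {
        "ai_assisted": [i for i in items if i.ai_assisted],
        "manual": [i for i in items if not i.ai_assisted],
    }
    for label, group in groups.items():
        effort = sum(i.effort_hours for i in group) or 1.0  # avoid division by zero
        result[label] = {
            "items": len(group),
            "rework_ratio": sum(i.rework_hours for i in group) / effort,
            "defects_per_item": sum(i.defects_found for i in group) / (len(group) or 1),
        }
    return result

# Example usage with invented data:
items = [
    WorkItem("US-101", True, 6.0, 1.0, 1),
    WorkItem("US-102", False, 10.0, 0.5, 0),
]
print(split_metrics(items))
```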
The level of human intervention is measured indirectly, based on process evidence rather than individual perception. The focus is on understanding how much of the work generated with AI support requires corrections, validations, or rework to meet the expected standard.
In practice, for example, we observe how many iterations are needed between an AI-assisted output and its final acceptance, the volume of manual changes made to generated artifacts, and the additional time invested in review and adjustment. These indicators allow us to infer the real degree of AI autonomy in each activity.
In addition, the stability of results is analyzed: if an AI-generated artifact requires minimal revisions and remains stable over time, human intervention was mainly validation; if, on the other hand, it requires frequent adjustments or later corrections, the intervention was substantial.
This approach enables informed decisions: identifying activities where AI already provides value with low supervision cost and others where greater human control is still necessary, aligning the use of AI with process quality and continuous improvement principles.
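A minimal sketch of how such indicators could be derived from process evidence is shown below. The field names, thresholds, and classification labels are illustrative assumptions rather than the team's actual metrics model; the point is only that iteration counts, manual change volume, and review time can be turned into a simple, repeatable signal of how much human intervention an AI-assisted artifact required.

```python
from dataclasses import dataclass

# Illustrative sketch only: inferring the level of human intervention on an
# AI-assisted artifact from process evidence. Thresholds are assumptions.

@dataclass
class ArtifactHistory:
    artifact_id: str
    review_iterations: int       # rounds between AI-assisted output and final acceptance
    lines_generated: int         # size of the AI-assisted output
    lines_changed_manually: int  # manual edits before and after acceptance
    review_hours: float          # additional time invested in review and adjustment

def intervention_level(h: ArtifactHistory) -> str:
    """Classify intervention as 'validation' or 'substantial' using simple heuristics."""
    change_ratio = h.lines_changed_manually / max(h.lines_generated, 1)
    if h.review_iterations <= 1 and change_ratio < 0.10:
        return "validation"    # artifact accepted essentially as generated
    return "substantial"       # frequent adjustments or heavy manual rewriting

# Example: one review iteration and 5% manual change -> "validation"
print(intervention_level(ArtifactHistory("REQ-001", 1, 200, 10, 0.5)))
```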
We envision the next year as a stage of consolidation and maturity, rather than experimentation. AI is already embedded in day-to-day development; the challenge now is to deepen its use within CMMI practices in a consistent, measurable, and governed way.
From the CMMI perspective — in line with the guidelines promoted by the CMMI Institute — the focus will be on strengthening the definition of processes where AI is an explicit enabler, clarifying responsibilities, acceptance criteria, and human control points. It is not just about “using AI,” but about clearly establishing how, when, and for what purpose it is used within the standard engineering process.
We also expect strong evolution in measurement and analysis. The work will move toward more refined metrics models that allow us to assess the real impact of AI on productivity, quality, and sustainability, and to use that information for decision-making at both project and organizational levels.
Finally, we foresee a deepening of continuous improvement: periodically reviewing which AI-assisted practices work, which generate risks, and how to adjust them. In this sense, CMMI continues to be the framework that allows us to scale AI-augmented software engineering without losing control, quality, or focus on business value.