Compound risks do not bring down systems; institutional isolation does

By Enrique Alfredo González Huitron

For decades, organizations have tried to manage risk by breaking it down into pieces. Cybersecurity was assigned to one team, technology to another, compliance to a third, and personnel management to a fourth. This approach worked, at least for a time, when risks were isolated, linear, and predictable. But that world no longer exists.

Today, threats do not arrive one at a time, nor do they respect organizational structures. They are compound risks: interdependent interactions among technological, human, operational, economic, and geopolitical factors that mutually reinforce and amplify one another's impacts. The World Economic Forum's Global Risks Report 2026 confirms this clearly: we have entered an era in which risks not only accumulate but interact, generating cascading, non-linear effects that are increasingly difficult to anticipate or contain.
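The difference between risks that merely accumulate and risks that interact can be sketched in a few lines. All numbers and the amplification factor below are illustrative assumptions, not figures from the report:

```python
# Hypothetical illustration of compound risk: each risk alone has a modest
# impact, but when risks interact, each realized risk amplifies the next,
# so total expected loss grows non-linearly rather than additively.
def expected_loss(impacts, amplification=1.0):
    """Sum impacts; each realized risk scales the next by `amplification`."""
    total, factor = 0.0, 1.0
    for impact in impacts:
        total += impact * factor
        factor *= amplification
    return total

impacts = [10.0, 10.0, 10.0]           # assumed impact units per risk
print(expected_loss(impacts))          # isolated risks simply add: 30.0
print(expected_loss(impacts, 1.5))     # interacting risks cascade: 47.5
```

With no interaction the three risks cost 30 units; with a modest 1.5x reinforcement between them, the same three risks cost 47.5 units. The point of the sketch is only that interaction, not accumulation, drives the growth.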

In this context, the most important question is no longer whether the organization has enough controls, tools, or policies. The real question is whether it can work in a coordinated, flexible, and multidisciplinary way. Today, the most effective form of prevention, and of resilience, is not more technology but greater institutional alignment.

Organizational silos not only slow implementation; they exacerbate compound risks. When each department sees only its part of the problem, no one sees the system as a whole. Technology teams deploy solutions without adequate understanding of their human impact. Security teams enforce controls that create friction in day-to-day operations. HR manages talent without a clear view of automation or AI-driven decision-making. And senior management makes strategic decisions based on fragmented information.

This dynamic has been widely documented by Harvard Business Review. In the essay "The Silo Mentality: How to Break Down Barriers" (Paul A. Thompson, 2015), the journal explains how silos impair the quality of decision-making, slow institutions down, and create blind spots, especially in complex environments. And in a world of compound risk, those blind spots quickly become operational, security, and audit vulnerabilities.

The World Economic Forum reinforces this perspective by emphasizing that most critical global risks no longer belong to a single category. Cybersecurity is inseparable from geopolitics; artificial intelligence intersects with social stability and trust; digital infrastructure is directly linked to the legitimacy of institutions. Managing these risks in isolation from one another is not just an outdated approach; it is itself a risk.

More than any other technology, artificial intelligence embodies a duality: it is at once a multiplier of potential and a multiplier of risk. According to the Global Risks Report 2026, AI acts as a systemic accelerator: it can significantly improve productivity and decision-making, but deployed without proper governance it can also amplify misinformation, fraud, operational errors, and bias.

The WEF white paper Latin America in the Age of Intelligence: A New Path to Growth (prepared in collaboration with McKinsey & Company, 2026) expands this idea, asserting that the true value of AI stems not from isolated efficiency gains but from reimagining processes end to end, provided that people remain at the heart of the process and that controls are built in from the start.

This conclusion aligns with McKinsey's findings in the report "Why AI Transformations Fail—and How They Work" (McKinsey & Co., 2023). The research shows that most AI initiatives fail not because of technical constraints, but because organizations treat AI as a standalone project rather than as a cross-functional capability. When AI is implemented inside silos, it delivers marginal benefits while dramatically increasing systemic risk.

A recurring pattern emerges in major cyber incidents, operational failures, and reputational crises: individuals are rarely the weakest link. They are often simply the least well-integrated. Employees fail not because of a lack of competence, but because they work within poorly designed systems: systems with conflicting incentives, unclear responsibilities, and tools that ignore how work actually gets done. The World Economic Forum consistently emphasizes that intelligent economies work only when people are placed at the center, supported by digital literacy, continuous training, and active participation in transformation initiatives.

This proposition echoes the Harvard Business Review article "Technology doesn't drive change—people do" (Ashley Goodall, 2019), which offers a simple, often-neglected idea: technology alone doesn't reduce risk. Without human alignment, technologies merely redistribute risk in less obvious, and often more dangerous, ways.

Cybersecurity and artificial intelligence, then, are not purely technical challenges; at their core they are organizational, cultural, and leadership challenges.

This is where frameworks such as ISO/IEC 42001:2023 come into play. Without going into technical detail, the standard rests on a fundamental principle: AI should be governed as an integrated management system, aligned with business strategy and risk management, with clearly defined organizational responsibilities.

ISO/IEC 42001 focuses on several key ideas for mitigating AI risks, including:

Systematic assessment of AI risks and impacts.

Clear roles and accountability.

Integration of AI into existing business processes, rather than parallel structures.

Continuous improvement and institutional learning.
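The ideas above can be sketched as a minimal AI risk register: every AI use case carries a documented impact assessment, a named accountable owner, the business process it is embedded in, and a scheduled review date. All field names and example values here are illustrative assumptions, not taken from the standard:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: one register entry per AI use case, encoding
# systematic assessment, clear accountability, process integration,
# and a review cadence for continuous improvement.
@dataclass
class AIRiskEntry:
    use_case: str
    impact_assessment: str       # summary of the systematic risk/impact review
    accountable_owner: str       # a named role, not a whole team
    business_process: str        # where the system is embedded, not a silo
    next_review: date            # drives continuous institutional learning

register = [
    AIRiskEntry(
        use_case="credit-decision scoring model",
        impact_assessment="bias and explainability review completed",
        accountable_owner="Chief Risk Officer",
        business_process="retail lending approval workflow",
        next_review=date(2026, 6, 30),
    ),
]

# A simple governance check: flag entries whose review date has lapsed.
overdue = [e.use_case for e in register if e.next_review < date.today()]
```

The design choice worth noting is that the owner is a role and the process is a business process: responsibility and integration are recorded explicitly, rather than left implicit inside a technical team.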

The real value here is not in the certification itself, but in the philosophy behind it: AI cannot be governed by a single team or a single role. Effective governance requires cooperation among technology, security, legal, business, and human-resources functions.

When individuals, technology, and cybersecurity work separately, the result is fragmented and fragile. When they work in alignment, the impact multiplies.

This transformation requires:

Truly multidisciplinary decision platforms.

A common language between technical teams and business teams.

Shared measurement indicators that assess value creation, not just compliance.

Processes designed to anticipate failure, not just respond to it.
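A shared indicator of the kind listed above can be sketched as a single score that technical and business teams both read, blending value creation with compliance instead of tracking them in separate dashboards. The signal names and weights are assumptions for illustration only:

```python
# Illustrative sketch of a shared cross-team indicator (all weights and
# signal names are assumed, not drawn from any cited framework).
def shared_resilience_score(signals):
    """Weighted blend of cross-team signals, each normalized to 0..1."""
    weights = {
        "incident_recovery": 0.3,   # how fast operations recover under stress
        "control_coverage": 0.2,    # compliance: share of controls in place
        "value_delivered": 0.3,     # business outcomes enabled, not blocked
        "people_readiness": 0.2,    # training and participation in change
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

score = shared_resilience_score({
    "incident_recovery": 0.8,
    "control_coverage": 0.9,
    "value_delivered": 0.6,
    "people_readiness": 0.7,
})   # weighted sum, roughly 0.74
```

The point of a single blended score is the common language: security cannot maximize control coverage while ignoring value delivered, and vice versa, because both move the same number.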

McKinsey summarizes this dynamic in the report "Organizing for an Era of Urgency" (McKinsey Quarterly, 2020), which shows that the most resilient institutions are not those with the most controls, but those that learn and coordinate fastest under pressure. That capacity to learn is an emergent property of institutional alignment.

Breaking down silos is no longer just a matter of efficiency; it is a matter of survival. The world we live in is governed by compound risks, and fragmentation itself becomes a structural fragility. Investing in advanced technology or tightening security policies is insufficient if people, processes, and decision-making remain separate.

The real strategic necessity today is to build institutions capable of thinking and acting as integrated systems, where artificial intelligence acts as a deliberate enabler, cybersecurity supports business resilience, and the human being remains at the core of intelligent decision-making: the human in the loop, always.

Alignment is no longer an aspirational concept, but the minimum required to act quickly, flexibly, and confidently in the contemporary risk landscape.
