
2026
The Human-AI Operating Model
Most large enterprises already have a lot of AI in the environment. They have copilots in their productivity suites. They have model-backed search and retrieval. They have early agents in service, operations, and support. They have automation wins, dashboards, proofs of concept, and a governance committee. What they lack is a system for how it is all supposed to work together. AI capability is accumulating much faster than enterprise coordination, and that is the new management challenge. Microsoft’s 2025 Work Trend Index is frank about the extent of the shift. Drawing on a survey of 31,000 workers in 31 countries, LinkedIn’s labor market trends, and trillions of Microsoft 365 productivity signals, it sees a new organizational template emerging, centered on “hybrid” teams of people and agents and driven by intelligence on demand.
That’s why the right capstone question is no longer whether an enterprise has adopted AI. The right question is whether it has designed the system through which the work of humans and machines is structured, governed, measured, and improved together. This article names that system the Human-AI Operating Model. Like other terms in this series, it is a synthesis term: not one in common use outside the series, although the phenomenon it refers to is well supported in the source material. Microsoft has named the emerging form the Frontier Firm. McKinsey has named the new paradigm the agentic organization and has written, "Organizations are evolving toward new models in which humans and virtual or physical AI agents will work side by side to produce value." In McKinsey’s framing, operating model, governance, and workforce are not secondary questions.
That distinction matters because tools do not build operating models; they build local capability. An operating model determines how that capability is converted into disciplined execution. Microsoft’s research indicates that 82 percent of executives see this as a crucial year to rethink strategy and operations, and 81 percent expect agents to be somewhat or substantially integrated into their companies’ AI plans within the next 12 to 18 months. So the clock is already running. Companies that continue to think of AI as a soft layer of tools will not remain in the “early” category for long; they will simply become structurally misaligned. McKinsey makes a similar argument from a somewhat different perspective in its subsequent work on the six shifts required for the agentic organization, including the need to rewire workflows along with the roles, skills, structures, and systems that hold the enterprise together.
A proper Human-AI Operating Model begins with work architecture. Most organizations still organize work primarily through jobs, reporting lines, and organizational boxes. That remains necessary for accountability but is no longer sufficient for execution. Microsoft’s Work Trend Index points to “Work Charts,” in which people organize around outcomes rather than being locked inside static organizational structures. McKinsey goes a step further, arguing that the highest value will come not from adding copilots to existing processes but from end-to-end redesign that is AI-first by design. The implication for the enterprise is to rethink the path of work itself, not just the role catalog around it: determining where humans should lead, where AI should augment, where agents should act, and where the system should stop and hand off to a person.
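The routing decision described above can be made concrete. The sketch below is a toy illustration, not a prescription: the step attributes (`reversible`, `judgment_heavy`, `customer_facing`) and the routing rules are hypothetical examples of the kind of explicit criteria a work architecture would have to define.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    HUMAN_LEADS = "human_leads"
    AI_AUGMENTS = "ai_augments"
    AGENT_EXECUTES = "agent_executes"
    HANDOFF = "handoff_to_person"

@dataclass
class WorkStep:
    name: str
    reversible: bool       # can the action be undone cheaply?
    judgment_heavy: bool   # does it require contextual tradeoffs?
    customer_facing: bool  # does the output reach a customer directly?

def assign_mode(step: WorkStep) -> Mode:
    """Toy routing rule: agents take reversible, low-judgment work;
    anything judgment-heavy or customer-facing keeps a human in the loop."""
    if step.judgment_heavy:
        return Mode.HUMAN_LEADS
    if step.customer_facing:
        return Mode.AI_AUGMENTS
    if step.reversible:
        return Mode.AGENT_EXECUTES
    return Mode.HANDOFF
```

The point of such a rule table is not the specific thresholds but that the routing is written down, reviewable, and changeable, rather than decided implicitly by whoever deploys a tool.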
The second requirement is delegated authority. As soon as AI acts in live workflows, questions arise that traditional technology governance was rarely precise enough to address. What classes of actions are delegable? Under what thresholds? In what contexts? What is advisory, and what is executable? The National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (AI RMF) is a key reference here, because it treats governance as a cross-cutting discipline throughout the lifecycle rather than a single event. The framework’s structure of Govern, Map, Measure, and Manage reflects this, and the NIST Core states that processes for human oversight must be defined, measured, and documented. That is not boilerplate governance language. It is a direct call to treat delegated machine action as a matter of explicit design, explicit limits, and explicit accountability.
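What "explicit limits" can look like in practice is a delegation policy that answers those four questions in data rather than in prose. This is a minimal sketch under assumed conventions: the action classes (`refund`, `contract_edit`), the threshold values, and the three dispositions are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationRule:
    action_class: str  # e.g. "refund" (hypothetical example)
    executable: bool   # False means the agent's output is advisory only
    max_value: float   # threshold above which human approval is required

# Hypothetical policy table: which action classes an agent may execute,
# and under what monetary threshold.
POLICY = {
    "refund": DelegationRule("refund", executable=True, max_value=200.0),
    "contract_edit": DelegationRule("contract_edit", executable=False, max_value=0.0),
}

def authorize(action_class: str, value: float) -> str:
    """Return the disposition for a proposed agent action."""
    rule = POLICY.get(action_class)
    if rule is None or not rule.executable:
        return "advisory_only"      # agent may draft; a human must act
    if value > rule.max_value:
        return "escalate_to_human"  # over threshold: explicit approval needed
    return "execute_and_log"        # within delegated authority
```

The default for an unknown action class is deliberately the most restrictive disposition, which mirrors the framework's emphasis on defined, documented oversight rather than implicit permission.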
The third requirement is runtime supervision, and this is where many organizations remain unprepared. They know how to approve a use case; they are far less prepared to run one when it is dynamic. Anthropic’s guidance on building effective agents is helpful here in part because it draws a clear distinction between workflows and agents: a workflow follows a predefined series of steps in code, while an agent dynamically directs its own process and tool usage. Anthropic also observes that the most effective agents are built from simple patterns rather than complex frameworks, and cautions teams to choose the simplest solution that actually works. This matters because runtime supervision gets harder as agents become more flexible. A Human-AI Operating Model must specify not only what agents can do, but also how humans can observe them, inspect their tool usage, and stop them when necessary. Otherwise, oversight is just for show.
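The three supervision capabilities named above (observe, inspect, stop) can be sketched as a thin wrapper around an agent's tool calls. This is an illustrative pattern, not any vendor's API: the class name, the blocked-tool list, and the step budget are all assumptions for the example.

```python
import time

class StopAgent(Exception):
    """Raised when a supervisor halts the agent's execution."""

class SupervisedRunner:
    """Wraps every tool call an agent makes so that humans can
    observe a trace, enforce limits, and stop the run at any time."""

    def __init__(self, max_steps=10, blocked_tools=frozenset({"wire_transfer"})):
        self.max_steps = max_steps
        self.blocked_tools = blocked_tools  # hypothetical non-delegable tools
        self.trace = []                     # inspectable tool-use log
        self.halted = False                 # human-operable kill switch

    def stop(self):
        self.halted = True

    def call_tool(self, name, fn, *args):
        if self.halted:
            raise StopAgent("halted by supervisor")
        if name in self.blocked_tools:
            raise StopAgent(f"tool '{name}' is not delegable")
        if len(self.trace) >= self.max_steps:
            raise StopAgent("step budget exhausted")
        result = fn(*args)
        self.trace.append((time.time(), name, args, result))
        return result
```

Because every tool invocation flows through `call_tool`, the trace gives reviewers a complete record of what the agent actually did, and the kill switch works mid-run rather than only at approval time.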
The fourth requirement is the design of human-AI collaboration. The future enterprise is not simply one where machines contribute more and humans contribute less; it is one where the nature of contribution shifts. Microsoft’s human-agent team framework illustrates this point, as does McKinsey’s agentic-organization research, which puts joint value creation by humans and agents at the heart of the next operating model. In short, AI systems contribute information gathering, alternative generation, anomaly detection, output production, and acceleration of a sequence. Humans contribute judgment, interpretation, exception handling, tradeoffs, and accountable authorization. If the division of contribution is ambiguous, the operating model is not working. The enterprise has to design explicitly how humans and machines each contribute and how they work together.
The fifth requirement is telemetry and management review. An operating model that can’t be measured can’t be managed. NIST’s AI RMF puts measurement at the center, not on the periphery. ISO/IEC 42001 builds on that by defining an AI management system as something organizations must establish, implement, maintain, and improve. That’s not just management-speak; it’s significant. It means the enterprise can’t treat AI governance as a static ruleset. It requires ongoing evidence: workflow data, override rates, intervention frequency, output patterns, and business outcomes. These are the signals that allow the business to distinguish speed from fragility. Without them, the business will conflate the two.
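The signals listed above only become usable in a management review once they are aggregated per workflow. The sketch below assumes a hypothetical event shape (`workflow`, `outcome`) and two derived metrics; real telemetry schemas would be richer, but the shape of the exercise is the same.

```python
from collections import Counter

def review_signals(events):
    """Aggregate per-workflow outcomes into the signals a management
    review would inspect: volume, override rate, escalation rate.

    `events` is an iterable of dicts like
    {"workflow": "claims", "outcome": "accepted" | "overridden" | "escalated"}
    (a hypothetical event schema for illustration)."""
    by_workflow = {}
    for event in events:
        by_workflow.setdefault(event["workflow"], Counter())[event["outcome"]] += 1
    report = {}
    for workflow, counts in by_workflow.items():
        total = sum(counts.values())
        report[workflow] = {
            "volume": total,
            "override_rate": counts["overridden"] / total,
            "escalation_rate": counts["escalated"] / total,
        }
    return report
```

A rising override rate on one workflow is exactly the kind of evidence that distinguishes speed from fragility: the agent is still fast, but humans are quietly correcting it.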
The sixth requirement is workforce adaptation as an operating variable, not a side program. This is where many AI strategies still underestimate the challenge. The World Economic Forum’s Future of Jobs Report 2025 finds that skill gaps are the largest barrier to business transformation for 63 percent of employers, that 59 percent of the global workforce will require training by 2030, and that 39 percent of workers’ core skills are expected to change by 2030. PwC’s 2025 AI Jobs Barometer makes the speed of change even harder to wave away: skills in AI-exposed jobs are changing 66 percent faster than in other jobs, and industries more exposed to AI show roughly three times higher growth in revenue per employee. Workforce design, in other words, is no longer a step downstream from operating-model design. It’s part of it. The enterprise won’t transform successfully if it fails to adapt roles, skills, incentives, and management expectations as fast as it changes its workflows and control logic.
Combine those six requirements and the architecture starts to make sense. A Human-AI Operating Model is not simply a glorified governance model, nor is it a glorified technology stack. It’s the integration that ties together work design, delegation, runtime control, collaboration design, telemetry, and workforce adaptation within a single operating environment. This is where the “control plane” concept becomes relevant; like the other terms used throughout this series, it is a synthesized one. A control plane is what makes the operating model enforceable: it sits within the architecture and spans the enterprise. Microsoft’s Frontier Firm research, McKinsey’s agentic-organization model, NIST’s lifecycle governance, and ISO’s management-system definition all seem to be trending in this direction, though they don’t use the same language.
This also helps explain why so many enterprise AI programs show more breadth than depth. They have local wins but lack coordinated architecture. They have agents but lack role redesign. They have dashboards but lack management discipline. They have policy documents but lack runtime enforcement. They have diffuse usage but lack a stable model of how work is meant to operate. That doesn’t drive transformation; it drives partial capability with dubious control. McKinsey’s observation that add-on copilots typically deliver only limited productivity benefits and rarely impact the profit-and-loss statement should be read in this light. Enterprises don’t need more AI surface area. They need a better system for executing their AI.
That’s the closing point of the series. The key question is not whether AI is in the environment. It is whether the enterprise has built an operating model capable of absorbing it. Enterprises that achieve this will be in a vastly better place to scale automation without sacrificing accountability, agents without surrendering control, and work without blurring ownership. Those that don’t will continue to build ever-more-powerful tools inside ever-weaker systems. What will separate the next wave of enterprise transformation is the line between scalable performance and managed chaos.