
2026

Capability at the Speed of Work

AI is not only changing how employees learn. It is also changing how work is executed, how decisions are governed, and how enterprise competence is defined. That is why the next-generation enterprise does not need a more modern learning stack; it needs a Workforce Enablement stack, in which workforce competence is built not through courses, content, and training but through workflow, guidance, readiness, AI execution, and live telemetry.


The work model changed

The experimentation phase is over; AI is here to stay. According to the World Economic Forum, “organizations are starting to integrate AI into their core enterprise workflows and the real opportunity now is to rethink how work is performed, how decisions are made, and how operating models are designed.” Gartner puts it this way: “Business units that redesign how work is done, versus simply implementing AI and encouraging people to use it, are twice as likely to achieve revenue goals.” In other words, the problem is no longer access to the model; the problem is redesigning the work.


The old Learning and Development model was designed for a very different operating environment. It assumed that humans would do most of the work themselves, carry most procedural knowledge in their heads, and become proficient through classes, content, and, finally, application. McKinsey suggests that "the future of learning is not about adding one more layer of training on top of work; it is about redesigning work itself as a form of development." This should prompt a strategic rethink: if work and learning are becoming one, then a training organization structured around learning events, however necessary, is no longer sufficient.


The pressure on this legacy model is already visible in the data. An OECD analysis shows that "one in three job vacancies is classified as having high AI exposure," yet only a small share of the training courses analyzed across four countries, between 0.3% and 5.5%, include AI in their curriculum. The same OECD brief also indicates that "most workers exposed to AI will not need to acquire specific skills in AI engineering; however, they need to acquire general skills in AI literacy to be able to use, question, and cooperate with AI systems." This is precisely why capability is no longer a training issue; it is an execution issue.


Why Workforce Enablement is a different model

Meanwhile, the enterprise is also shifting to a skills-first logic, which the OECD describes as “the development of the workforce based on demonstrated skills and competencies rather than qualifications and job titles.” This matters here because the job title is becoming an ever less useful indicator of what the enterprise needs: the impact of AI lands at the task and workflow level. Workforce Enablement must therefore focus on end-to-end workflows and task chains, not the job silo.


That is the key difference between Learning and Development and Workforce Enablement. Learning is fundamentally about helping people gain knowledge and skills. Workforce Enablement is bigger than that: it also includes workflow and task-chain definition, embedded guidance in the flow of work, readiness and certification schemes, policy and compliance integration, governed AI execution, and telemetry that verifies whether real work is being done to standard.
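As a sketch of how those components might hang together, the following hypothetical data model attaches guidance, policy references, permitted AI actions, and telemetry events to each task in a chain. None of these names come from a real product; they simply make the stack's shape concrete.

```python
# Illustrative sketch (hypothetical names throughout): one definition that
# carries a task chain together with its embedded guidance, governing policy
# references, permitted AI actions, and the telemetry used to verify work.
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    guidance: str                                         # in-flow guidance shown at this step
    policy_refs: list[str] = field(default_factory=list)  # compliance hooks
    ai_actions: list[str] = field(default_factory=list)   # governed AI execution
    telemetry_events: list[str] = field(default_factory=list)


@dataclass
class TaskChain:
    name: str
    tasks: list[Task]

    def required_policies(self) -> set[str]:
        """Every policy that compliance must wire into this workflow."""
        return {p for t in self.tasks for p in t.policy_refs}


chain = TaskChain("customer_refund", [
    Task("verify_claim", "Check the receipt against the order.",
         policy_refs=["FIN-12"], ai_actions=["lookup_order"],
         telemetry_events=["claim_verified"]),
    Task("issue_refund", "Confirm the amount before release.",
         policy_refs=["FIN-12", "SOX-4"]),
])
```

The point of modeling it this way is that guidance, policy, AI execution boundaries, and telemetry live on the workflow itself, not in a separate learning system.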


This is also why Learning and Development should not die with the advent of AI; it should evolve. McKinsey argues that the chief learning officer is best positioned to lead, given their understanding of the capabilities employees need to execute the business strategy and their ability to design a system in which work and learning are integrated. Gartner makes a similar argument: the HR function is best placed to work with IT and business leaders to ensure that AI investments drive business outcomes and that the business sees the talent implications of work redesign. The most defensible position is not that L&D owns everything. It is that L&D owns the capability and readiness piece of a cross-functional Workforce Enablement model that also includes operations, IT, security, compliance, and frontline leadership.


Governed AI execution changes the boundary

This cross-functional need becomes imperative once AI begins executing within the workflow rather than merely offering advice from the sidelines. Microsoft takes a firm stance here: authorization must be implemented with identity management, not language models. Its guidance is clear that prompt-level instructions cannot be used to validate role memberships, make deterministic access decisions, or generate authorization results. In short, there is a world of difference between AI assistance and AI execution. Assistance helps the human think; execution acts within the system. Once AI accesses data, initiates a workflow, or calls an API, authorization is an enterprise concern, not a prompt-writer's task.
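The distinction can be made concrete with a small sketch. Here the model may request a tool, but the permission check is deterministic application code tied to the requesting user's identity; nothing written in a prompt can grant access. All names (User, TOOL_PERMISSIONS, execute_tool) are hypothetical, not any vendor API.

```python
# Illustrative sketch: authorization enforced by the platform, never by
# the prompt. The permission table is maintained outside the model.
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    roles: frozenset[str]


class AuthorizationError(Exception):
    pass


# Deterministic role-to-tool mapping, owned by IT/security, not the model.
TOOL_PERMISSIONS = {
    "run_payroll_export": {"payroll_admin"},
    "read_customer_record": {"support_agent", "payroll_admin"},
}


def authorize_tool_call(user: User, tool_name: str) -> None:
    """Raise unless the requesting user's roles permit this tool."""
    allowed_roles = TOOL_PERMISSIONS.get(tool_name, set())
    if not allowed_roles & user.roles:
        raise AuthorizationError(f"{user.user_id} may not call {tool_name}")


def execute_tool(user: User, tool_name: str, payload: dict) -> str:
    # The model may *request* a tool; the platform decides whether it runs.
    authorize_tool_call(user, tool_name)
    return f"executed {tool_name} for {user.user_id}"
```

However cleverly a prompt is worded, a support agent's request to run the payroll export fails at this gate, because the decision never passes through the language model.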


This is why governed AI execution is a foundational concept of Workforce Enablement: AI execution that is bounded by role, permissions, policy, and approval logic, and that is auditable. According to Microsoft, secure agent execution must stay within the "identity boundary" of the requesting user and produce "audit trails" for security and compliance review. The rest of the operating discipline comes from Microsoft's governance maturity model: "identity," "data access," and "compliance" enforced by default; "agent observability" via logs, telemetry, and review; explicit "human oversight"; and clear "decision rights" as autonomy increases. Microsoft's Purview guidance makes the observability requirement concrete by requiring "audit logs" for both administrative actions and user interactions with agents, as well as "transcript" controls and retention policies. The NIST Generative AI Profile reinforces the same concept, requiring "documented human oversight roles and responsibilities," "monitoring and documentation of override," and "post-deployment monitoring plans" that include "appeal" and "override" mechanisms.
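One way to picture the audit-trail and oversight requirements is a minimal append-only log, assuming a hypothetical AgentAuditLog rather than any Purview API: each governed action records the requesting identity, the outcome, and the human-oversight decision, and each entry chains a hash of the previous one so after-the-fact tampering is detectable.

```python
# Illustrative sketch (hypothetical class, not a vendor API): an append-only
# audit log for agent actions. Each entry carries the requesting identity,
# the action and outcome, and an explicit human-override flag, and chains a
# SHA-256 hash of the previous entry for tamper evidence.
import hashlib
import json
import time


class AgentAuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, user_id: str, action: str, outcome: str,
               human_override: bool = False) -> dict:
        prev_hash = self._entries[-1]["hash"] if self._entries else ""
        entry = {
            "timestamp": time.time(),
            "user_id": user_id,                # identity boundary of the request
            "action": action,
            "outcome": outcome,
            "human_override": human_override,  # explicit oversight record
        }
        entry["hash"] = hashlib.sha256(
            (prev_hash + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def entries(self) -> list[dict]:
        return list(self._entries)
```

The hash chain is one simple way to make the log reviewable by compliance with some confidence that entries were not silently edited; real deployments would use a platform's managed audit store instead.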


Readiness and measurement have to change

Once capability is built across all of those domains, enterprise measurement must change as well. The CDC is clear that training effectiveness is measured by learning and the transfer of that learning, and that learner satisfaction is not a measure of effectiveness. It also notes that the real test comes later, when you can see whether the person applied what they learned on the job. NIOSH adds an operationally relevant point: "Structured training accelerates learning, reduces time to learn, improves quality, and reduces variability in task performance compared to trial and error." Together, these point to a better way to measure Workforce Enablement.


This leads to a sharper definition of readiness. In a Workforce Enablement approach, readiness is not defined by course completion. It is defined by the ability to perform to standard in live operations: to use embedded guidance and AI safely, identify exceptions, escalate appropriately, and maintain compliant performance. The OECD's work on AI skills supports this approach by focusing on general AI literacy for most workers rather than training for AI specialists. Readiness is thus the link between human capability and system capability: an employee may know the policy yet be unable to operate in a system where policy, guidance, and AI action interact in real time.


Modern capability still requires human judgment

That does not mean human expertise matters less; it means its nature will change. According to World Economic Forum analysis, “the nature of workforce change needs to be systemic, organizations should use intelligent automation and AI agents to automate repetitive work, and organizations should rethink their jobs to ensure a good balance of machine execution and human oversight.” In practice, this means organizations should stop asking workers to memorize everything and start ensuring they can oversee, intervene, and decide well when it really counts.


There is, however, a significant risk if this is not addressed. The literature on automation bias shows that over-reliance on AI recommendations is a real phenomenon, and recent research on deskilling suggests it can occur at a structural level when environments promote AI use without supporting critical judgment. This is why Workforce Enablement cannot become synonymous with deskilling. It must maintain the capacities that matter in failure modes: judgment, exception recognition, escalation discipline, and the ability to push back on the machine. Otherwise it is not Workforce Enablement; it is the hollowing out of capability.


Conclusion

The strategic implication is immediate: rather than transforming Learning and Development by adding more AI features to the learning layer, the enterprise has to rebuild capability around the workflow. That means mapping task chains end to end, identifying where AI is already entering execution, determining where human judgment is required, and building a readiness model based on how people and machines now work together. Learning and Development remains essential, but only insofar as it can transform itself from a training provider into a workforce enablement architect. In an AI-mediated enterprise, capability is no longer something people attend; it is something the enterprise designs.

Next Up: AI-Enabled Learning Product Strategy
