Reframing AI Adoption Strategy by Equipping Middle Managers to Lead Change


Key Points

  • While many companies have focused on AI as a strategic tool, adoption often stalls at the middle-management level due to operational friction.

  • Patricia Garland, founder of Culture Craft, explains that this hesitation stems from ambiguity and a perceived loss of control, not outright resistance.

  • Garland argues that AI amplifies existing weaknesses and that successful adoption requires clear governance and psychological safety.

  • She recommends a pre-flight check to map decision-making, clarify human override points, and define responsible use before implementation.

Fear around AI almost always surfaces with middle managers first. They’re being asked to deliver productivity gains while navigating capability gaps and unclear expectations around AI-supported work.

Patricia Garland

Founder
Culture Craft

AI adoption rarely stalls at the strategy level. It stalls in the middle of the organization, where managers are expected to deliver results while translating unclear AI mandates into real workflows. As expectations shift and accountability remains high, ambiguity creates a quiet but powerful drag on progress. What often appears to be resistance is usually something else entirely: managers navigating uncertainty about how success will be measured, how decisions will change, and whether experimentation will carry professional risk.

To get to the heart of the challenge, we spoke to Patricia Garland, Founder of Culture Craft, a studio that designs leadership and AI adoption workshops for HR teams. Drawing on her deep experience in workforce stability and emergency response, she believes that one of the most significant pressure points in AI adoption is when leadership tasks middle management with implementing opaque strategies, technologies, and processes.

“Fear around AI almost always surfaces with middle managers first. They’re being asked to deliver productivity gains while navigating capability gaps and unclear expectations around AI-supported work. On the surface, it may look like resistance or hesitation. Underneath, it’s often ambiguity and a perceived loss of control,” says Garland. According to her, this pressure on the management layer comes from feeling exposed, as if their career will hinge on implementing programs they don’t quite understand. The vulnerability is compounded by the fact that middle managers often report the least psychological safety at work. The warning signs are subtle but telling: a quiet withdrawal from pilots, overly cautious compliance, or performance reviews that inadvertently penalize experimentation.

But Garland describes this fear as a symptom of a deeper issue: these employees aren’t sure they can succeed under the given criteria. If they experiment and it impacts their performance, they may worry about their jobs. And for Garland, this challenge is fundamentally an HR problem.

  • An unstable OS: Rushing into AI can make problems worse, not better, unless the organization has clearly defined its strategy and needs. “When organizations start talking about AI before they’ve stabilized their HR fundamentals, that’s usually the first signal. If role clarity is inconsistent, performance standards are uneven, or decision rights are ambiguous, AI will amplify those weaknesses rather than solve them,” Garland explains.

  • Weaknesses, exposed: When HR isn’t clear on how roles should be redesigned or which skills need to grow, AI can exacerbate confusion, creating destabilizing change that erodes trust. “AI doesn’t fix a weak operating model,” Garland explains. “It exposes it.” In contrast, a more effective approach starts with a real business problem. From there, organizations analyze how workflows need to change and what managers need in order to lead differently. Without this groundwork, managers are left to grapple with AI’s operational challenges on their own.

  • Failure to launch: A core problem in launching AI HR projects is treating them as standalone software rather than as solutions that must be integrated into an organizational culture and structure that evolve to meet them. “When pilots stall, it’s usually because the tool was introduced, but nothing around it was redesigned. Expectations didn’t change. Accountability didn’t change. Leaders didn’t model new behaviors. When the technology moves but the system around it stays the same, momentum fades.”

The solution, Garland advises, starts with establishing clear governance before AI influences people’s decisions. This framework serves as a safeguard against the risk of ungoverned Shadow AI and clarifies accountability. She suggests leaders ask themselves: Who ultimately owns the final human judgment? Who is accountable if an AI-informed decision leads to a negative outcome? Answering these questions often requires turning abstract AI principles into a practical, documented corporate AI policy with bias reviews and clear escalation paths.
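To make those questions concrete, a team could encode each AI-touched decision as a documented policy entry. The Python sketch below is purely illustrative: the schema, role names, and review cadence are our assumptions, not details from the interview, but each field maps to one of Garland’s governance questions.

```python
from dataclasses import dataclass, field

@dataclass
class AIDecisionPolicy:
    """One entry in a documented corporate AI policy (illustrative schema)."""
    workflow: str                  # where AI influences a decision
    human_owner: str               # role that owns the final human judgment
    accountable_role: str          # role accountable for a negative outcome
    override_point: str            # where a human can stop or reverse the AI
    escalation_path: list[str] = field(default_factory=list)
    bias_review_cadence: str = "quarterly"  # assumed default cadence

# Hypothetical entry; the workflow, roles, and cadence are placeholders.
screening_policy = AIDecisionPolicy(
    workflow="resume screening",
    human_owner="hiring manager",
    accountable_role="head of talent acquisition",
    override_point="before any candidate is auto-rejected",
    escalation_path=["HR business partner", "AI governance committee"],
    bias_review_cadence="monthly",
)
```

Written this way, the policy stops being an abstract principle: every workflow that AI touches has a named human owner, a documented override point, and an escalation path that can be reviewed like any other control.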

But governance alone isn’t enough. The other half of the equation is building psychological safety. Garland defines operational psychological safety as a concrete set of structured permissions that make experimentation predictable and safe, reinforced by visible leadership behavior.

  • A license to learn: Put simply, Garland argues that for AI projects to be successful, employees have to feel safe to explore and learn. “Operational psychological safety means explicitly stating that testing AI tools won’t negatively impact performance evaluations, defining norms for disclosure of AI use, and creating dedicated forums where teams can surface unintended consequences or failures without penalty.”

So what’s the first practical step? Garland recommends conducting a structured AI readiness review before deploying new tools. Such a review focuses on building a strong and resilient foundation for successful AI adoption. Map where AI will alter decision-making, clarify human override points, identify capability gaps within the management layer, and define what responsible use actually means in your context. “Preparation isn’t about slowing adoption,” she concludes. “It’s about ensuring that adoption strengthens performance rather than destabilizing it.”
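For teams that want that pre-flight check to live as a reviewable artifact rather than a slide, here is a minimal sketch in Python. The four questions come directly from Garland’s list; the pass/fail structure, and the idea of treating open items as deployment blockers, are our assumptions.

```python
# Garland's pre-flight check, expressed as a blocking checklist.
READINESS_CHECKS = {
    "decision_mapping": "Have we mapped where AI will alter decision-making?",
    "override_points": "Is every human override point clarified and owned?",
    "capability_gaps": "Have we identified capability gaps in the management layer?",
    "responsible_use": "Have we defined what responsible use means in our context?",
}

def readiness_review(answers: dict[str, bool]) -> list[str]:
    """Return the questions that still block deployment (failed or unanswered)."""
    return [
        question
        for check, question in READINESS_CHECKS.items()
        if not answers.get(check, False)
    ]

# Hypothetical review state: two items are done, two still block launch.
for item in readiness_review({"decision_mapping": True, "override_points": True}):
    print("Blocked:", item)
```

The point is not the code but the discipline it enforces: adoption proceeds only once every item on the list has a documented answer.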
