How leaders can harness transparency to balance AI fear and excitement

Credit: Outlever

Key Points

  • Leaders face a two-sided challenge in AI adoption: managing fearful “holdouts” and guiding “enthusiasts” who risk using unsanctioned “Shadow AI.”

  • Dr. Jeffrey Allan, Director of the Institute for Responsible Technology and Artificial Intelligence at Nazareth University, argues that transparency is the critical factor for success.

  • Proactively sharing the “why” behind an AI strategy reframes risk, eases fear, and can transform skeptics into advocates.

  • Allan stresses that creating a culture of trust through continuous communication is the real competitive advantage.

If we’re not transparent with our workforce, they’re going to make up their own narrative. If we get in front of that by explaining what our motivations are, what the benefits are, and what our plans are overall, it’s going to lay the groundwork and make it easier for adoption to take place successfully.

Dr. Jeffrey Allan

Director of the Institute for Responsible Technology and Artificial Intelligence
Nazareth University

Leaders bringing AI into the enterprise are juggling two very different challenges. On one side stand the holdouts, cautious employees whose resistance can drag down innovation. On the other are the enthusiasts, racing ahead with unsanctioned “Shadow AI” experiments that put security and privacy at risk. The missing ingredient is not another slide deck or strategy document; it’s transparency. Without a clear, people-first story from leadership, employees will invent one of their own.

The lesson comes from Dr. Jeffrey Allan, Director of the Institute for Responsible Technology and Artificial Intelligence at Nazareth University. His career spans founding two Silicon Valley startups, authoring the bestseller Writing AI Prompts for Dummies, and serving as a U.S. Marine before earning his Ph.D. in International Business. With that mix of entrepreneurial, academic, and leadership experience, Allan argued that transparency is the critical factor that determines whether AI adoption succeeds or stalls.

  • Owning the narrative: In moments of uncertainty, Allan explained, employees tend to default to the worst-case scenario, often rooted in fears of job loss. “If we’re not transparent with our workforce, they’re going to make up their own narrative,” he said. “If we get in front of that by explaining what our motivations are, what the benefits are, and what our plans are overall, it’s going to lay the groundwork and make it easier for adoption to take place successfully.”

The first step, Allan advised, is to run an audit to see who is already using AI and how. That baseline shows both the skills on hand and how open the team is to new tools. More often than not, the audit uncovers a pocket of holdouts, employees whose hesitation comes from a fear that feels far more immediate than past tech shifts. Allan stressed that this fear is not only real but reasonable, admitting he would feel the same if his own role seemed at risk. The antidote, he said, is to “win hearts and minds” by framing AI as a tool for empowerment. Done well, that shift in framing can turn skeptics into unexpected champions.

  • From holdout to advocate: “I sat with a CEO of a mid-sized company about two weeks ago, and he told me his invoicing person was one of the strongest holdouts. She resisted AI because she worried automation might push her out of a job. But after the first few automations were introduced in her area, she came back asking when the rest could be automated. She was thrilled to leave the drudgery behind.”

  • Reframing the risk: “We’re at a point where if you don’t use AI, you’re more likely to be replaced by someone who can use AI,” he said. “That’s the reality of where we are at the moment.” It’s a stark reminder that the bigger danger may not be machines taking jobs, but people falling behind.
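Allan’s audit step doesn’t require special tooling. As a purely illustrative sketch (the survey file and column names below are hypothetical assumptions, not something Allan prescribes), even a short script can turn an anonymous usage survey into the baseline he describes:

```python
# Hypothetical sketch of the audit step: tally an anonymous survey to see
# who is already using AI, with which tools, and how they feel about it.
# The file name and column names ("stance", "tools_used") are assumptions.
import csv
from collections import Counter

def summarize_ai_usage(path: str) -> None:
    stances = Counter()  # e.g. "holdout", "curious", "enthusiast"
    tools = Counter()    # e.g. "ChatGPT", "Copilot", "none"
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            stances[row["stance"].strip().lower()] += 1
            for tool in row["tools_used"].split(";"):
                if tool.strip():
                    tools[tool.strip()] += 1
    print("Stance breakdown:", dict(stances))
    print("Tools already in use:", dict(tools))

summarize_ai_usage("ai_usage_survey.csv")
```

Even a tally this crude surfaces the two populations the rest of the strategy has to address: the holdouts who need reassurance and the enthusiasts who need guardrails.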

On either side—whether it’s calming the holdouts or reining in the enthusiasts—transparency is what makes the difference. When people understand your plans and the reasons behind them, you are far more likely to win buy-in across the organization.

Dr. Jeffrey Allan

Director of the Institute for Responsible Technology and Artificial Intelligence
Nazareth University

Transparency is also critical for managing the enthusiasts operating in what Allan called “frontier mode,” eager to adopt the latest and greatest AI tools without weighing the potential risks. When leadership imposes guardrails without explanation, these motivated employees may assume the company is simply afraid of innovation. Allan pointed to the cautionary tale of Samsung engineers who learned the hard way that uploading technical specifications into a public model puts them beyond the company’s control: once submitted, the data may be retained and used to train the model. By proactively explaining the “why” behind the rules, leaders can foster buy-in and prevent resentment.

  • A strategy for both sides: “It helps employees feel they are being treated as responsible adults who can be trusted with this type of information, and they value that transparency. On either side—whether it’s calming the holdouts or reining in the enthusiasts—transparency is what makes the difference. When people understand your plans and the reasons behind them, you are far more likely to win buy-in across the organization.”
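What might a transparent guardrail look like in practice? One illustrative possibility (a sketch of the general idea, not a tool Allan describes) is a pre-submission check that flags likely-sensitive text before it reaches a public model and, just as importantly, tells the employee why:

```python
# Illustrative guardrail sketch: flag likely-sensitive text before it is
# pasted into a public AI tool, and explain the reason to the user.
# The patterns are simplistic stand-ins for a real data-loss-prevention policy.
import re

SENSITIVE_PATTERNS = {
    "possible API key": re.compile(r"\b[A-Za-z]{2,4}-[A-Za-z0-9]{16,}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "internal marker": re.compile(r"(?i)\b(confidential|internal only|proprietary)\b"),
}

def review_prompt(text: str) -> list[str]:
    """Return human-readable reasons a prompt should not leave the company."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

reasons = review_prompt("Attaching our CONFIDENTIAL spec; key is sk-a1b2c3d4e5f6g7h8i9j0")
if reasons:
    print("Flagged before upload:", ", ".join(reasons))
```

Pairing the block with its reason is the transparency move: the employee learns the policy instead of just hitting a wall, which is exactly the “responsible adults” treatment Allan describes.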

Once that initial buy-in is achieved, transparency must evolve from a top-down announcement into a self-sustaining cultural practice. This is where “evangelists” become critical: employees whose passion for the technology is contagious. Allan said that the true value of these evangelists lies in their personality and curiosity, qualities he considered more elusive, and more essential for driving peer-to-peer adoption, than teachable technical skills. He noted that these champions can come from surprising places, debunking common stereotypes about age and tech-savviness. This peer-driven enthusiasm must be supported by a formal, ongoing commitment from leadership to maintain momentum.

  • A cultural commitment: “Don’t treat AI adoption as a one-time rollout. Hold regular monthly meetings to review progress, share lessons learned, and gather feedback from staff on how to improve. The people using AI in their daily work are often the ones who can spot gaps and opportunities that managers might miss.”

The real payoff for transparency is trust. When employees believe in leadership’s intentions, adoption speeds up, anxieties calm down, and enthusiasm finds a safe outlet. As technology races ahead, the gap between adopters and laggards will only grow, making continuous development non-negotiable. The tech itself is just table stakes; the true competitive edge comes from investing in a culture of trust.

