Bridging the Modern Workplace Trust Gap with Under Armour Sr. Director Sunita Braynard

Credit: Outlever

Key Points

  • HR leaders face a new challenge as teams shift between AI skepticism and blind trust.

  • Sunita Braynard, Sr. Director of Global Compensation and Mobility at Under Armour, explains that AI should support rather than replace human judgment.

  • She advises leaders to build balanced trust through small experiments for the cautious and stronger verification habits for the overconfident.

I think the trust gap comes from how close you are to the work. The more you live in the details, the more you see where AI helps and where it falls short. Some people hesitate because they’re fearful or simply haven’t used it, and for them, the answer is exposure. But others trust it too quickly, relying on whatever it produces without checking the data or understanding the process, and that’s just as risky.

Sunita Braynard

Senior Director, Global Compensation & Mobility
Under Armour

For HR leaders, the era of simple AI rollouts is over. The pressure to adopt has been replaced by a tougher challenge: teaching balance. Teams need to learn how to trust the technology without surrendering to it, how to stay curious without getting careless, and how to know when a machine’s answer needs a human’s second look.

Sunita Braynard approaches data like a conversation, not a calculation. As Senior Director of Global Compensation and Mobility at Under Armour, she helps executives make choices that connect business strategy with human reality. With previous director roles at PayPal and Novavax Inc., she has seen how fast work can evolve, and she now helps leaders find the balance between human judgment and technological speed.

“I think the trust gap comes from how close you are to the work. The more you live in the details, the more you see where AI helps and where it falls short. Some people hesitate because they’re fearful or simply haven’t used it, and for them, the answer is exposure. But others trust it too quickly, relying on whatever it produces without checking the data or understanding the process, and that’s just as risky,” says Braynard.

  • Perspective from proximity: Braynard suggests the real divide is less about demographics and more about an employee’s proximity to the details of the job. “The closer you are to detailed analytical work, the easier it is to see where AI makes mistakes or misses context. Those focused on higher-level strategy often see only the polished output, not the process behind it, so they may not recognize its limits,” she says.

  • Human at the helm: According to her, responsible AI use begins with remembering who’s in charge. AI is a powerful assistant, but it’s still just an assistant. “Some people don’t trust AI because they’re afraid it will replace their jobs, and that fear isn’t unfounded. It is happening, whether we talk about it or not. The answer is not denial or resistance but upskilling, showing how these tools can make us more efficient and improve performance and outcomes,” Braynard explains. “I encourage them to open their minds to the possibilities.”

That optimism comes with a reality check. Curiosity is a good start, but trust has to be earned. Braynard believes confidence with AI begins when people stop treating it like magic and start treating it like any other coworker whose work needs to be reviewed. She calls this skill “validation literacy,” the habit of checking the work before reaching a conclusion.

  • Trust, but verify: “When I use an AI tool for research, I always check the sources it’s drawing from and read them myself to see if they’re relevant and up to date. In analysis, I’ll run a sample check to confirm the results match what I’d get doing it by hand. It’s about understanding where the numbers come from before trusting what they say.”
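
That habit can be made concrete. As a minimal sketch of the kind of sample check Braynard describes, the snippet below recomputes a few group averages from raw data and compares them against what an AI tool reported. The data, column names, and tolerance are hypothetical illustrations, not details from her actual workflow:

```python
import random

def sample_check(raw_rows, ai_results, key, value, n=5, tol=0.01):
    """Spot-check up to n AI-reported group averages against a manual recompute."""
    groups = random.sample(list(ai_results), k=min(n, len(ai_results)))
    mismatches = []
    for group in groups:
        values = [row[value] for row in raw_rows if row[key] == group]
        manual_avg = sum(values) / len(values)  # recompute "by hand"
        if abs(manual_avg - ai_results[group]) > tol:
            mismatches.append((group, manual_avg, ai_results[group]))
    return mismatches  # an empty list means the sample agreed

# Hypothetical usage: salary rows grouped by job level.
rows = [
    {"level": "L1", "salary": 60000}, {"level": "L1", "salary": 64000},
    {"level": "L2", "salary": 85000}, {"level": "L2", "salary": 91000},
]
ai_output = {"L1": 62000.0, "L2": 88000.0}  # what the AI tool reported
print(sample_check(rows, ai_output, key="level", value="salary"))  # -> []
```

An empty result means the sampled values agreed with the tool's output; any mismatch is the signal to dig into the data and the process before trusting the rest of the analysis.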

Her takeaway is simple: leaders need to find balance by addressing both the skeptics and the overconfident. For those hesitant to use AI, familiarity is the cure. Braynard recommends small, low-stakes experiments to demystify the technology and build AI confidence. And for those who rush in too quickly, she says the answer is reflection. Teams need to slow down, question the output, verify the data, and understand what the tool can and cannot do.

  • Confidence through Copilot: “Small steps are always good steps toward more adoption. For example, people can use tools like Google or Copilot for the work they’re already doing. This helps them get to a point of understanding both the capabilities and the limitations of the application,” Braynard explains.

  • A pause for the reckless: But for the employees who are too eager to trust AI, Braynard says it’s time for a reality check. “Those who want to trust AI blindly need to pause. They need to look at the data and the analysis being done behind it so that they’re 100% sure of the accuracy of the results.”

For Braynard, the real solution lives in culture, not code. She believes trust grows faster through people than through programs. Most hesitation, she says, comes from not knowing what’s possible, and nothing builds confidence like seeing it firsthand.

“Having people who are using AI talk to people who are not helps ease fear, because sometimes the issue isn’t just fear, it’s that people don’t know what they don’t know. When we see trusted peers or communities using these tools to achieve real outcomes, adoption becomes much easier,” she concludes.
