No target, no win: why successful AI projects start at the finish line and work backwards

Credit: Outlever

Key Points

  • Companies often rush into AI projects without clear goals, leading to wasted resources and failed initiatives.

  • Amanda Hagman of Atana emphasizes starting AI projects with specific business outcomes to ensure success.

  • Quality assurance has grown harder as AI tools spread beyond experts; fluent-sounding outputs make subtle errors easy to overlook.

  • Effective AI project management involves continuous oversight and risk assessment to prevent costly mistakes.

If your AI project doesn’t start with a clearly defined endpoint or land on a KPI, it’s already starting off course. When you focus on just what a cool, shiny tool can do, you miss your target because you weren’t ever aiming at it to begin with.

Amanda Hagman, PhD

Chief Scientific Officer
Atana

In the race to adopt AI, most companies are sprinting ahead without a clear finish line in sight. The result is often chaos: guided by hype instead of KPIs, they pour resources into initiatives that are destined to go nowhere.

Dr. Amanda Hagman is the Chief Scientific Officer at Atana, a corporate training platform focused on measurable behavior change. She’s built her approach around a simple principle: the most successful AI projects start with the outcome and work backward.

Shooting blind: Hagman’s core advice flips the typical project plan on its head. She argues that success depends on backwards planning, where every initiative begins with a specific business outcome. “If your AI project doesn’t start with a clearly defined endpoint or land on a KPI, it’s already starting off course,” she says. “When you focus on just what a cool, shiny tool can do, you miss your target because you weren’t ever aiming at it to begin with.”

Connecting the dots: The real power of this approach emerges when AI is aimed at problems that require a bird’s-eye view. It’s one thing to analyze customer support tickets; it’s another to see how they correlate with sales call transcripts and product usage data. “It’s easy to analyze those independently, but it’s hard to bring them together,” says Hagman. AI excels at spotting the patterns that emerge across these silos—the kind of holistic insight that was once the exclusive domain of expert consultants.
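To make the idea concrete, here is a minimal sketch, in Python, of the kind of cross-silo analysis Hagman describes: three hypothetical extracts joined on a shared account ID so their signals can be compared side by side. The column names and numbers are illustrative assumptions, not Atana's pipeline.

```python
# Hypothetical sketch: surfacing patterns across three data silos.
# Column names and values are invented for illustration.
import pandas as pd

tickets = pd.DataFrame({
    "account_id": [1, 2, 3, 4],
    "ticket_count": [12, 3, 9, 1],
})
calls = pd.DataFrame({
    "account_id": [1, 2, 3, 4],
    "negative_call_ratio": [0.40, 0.10, 0.35, 0.05],
})
usage = pd.DataFrame({
    "account_id": [1, 2, 3, 4],
    "weekly_usage_hours": [2.0, 14.5, 3.5, 18.0],
})

# Bring the silos together on a shared key...
merged = tickets.merge(calls, on="account_id").merge(usage, on="account_id")

# ...so relationships invisible in any single source become measurable.
print(merged.drop(columns="account_id").corr())
```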

The QA process has actually gotten harder now that AI is ubiquitous. When outputs sound fluent or function as expected, it is easy to miss when something isn’t correct. Without intentional validation steps, even experienced teams can miss subtle errors that look right, but aren’t.

Dr. Amanda Hagman

Chief Scientific Officer
Atana

Right under your nose: When AI tools were the province of practitioners with a deep grasp of the underlying algorithms, problems could be caught in real time. Now that anyone can use them, spotting what has gone wrong has never been harder. “The QA process has actually gotten harder now that AI is ubiquitous,” Hagman warns. “When outputs sound fluent or function as expected, it is easy to miss when something isn’t correct. Without intentional validation steps, even experienced teams can miss subtle errors that look right, but aren’t.”
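What an intentional validation step can look like in practice: a minimal sketch in which a fluent-sounding model response is not accepted until it passes explicit, machine-checkable rules. The expected JSON fields and ranges here are assumptions made up for the example.

```python
# Hypothetical validation step: fluent output is rejected unless it
# satisfies explicit checks. The schema is invented for illustration.
import json

def validate_summary(raw_output: str) -> dict:
    """Expect JSON with a non-empty 'summary' string and a
    'confidence' number in [0, 1]; raise on anything else."""
    record = json.loads(raw_output)  # raises ValueError on malformed JSON

    summary = record.get("summary")
    if not isinstance(summary, str) or not summary.strip():
        raise ValueError("missing or empty 'summary' field")

    confidence = record.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        raise ValueError("'confidence' must be a number in [0, 1]")

    return record

# Reads as plausible, but fails the range check on 'confidence'.
try:
    validate_summary('{"summary": "Churn is down 40%", "confidence": 1.7}')
except ValueError as err:
    print("rejected:", err)
```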

No rhyme or reason: “AI models aren’t reasoning like humans. They’re high-powered pattern recognizers,” explains Hagman. “A large language model might feel like it’s finishing your sentences or thoughts, but really, it’s predicting what’s most likely to come next based on patterns in huge datasets. That’s why you can’t just launch a model and walk away.” Systems must be in place to validate outputs, monitor for drift, and intervene when results veer off course.
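Monitoring for drift does not require exotic tooling. One common approach, sketched here on synthetic data, is to compare the distribution of live model scores against a reference window using a population stability index (PSI); the 0.2 alert threshold is a widely used rule of thumb, not a universal constant.

```python
# Hypothetical drift monitor: compare live scores to a reference
# window with a population stability index (PSI). Data is synthetic.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor the proportions to avoid division by zero and log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.60, 0.10, 5_000)  # score distribution at launch
live = rng.normal(0.45, 0.15, 5_000)       # this week's scores, shifted

score = psi(reference, live)
print(f"PSI = {score:.3f}", "-> investigate" if score > 0.2 else "-> stable")
```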

Plan now or pay later: Ultimately, managing risk isn’t a separate step; it’s woven directly into the fabric of backwards planning. By starting with a clear destination, teams can map the entire journey and anticipate where the wrong turns might happen. “You have to be deliberate and identify where the risks lie in your pipeline,” Hagman advises. “You have to understand how impactful a wrong turn will be, because that tells you exactly what to tackle first.” It’s the difference between building guardrails from the start and only realizing you need them after you’ve already gone off course.
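One way to operationalize that triage, sketched with invented stages and scores: rate each step of the pipeline on how likely it is to fail and how costly a failure would be, then put guardrails where the expected impact is highest.

```python
# Hypothetical risk map for an AI pipeline. Stages, likelihoods, and
# cost scores are illustrative, not a prescribed taxonomy.
risks = [
    # (stage, likelihood of failure 0-1, cost of failure 1-10)
    ("data ingestion",   0.30, 6),
    ("model output QA",  0.50, 9),
    ("drift over time",  0.40, 7),
    ("KPI misalignment", 0.20, 10),
]

# Tackle the highest expected-impact risks first.
for stage, likelihood, cost in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{stage:16s} expected impact = {likelihood * cost:.1f}")
```

However crude the scoring, the exercise forces the map of wrong turns to be drawn before the journey begins rather than after.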
