
AI doesn’t fail because of the model. It fails in the workflow.

Enterprise AI succeeds when product leaders focus less on model debates and more on workflow fit, trust, enablement, and the systems that make adoption real.

AI Strategy · Operating Model · Product Leadership


Most AI strategies fail before they start

Most AI strategies fail before they start because they turn a technology choice into the strategy. Teams spend weeks comparing model quality, vendor roadmaps, and benchmark scores before they have identified the actual work that needs to improve.

In enterprise settings, that is backwards. The model matters, but it is rarely the main reason something succeeds or fails. The real questions are where AI fits in the workflow, what people will trust, how risk is handled, and what has to be true for usage to become repeatable instead of theatrical.

What starting with the model actually means

Starting with the model is not just arguing about GPT versus Claude. It is the broader habit of making the technology choice the center of the strategy before the workflow, measurement plan, trust requirements, and operating model are clear.

That approach feels concrete, but it usually postpones the harder product decisions. Models can be swapped. Bad workflow fit, weak review design, fuzzy ownership, and missing adoption systems are much harder to repair after people have already formed habits around the wrong setup.

Start with real work

When I helped lead early ChatGPT rollout work in the contact center, the starting question was simple: where are agents spending time, where is quality inconsistent, and what could we improve safely inside a real workflow?

That led us toward concrete jobs like pre-call preparation, drafting, and other context-heavy tasks with clear operational value. Starting there made the pilot measurable, grounded, and safe enough to learn from. Small pilots are useful when they reveal something durable: which moments benefit from AI, where human review has to stay, what prompt patterns hold up, and which workflows are actually worth scaling.

Adoption is the product

Shipping access is not the same as shipping adoption. Usage grows when people trust the output, understand the boundaries, and can see how AI helps them do real work faster or better.

As ChatGPT Champions Lead, I treated enablement as part of the product, not as launch support. That meant reusable prompting patterns, training, examples, feedback loops, and a champions network that helped strong workflows spread across the organization. Champions were not a side program. They were part of how adoption became real.
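A reusable prompting pattern can be as simple as a named template with declared context requirements, so champions share a tested structure instead of ad-hoc prompts. This is an illustrative sketch only; the names here (`PromptPattern`, `pre_call_prep`) are hypothetical, not anything from the rollout described above.

```python
from dataclasses import dataclass, field

@dataclass
class PromptPattern:
    """A shareable prompt template with explicit context requirements."""
    name: str
    instructions: str                      # the stable, tested part of the prompt
    required_context: list[str] = field(default_factory=list)

    def render(self, **context: str) -> str:
        # Fail loudly if the user skips context the pattern depends on.
        missing = [k for k in self.required_context if k not in context]
        if missing:
            raise ValueError(f"missing context: {missing}")
        ctx = "\n".join(f"{k}: {v}" for k, v in context.items())
        return f"{self.instructions}\n\n{ctx}"

# A hypothetical pre-call preparation pattern.
pre_call_prep = PromptPattern(
    name="pre-call-prep",
    instructions="Summarize this customer's history and suggest talking points.",
    required_context=["account_summary", "open_tickets"],
)

prompt = pre_call_prep.render(
    account_summary="Enterprise plan, renewed in March.",
    open_tickets="One open billing dispute.",
)
```

The point of the declared `required_context` list is that the pattern encodes what made it work, so it survives being passed from one team to another.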

Governance belongs in the design

In real enterprise AI work, governance is not an afterthought. Legal review, security boundaries, data handling rules, approval paths, and risk tiers all shape what the system can responsibly do.

The goal is not to add control for its own sake. It is to design guardrails that are clear enough for leadership to trust and practical enough for teams to use. If governance lives outside the workflow, people route around it. If it is part of the workflow, scaling becomes far more realistic.
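One way guardrails can live inside the workflow is as a routing step the request must pass through, rather than a checklist reviewed after the fact. A minimal sketch, with invented tier names and rules for illustration only, not a real policy:

```python
# Hypothetical risk tiers: each tier says what data it may touch
# and whether a human must review the output before it ships.
RISK_TIERS = {
    "public":    {"needs_review": False, "allowed_data": {"public"}},
    "internal":  {"needs_review": False, "allowed_data": {"public", "internal"}},
    "regulated": {"needs_review": True,  "allowed_data": {"public", "internal"}},
}

def route_request(tier: str, data_classes: set[str]) -> str:
    """Return the next step for an AI request, or block it."""
    policy = RISK_TIERS.get(tier)
    if policy is None:
        return "blocked: unknown risk tier"
    if not data_classes <= policy["allowed_data"]:
        return "blocked: data classification not allowed for this tier"
    return "human_review" if policy["needs_review"] else "auto_approved"
```

Because the policy is a data structure the workflow consults on every request, legal and security can change the rules without anyone routing around them.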

The real product layer

The most important AI product work usually sits in the middle layer between the model and the business system. This is where prompt patterns, tooling, integrations, context handling, permissions, feedback loops, and review processes come together.

The model generates an output. The middle layer determines whether that output has the right context, reaches the right tool, follows the right review path, and lands inside a workflow a business can actually rely on.

That is why enterprise AI work starts to look much closer to systems product work than prompt experimentation. The product is not the model alone. The product is the system around the model. If that layer is weak, even a strong model produces brittle adoption. If that layer is well designed, model improvements become much easier to absorb.
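The middle layer described above can be sketched as a pipeline in which the model call is deliberately one small step among permissions, context assembly, and review routing. Every function and field name here is a hypothetical stand-in, not a real system:

```python
from typing import Callable

def run_workflow(
    user: dict,
    task: str,
    fetch_context: Callable[[str], str],
    call_model: Callable[[str], str],
    needs_review: Callable[[str], bool],
) -> dict:
    # 1. Permissions: does this user get this workflow at all?
    if task not in user.get("allowed_tasks", []):
        return {"status": "denied", "output": None}
    # 2. Context: the model only sees what the layer assembles for it.
    prompt = f"{task}\n\nContext:\n{fetch_context(task)}"
    # 3. Model call: swappable, and deliberately a small part of the system.
    draft = call_model(prompt)
    # 4. Review path: route risky outputs to a human before they land.
    status = "pending_review" if needs_review(draft) else "approved"
    return {"status": status, "output": draft}

result = run_workflow(
    user={"allowed_tasks": ["draft_reply"]},
    task="draft_reply",
    fetch_context=lambda t: "Customer asked about a refund.",
    call_model=lambda p: "Here is a draft reply about the refund...",
    needs_review=lambda d: "refund" in d,  # toy rule: money talk gets review
)
```

Notice that swapping `call_model` for a better model changes one line; redesigning the permissions, context, or review path changes the system, which is the asymmetry the essay is pointing at.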

My operating principle

I keep coming back to the same sequence: start with the business outcome, map the workflow, then design the system around the model.

The model still matters. It is just not the center of gravity. If the workflow is weak, trust is low, or the operating layer is missing, a better model will not save the rollout.

What actually sticks

Enterprise AI becomes real when the work gets better, the boundaries are clear, and people know how to use the system with confidence.

The model can win the demo. The workflow decides whether anything actually sticks.

Contact

If this point of view feels aligned, let's talk.

The essays are here to make the operating model visible, not to pad the portfolio. Happy to go deeper in a conversation.