AI Transformation
AI activity and speed don't automatically turn into impact
If your operating model can't absorb that speed and leverage it to build great products and experiences, you will see mediocre returns on AI investment. That's why 95% of organizations are seeing zero financial return from GenAI. I help leaders build organizations that can run at the speed of AI.
The Evidence
This isn't a few bad pilots. It's a systemic pattern.
The Real Problem
You don't have an AI strategy problem. You have an AI operating model problem.
What most organizations are doing
Tool rollouts. Vendor selection. Prompt training. Use-case lists. Top-down strategy decks. Disconnected pilots. This creates activity, not reliable value.
What's actually going on
AI initiatives behave like product development — uncertainty, iteration, learning loops. But organizations still manage them like traditional projects. Decision latency kills momentum. Funding friction starves promising bets. Nobody owns the business outcome — only the technical experiment.
This is what I call AI Theater: visible motion, impressive demos, zero value realization. The organization looks busy with AI. Leadership keeps asking "what did you do with my money?" And nobody has a good answer.
Sound Familiar?
Which of these is slowing you down?
These are the patterns I see most often in organizations spending real money on AI with little to show for it.
Your AI features and demos look great. But they don't get adopted, don't move the needle, and nobody is asking whether they actually create customer or business value.
Leadership changes AI priorities every quarter. Goals set in January are irrelevant by April. Teams work hard but pull in different directions because there's no lightweight way to adapt.
You have 20 AI initiatives. Nobody can say which ones matter. No shared language for stage, confidence, or evidence. No practical way to decide what deserves more investment and what should stop.
You bought AI tools and mandated rollout. Usage is patchy. The workflow hasn't actually changed. You can mandate usage, but you can't mandate value.
A Different Operating Model
What I see in the organizations that are getting real value
The ~10% getting significant returns from AI aren't doing more AI
They're managing AI investments differently. After 20 years working with organizations on product operating models and agility — and seeing the same patterns now playing out with AI — here's what separates the winners.
They see AI investments as a portfolio, not a pile of projects
They have a practical view across all AI bets — not just a list of use cases. They can see what's in flight, what's risky, and where the real leverage sits.
They tier ruthlessly
Only the few bets that could move the business get portfolio-level steering. Everything else runs with lightweight guardrails. Most organizations give everything the same attention, which means nothing gets enough.
They de-risk before they build
They use discovery to buy information before making major commitments. They treat early investment as buying options, not placing bets.
They share a language for confidence and evidence
Leadership, product, engineering, and finance all use the same language for outcomes, hypotheses, evidence, and commitment. No more "the tech team says it's going well" while the board asks "what did you do with my money?"
They earn adoption instead of mandating it
They treat internal adoption as a product challenge — designed around workflow value, not pushed through rollout plans. If people aren't pulling for it, the problem is the product, not the people.
They give sponsors something they can actually act on
Leadership has practical visibility into what should scale, what should stop, and where confidence is growing — without adding ceremony or reporting overhead.
Questions I Get
Things leaders ask when this resonates
We've already spent a lot on AI. Is the answer to slow down?
Not slow down — get intentional. Most AI portfolios have too many bets running at the same depth. The fastest path to value is usually fewer bets with more depth, better ownership, and clearer funding logic. That's the shift from FOMO to JOMO — from reacting to every AI announcement to making deliberate bets where you have real leverage.
How is this different from the AI strategy work we already did?
Most AI strategy produces a roadmap of what to build. The gap is rarely the roadmap. It's the operating model: how bets get funded, who owns outcomes vs. experiments, how decisions get made when the data isn't conclusive, and whether teams can actually adopt what gets built. That's the layer I work at.
Our problem is technical — we need better models and data infrastructure.
Sometimes. But in most organizations I work with, the AI works well enough in demos. The breakdown happens moving from demo to production, from one team to cross-functional, from experiment to business outcome. If your POCs keep impressing but never ship, the bottleneck probably isn't the model.
We're a mid-market company, not an enterprise. Is this relevant?
Mid-market organizations often have a real advantage: shorter decision chains, less political friction, more ability to move quickly. But they also have less room for expensive mistakes. The organizations I typically work with range from a few hundred to a few thousand people — big enough to have the operating model problem, small enough to actually fix it.
What does working together actually look like?
It depends on where you are. Some teams need help seeing their AI investments as a portfolio and deciding what matters. Others know the problem but need help changing how AI work gets framed, owned, and steered. I'd rather understand your situation first than describe a standard engagement. That's what the conversation below is for.
Let's figure out what's actually stuck
Bring your current AI situation: what you've invested in, what's working, and what still feels stuck. No pitch, no preset framework. I'll share what I see and you'll leave with a clearer picture of where the real leverage is.