
AI is ready, but the organization isn't.

On managed agents, resistant employees, and the seven frictions no one talks about.

Hiya 👋

I've been noticing a pattern in the companies I talk to. The tools are genuinely impressive now. The demos are real. And yet most companies are stuck in the same place they were a year ago: a handful of enthusiastic users, a few experiments that didn't scale, and a leadership team that can't figure out why nothing is sticking.

This week three things landed that all point at the same answer. The gap isn't the model. It's everything around it.

Let's get into it 👇

1. Anthropic just changed what "running an AI agent" means 🤖

Anthropic launched Claude Managed Agents this week in public beta. It's easy to miss what's new here.

Building an AI agent that runs a real, multi-step workflow has required months of infrastructure work: containers, state management, error recovery, tool orchestration. Companies weren't doing it because the plumbing was too expensive to build and too fragile to maintain. Managed Agents abstracts all of that away. You define the task, the tools the agent can use, the guardrails, and how deep it can go. Anthropic handles the rest.

The harness is the center. You bring the task and the guardrails. Anthropic handles everything around it.
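To make "you bring the task, tools, and guardrails; the harness owns the rest" concrete, here's a minimal sketch of what a harness does under the hood. Every name here is made up for illustration; this is not Anthropic's API, just the division of labor the announcement describes.

```python
# Hypothetical sketch of what an agent harness manages for you.
# None of these names come from Anthropic's API. The point is the split:
# you supply the task, the tools, and the guardrails; the harness owns
# the loop, the state, and the error recovery.

def lookup_contract(contract_id):
    """A stand-in tool the agent is allowed to call."""
    fake_db = {"C-101": {"renewal_date": "2025-09-01", "risk": "high"}}
    return fake_db.get(contract_id, {})

def run_agent(task, tools, max_steps=5):
    """The 'harness': loops the agent, enforces the depth guardrail,
    and recovers from tool errors instead of crashing the workflow."""
    state = {"task": task, "findings": [], "steps": 0}
    for _ in range(max_steps):          # guardrail: how deep it can go
        state["steps"] += 1
        try:
            result = tools["lookup_contract"]("C-101")
        except Exception as err:        # error recovery lives here, not in your code
            state["findings"].append({"error": str(err)})
            continue
        if result.get("risk") == "high":
            state["findings"].append(result)
            return state                # task complete: renewal risk flagged
    return state

report = run_agent(
    task="Flag contracts at renewal risk",
    tools={"lookup_contract": lookup_contract},
)
```

Building and maintaining that loop reliably, with real containers and real state, is the months of plumbing the announcement is abstracting away.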

This is the piece most mid-market companies have been missing. Not "can AI do this?" The answer to that has been yes for a while. The question was always: how do we run it reliably, at scale, without a dedicated engineering team to babysit it? Early customers include Notion, Asana, and Rakuten, all of whom had agents in production in under a week.

For the clients I work with, this is specifically about long-running work. A workflow that pulls contract data and flags renewal risk. A process that runs every week instead of when someone remembers. The problem was never the model. It was the harness. You can't just say "run an analysis on this" and hope a chat interface knows whether to come back in ten minutes or ten hours. That control is now available without an engineering team to build it.

If you've been waiting for agents to be practical for a company your size, the infrastructure excuse is gone.

📖 Read the announcement → Claude Managed Agents

๐Ÿฆ See the launch โ†’ Claude on X

2. Nearly a third of your employees are actively working against your AI rollout 😏

Wharton published new research this week with a number that should stop any ops leader mid-sentence.

What's the story? 31% of U.S. knowledge workers admit to actively working against their company's AI initiatives. Among Gen Z, it's 41%. Meanwhile, 85% of leaders use gen AI regularly. Only 51% of workers do. The gap between what leadership thinks is happening and what's really happening on the ground is wider than most rollout dashboards will ever show.

The researchers point to something specific: unaddressed fears around competence, autonomy, and relevance. Not laziness. Not technophobia. The question employees are asking is "where do I fit when this gets faster than me?" and most companies have never given them a real answer.

The typical AI program is: buy the tool, run a webinar, track logins. None of that touches the real question. And so you get what Wharton calls performative adoption: people appear to comply, the metrics look fine, and nothing changes in how work gets done. Three months later the same cluster of enthusiastic early adopters is getting all the value, and everyone else is doing the work the old way.

If you run a small or mid-market company: Before your next rollout, ask whether you've defined what each person's role looks like after this change. Not "will you lose your job" in the abstract, but specifically: what does this person do more of, and what do they hand off? That clarity converts skeptics faster than any training session.

📖 Read the piece → AI Adoption Is a Challenge. Here's a Solution.

3. The 7 reasons AI transformations stall (and none of them are the model) 🧱

HBR published a framework worth keeping. A Microsoft exec and a Harvard researcher identified what they call the last mile problem: the specific frictions that keep companies stuck between "we have the tools" and "the tools are changing how we work."

The seven frictions they name: process debt, tribal knowledge, the productivity paradox, pilot proliferation, the efficiency trap, agentic governance gaps, and architectural complexity. None of these are model problems. They're all organizational ones.

The efficiency trap is the one I keep seeing most. It's easy to automate a broken process. You get speed without improvement, and now you have a faster version of the thing that was wrong to begin with. Every company I walk into has at least one of these: a workflow that was automated before anyone asked whether it should exist in the first place. The right order is redesign first, automate second. Most companies do it the other way around, usually because redesigning is harder and automation feels like progress.

The other thing the piece gets right: pilot proliferation. Dozens of experiments, none embedded. When I hear a company say "we have a lot of AI going on," that usually means no one is accountable for any of it making it into production. It's activity without outcomes.

The piece is useful because it gives names to things leaders already feel but can't articulate. When you can name the friction, you can prioritize it.

📖 Read the piece → The Last Mile Problem Slowing AI Transformation (paywalled)

4. What it really looks like when the gap closes 🎙️

I sat down with Kevin Crane on the Digital Transformation Podcast to talk through what we see in companies that make this work.

A lot of the conversation came back to a distinction I keep using with clients: single-player AI versus multiplayer AI. The solo experience is well-understood now. You open Claude or ChatGPT, you save time, you're faster. But the moment that workflow touches four people and two systems, most of it falls apart. Nobody agreed on the data. The output lands in the wrong place. Someone in customer support doesn't have access to what someone in sales was using. The AI starts to break down because the underlying data was never connected in the first place.

We talked through a 120-person architecture firm we worked with. Their whole org wanted AI; that was the brief. So we picked one workflow: finding RFPs. Three or four people were spending about 10 hours a week each going through municipal websites and university procurement pages, reading through bids, deciding if they were a fit. We built an agent that does all of that automatically: it pulls the RFPs, scans them against their criteria, and surfaces the relevant ones. About 40 combined hours a week freed up, in one focused pilot.
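The "scan against criteria" step at the heart of that agent is conceptually simple. Here's a toy sketch: the criteria, field names, and sample RFPs are all invented for illustration, and the real pipeline also scrapes the procurement sites and parses the bid documents before this step runs.

```python
# Illustrative only: a toy version of the "scan against criteria" step.
# The firm's actual criteria and the scraping/parsing stages are omitted.

CRITERIA = {
    "keywords": {"architecture", "renovation", "campus"},  # hypothetical
    "min_budget": 500_000,                                 # hypothetical
}

def is_fit(rfp):
    """Return True if an RFP matches the firm's criteria."""
    words = set(rfp["title"].lower().split())
    return bool(words & CRITERIA["keywords"]) and rfp["budget"] >= CRITERIA["min_budget"]

rfps = [
    {"title": "Campus library renovation", "budget": 2_000_000},
    {"title": "Road resurfacing project", "budget": 800_000},
    {"title": "Small architecture study", "budget": 40_000},
]

# Only the campus renovation survives: right keywords AND budget over threshold.
shortlist = [r for r in rfps if is_fit(r)]
```

The filtering logic is the easy part. What made the pilot work was running it on a schedule against live sources and putting the shortlist where those three or four people already looked.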

The part I liked most: those people didn't lose their jobs. They started spending more time on relationship building and writing better proposals for the bids that were found for them. Nobody grew up with the dream of moving spreadsheets around all day. When you take that away, people tend to get their time back for the work they actually wanted to be doing.

That's the kind of result worth chasing. Not the AI strategy deck. The thing that saves someone 10 hours a week and never breaks.

📖 Post of the week

Said out loud what most people are still tiptoeing around 👇

Know someone who'd enjoy this newsletter?
Forward it along, or send us a note if there's a topic you'd like us to unpack next.

New here? Consider subscribing.