Guardrails, Not Glamour.
The biggest wins in 2026 won’t come from “more AI.” They’ll come from better rails: workflows, systems, and coordination.

Hiya 👋
We’re back at it with Connect, and the vibe (coding) I’m seeing everywhere is the same: AI is getting smarter fast, but in most teams “adoption” is happening as quiet solo hacks.
This week is a quick tour of what leaders should be thinking about to move the needle.
Let’s dive in 👇
1. Agents + guardrails
Agents are moving from hype to reality as developers and technical folks get clearer on what they actually do: they don’t do everything, they augment and take over parts of workflows.
So if AI agents do more of the work, your “boring” systems (CRM/ERP/ticketing) don’t disappear; they become the rules and rails that keep an AI-powered business from going off-road.
What Box CEO Aaron Levie is getting at (in plain English):
Enterprise software = codified process. It’s the stuff you don’t want changing randomly (permissions, approvals, audit trails, data entry, workflows).
Deterministic vs. non-deterministic:
Deterministic systems: “Do it the same way every time” (systems of record).
Agents: great for “messy” work (drafting, summarizing, researching, filling fields, triaging). (A minimal sketch of how the two fit together follows this list.)
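To make that split concrete, here’s a minimal sketch (every name in it is hypothetical, not from Levie’s thread): the agent does the messy drafting, and a deterministic guardrail decides what’s allowed to land in the system of record.

```python
# Hypothetical sketch: the agent drafts, deterministic rails decide what lands.
from dataclasses import dataclass

REQUIRED_FIELDS = {"account_id", "amount", "approver"}
APPROVAL_LIMIT = 10_000  # the same rule, every time; that is the point

@dataclass
class TicketDraft:
    fields: dict

def agent_draft(raw_email: str) -> TicketDraft:
    """Stand-in for the non-deterministic part (an LLM would fill this in)."""
    return TicketDraft(fields={
        "account_id": "ACME-42",
        "amount": 8_500,
        "approver": "j.doe",
        "summary": raw_email[:80],
    })

def guardrail(draft: TicketDraft) -> bool:
    """The deterministic rails: required fields, limits, approvals."""
    missing = REQUIRED_FIELDS - draft.fields.keys()
    if missing:
        print(f"Rejected: missing {sorted(missing)}")
        return False
    if draft.fields["amount"] > APPROVAL_LIMIT:
        print("Rejected: amount above limit, route to a human approver")
        return False
    return True

draft = agent_draft("Customer requests an $8,500 refund on order #1138...")
if guardrail(draft):
    print("Writing to system of record:", draft.fields)  # only valid data lands
```

The agent is free to be wrong in creative ways; the rails never change shape. That’s the division of labor Levie is pointing at.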
The counterpoint worth taking seriously
Avious’ warning: if you treat AI as your strategy engine, you risk becoming generic, because models can struggle with novelty outside their training distribution.
The chart in that thread is the key: even with good “examples,” models can fail when the pattern is a new combination (e.g., mixing linear + cosine). That’s a reminder that LLMs can look confident while missing the underlying causal structure.

LLMs handle clean patterns but struggle when reality is a remix.
My take
For 40+ years, the “gap filler” in every company has been the spreadsheet.
Not because Excel is magical, but because it lives in the space between:
off-the-shelf SaaS that mostly fits, and
custom software that used to be too expensive and too slow.
That’s why so many teams end up with DataLake.xls energy: dozens of exports, copy/paste pipelines, “Frank’s formula,” and weekly meetings dedicated to reconciling whose numbers are real.
What’s different now is that AI makes it realistic to fill those gaps with small, purpose-built agents (think: small autonomous utility software) instead of Excel duct tape.
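To make “small autonomous utility software” tangible, here’s a toy sketch; the file names, columns, and tolerance are all invented for illustration. It replaces one weekly copy/paste reconciliation: pull two system exports, flag where the numbers disagree.

```python
# Toy gap-filler: reconcile two exports instead of eyeballing Excel tabs.
import csv

def totals_by_customer(path: str) -> dict[str, float]:
    """Sum the 'amount' column per 'customer' in a CSV export."""
    totals: dict[str, float] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["customer"]] = totals.get(row["customer"], 0.0) + float(row["amount"])
    return totals

crm = totals_by_customer("crm_export.csv")          # hypothetical export files
billing = totals_by_customer("billing_export.csv")

for customer in sorted(crm.keys() | billing.keys()):
    a, b = crm.get(customer, 0.0), billing.get(customer, 0.0)
    if abs(a - b) > 0.01:  # flag anything beyond rounding noise
        print(f"{customer}: CRM says {a:,.2f}, billing says {b:,.2f}")
```

Schedule it, or hand it to an agent as a tool, and the “whose numbers are real” meeting gets a lot shorter.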
If you’re staring at a nest of spreadsheets and wondering what to buy vs. build vs. automate, hit reply. We can help you sort it out.
Go deeper:
🧵 Read Levie’s thread → The future of enterprise software
🧵 Read Avious’ critique → Why agents struggle with novelty (OOD)
2. The new interview skill: thinking in plain English
Hiring managers aren’t just asking “do you know AI?” They’re testing whether candidates can use an AI tool to produce a sound output, fast. McKinsey is doing this by having candidates show how they’d prompt its internal assistant (Lilli), check its work, and then apply their own judgment.
What to notice
This is less about “AI tricks” and more about a shift in what companies value:
Problem framing beats problem solving. If you can’t define the question, the model will happily answer the wrong one, confidently. McKinsey’s CEO, Bob Sternfels, is explicit that AI doesn’t supply “truth” or “judgment.” I agree.
Liberal arts ≠ fluffy. Interestingly, McKinsey is explicitly looking to hire more liberal arts grads, because the edge is creativity (seeing non-obvious approaches) and judgment (deciding what’s relevant, what’s risky, what’s defensible).
Scale changes the bar. When an “agent fleet” expands from ~10 to ~1,000, the bottleneck becomes: who can steer them well?
A useful mental model: the AI skill ladder
Level 1 - Dabbler: asks questions, copy/pastes answers.
Level 2 - Operator: gives constraints, asks for formats, checks work, iterates.
Level 3 - Systems thinker: turns a messy goal into a repeatable workflow (inputs → steps → QA → output), and can explain why the result is defensible.
McKinsey’s AI interview is basically screening for Levels 2-3: can you use AI like a productive junior teammate, and can you communicate your reasoning clearly?
What this means for résumés + hiring
If you’re a candidate, “used AI” is weak. Strong looks like:
Outcome: what changed (time saved, quality improved, risk reduced)
Method: what you actually did (prompt template, evaluation checklist, human review step)
Judgment: how you handled errors/edge cases (“here’s what I don’t trust it for”)
The practical takeaway for anyone hiring for AI skills
If “English is the most powerful programming language,” then the best “programmers” in the next few years will be the people who can write:
A clean brief (goal, audience, constraints)
A good prompt (inputs, desired output format, edge cases)
A tight evaluation (what counts as correct, complete, safe)
A final synthesis (a decision or recommendation you’d put your name on)
A quick self-test you can run this week
Take any messy business question (e.g., “Why did our customer churn spike?” or “Which pipeline deals are worth pursuing?”) and force it into this structure before you ask AI:
Decision needed: ___
Definition of “good”: ___ (metrics / timeframe / acceptable assumptions)
Data you trust: ___
Risks / gotchas: ___
Output format: ___ (1 paragraph + 3 bullets, etc.)
That’s the exact “judgment layer” Sternfels is pointing at, and it’s what hiring is starting to probe.
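If you want to make that scaffold hard to skip, here’s one hypothetical way to codify it (nothing here comes from McKinsey; it’s just the five blanks above, enforced in a few lines):

```python
# Hypothetical scaffold: refuse to query a model until the brief is complete.
def build_brief(decision: str, definition_of_good: str, trusted_data: str,
                risks: str, output_format: str) -> str:
    fields = {
        "Decision needed": decision,
        "Definition of good": definition_of_good,
        "Data you trust": trusted_data,
        "Risks / gotchas": risks,
        "Output format": output_format,
    }
    for name, value in fields.items():
        if not value.strip():
            raise ValueError(f"Fill in '{name}' before asking the AI anything.")
    return "\n".join(f"{name}: {value}" for name, value in fields.items())

print(build_brief(
    decision="Pick which churn segment to investigate first",
    definition_of_good="Top 3 segments by revenue at risk, last 2 quarters",
    trusted_data="Billing exports only (support tickets are incomplete)",
    risks="Seasonality; one enterprise logo skews the average",
    output_format="1 paragraph + 3 bullets",
))
```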
Go deeper:
📰 Read the story → McKinsey tests AI tools + shifts hiring toward liberal arts (Fortune)
📰 Further reading → McKinsey asks graduates to use AI chatbot in recruitment (The Guardian)
3. AI’s best ROI is boring
Reid Hoffman’s take: enterprise AI strategy is backwards. The biggest wins aren’t a Chief AI Officer title or flashy pilots; they’re in the unglamorous “coordination layer” where companies quietly bleed hours.
What he means by “coordination layer”:
the flood of meetings, notes, docs, action items, and status updates that keep work moving (and eat time).
turning “what happened in that meeting” into structured, retrievable org memory so progress doesn’t depend on whoever was in the room.
The compounding trick:
AI gains multiply when they’re shareable at the workflow level, and the people closest to the work are the ones who spot what should be automated, compressed, or redesigned.
A practical lens to steal:
If you’re wondering where to start with AI, look for text-heavy work that should become database-ready. Hoffman calls out that language models are unusually strong at converting messy reality into structured inputs (e.g., pulling action items from complaints, turning transcripts into CRM fields), basically bridging the human world to the systems world.
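As an illustration only (Hoffman’s thread doesn’t prescribe an implementation; the model name, field schema, and SDK usage below are my assumptions), the pattern looks like: hand the model messy text plus a target schema, get JSON back, write it into the system of record.

```python
# Hypothetical sketch: messy call transcript in, CRM-shaped JSON out.
import json
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM API works

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = "Customer is unhappy about onboarding and wants a demo next week."

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever you actually run
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": (
            "Extract CRM fields from the call transcript. Return JSON with keys: "
            "sentiment, action_items (list of strings), follow_up_date (or null)."
        )},
        {"role": "user", "content": transcript},
    ],
)

fields = json.loads(resp.choices[0].message.content)
print(fields["action_items"])  # ready to write into CRM fields
```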
Go deeper:
🧵 Read Hoffman’s thread → Enterprise AI strategy is backwards
4. From solo hacks to team leverage
Sarah Guo makes a point that matches what a lot of teams are feeling: people aren’t waiting for permission to use AI; they’re already using it quietly and individually… and that’s exactly why many companies struggle to “see ROI” on dashboards.
The snag isn’t the tools. It’s the org.
Inside a company, using AI privately is low-risk. Using it visibly, to redesign a workflow, can feel like volunteering to automate your own job before incentives and credit are clear. So adoption happens… just not in a coordinated, compounding way.
What to do with this (even if you’re not technical)
Forget “everyone learn prompt engineering.” Try these coordination moves instead:
Create a safe lane: Pick 1-2 workflows where experimenting won’t backfire (support replies, meeting notes → action items, sales call summaries). Make it explicit that trying AI won’t be penalized.
Standardize the handoff: Don’t aim for perfect outputs, aim for repeatable ones. Define: inputs → expected format → review step → where it gets stored. (This is how personal hacks become team leverage; a minimal sketch follows this list.)
Reward visibility: If wins stay private, they don’t scale. Share the prompt/template + the before/after result, and give credit to the person who improved the workflow.
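One hypothetical way to write that handoff down so it outlives whoever invented it (the field names and example are mine, not Guo’s):

```python
# Hypothetical handoff spec: inputs -> expected format -> review -> storage.
from dataclasses import dataclass

@dataclass
class HandoffSpec:
    workflow: str
    inputs: list[str]
    expected_format: str
    review_step: str
    stored_where: str
    prompt_template: str  # the shared asset that makes wins repeatable

meeting_notes = HandoffSpec(
    workflow="Meeting notes -> action items",
    inputs=["raw transcript or notes doc"],
    expected_format="Bulleted action items: owner, task, due date",
    review_step="Meeting owner approves before anything is posted",
    stored_where="Team wiki, shared 'AI playbook' page",
    prompt_template="Extract action items as 'owner - task - due date' bullets.",
)
print(meeting_notes)
```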
If you’ve been looking at AI like a training problem, Guo’s framing is a useful reset: it’s mostly an alignment problem.
Go deeper:
🧵 Read the thread → AI Adoption Is a Coordination Problem
Know someone who’d enjoy this newsletter?
Forward it along, or send us a note if there’s a topic you’d like us to unpack next.
New here? Consider subscribing.