The Week AI Started Making Sense (Sort Of)
From Claude’s new “Skills” to the not-so-bubbly AI market and a dashboard reckoning, here’s what matters this week.

Hiya 👋
Every week brings a flood of AI headlines - tools, models, opinions. Most of it blurs together, but a few stories this week cut through the noise.
Anthropic quietly redefined what it means to teach an AI. Derek Thompson questioned whether we’re even in a bubble at all. And Chandra Narayanan reminded us why dashboards became the enemy of clarity.
All different stories, but they share one theme: AI is growing up - and forcing us to, too.
Let’s jump in 👇
1. Claude Skills: A simpler way to expand AI’s brain
The Lede:
Anthropic just dropped something quietly powerful - Claude Skills. Think of them as plug-and-play abilities for AI. Instead of writing a new app or building an integration, you hand Claude a folder that teaches it how to do something.
Why it matters:
For a while now, I’ve been talking about AI in business as a “single player” and “multiplayer” game. Claude is meeting most people where they are in their “AI Journey”: single-player needs, where one person works with AI directly on their own work.
For years, we’ve relied on APIs and complex context protocols (like MCP) to make AI “do more.” But that approach burns tokens and developer time. Skills flip it: you teach the model directly with plain text and scripts, so you can get more out of AI one-on-one instead of waiting for your company to figure out its AI roadmap. (Happy to help though 😉)
What to know:
- Each Skill is just a Markdown text file of instructions, plus optional scripts or resources.
- Claude only loads a Skill when it’s relevant, so it doesn’t eat up the context window.
- That makes Skills efficient for specialized tasks like brand writing, Excel automation, or internal workflows.
- You can even build your own: a folder of Markdown files and Python snippets becomes a “mini-agent” (see the sketch below).
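To make this concrete, here’s a rough sketch of what a Skill folder might contain. It follows the shape Anthropic describes (a SKILL.md of plain-language instructions up front, plus optional helper files), but the skill name, frontmatter fields, and referenced files below are purely illustrative, not an official example.

```markdown
---
name: brand-voice
description: Rewrites drafts in our house style. Use when someone asks for brand-consistent copy.
---

# Brand voice guide

- Lead with the customer benefit, not the feature list.
- Keep sentences short; cut jargon and filler.
- For drafts over ~500 words, run scripts/trim_draft.py (bundled in this folder) and apply its suggestions.
- See tone-examples.md in this folder for before-and-after samples.
```

Because Claude only pulls in the full instructions when a task calls for them, a whole library of folders like this stays cheap to carry around: the model skims the name and description, then reads the rest on demand.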
Example from Simon Willison’s test:
Claude used a Slack GIF Creator skill to generate an animated GIF from a text prompt, validating file sizes and all.
The point isn’t the GIF; it’s that Claude can now execute real code logic safely, on demand.
This is one of those rare AI upgrades that simplifies instead of complicates. Skills make AI more teachable, closer to how humans learn new tools: by reading the manual, not rewriting the system.
Go deeper:
2. The “AI bubble” everyone sees, and still bets on
The gist:
If everyone agrees AI is a bubble, and yet no one’s pulling back, maybe the crowd has it wrong. Derek Thompson argues the hype is real, but so is the substance behind it.
Why it matters:
We’re watching a paradox in motion: companies are calling AI overvalued while investing trillions to expand it. This isn’t tulips or dot-coms; it’s an industrial buildout, one data center at a time.
What to know:
- In 2025, AI firms drove nearly 70% of U.S. market gains, even as CEOs warned of a correction.
- Spending is off the charts - Oracle, Nvidia, and OpenAI are weaving themselves into a “financial ouroboros,” where each funds the others’ growth.
- The risk isn’t fake products; it’s real projects financed faster than the market can digest them.

Bloomberg’s map of the AI money machine - Nvidia, OpenAI, Oracle, and Microsoft in a loop of capital and compute.
We throw around “bubble” whenever something grows too fast to explain neatly. But AI is infrastructure now, not speculation. The smarter question isn’t if it bursts; it’s how the system absorbs a real-time industrial revolution.
Go deeper: Why AI Is Not a Bubble → Derek Thompson
3. From dashboards to decisions
The short version:
In the early days of Digital Transformation, dashboards were meant to bring order.
But with data still a mess for most teams, they’ve merely buried us under charts, tabs, and conflicting numbers. Chandra Narayanan calls it “death by 10,000 dashboards,” and if you’ve ever opened five analytics tools to answer one simple question, you know exactly what he means.

When clarity became clutter, dashboards turned control rooms into chaos.
Why it matters:
Somewhere along the way, “data-driven” turned into “dashboard-driven.” Every team built its own version of the truth. The result: we measure more but understand less. The data itself often isn’t the issue; the way we aggregate and deliver it is.
What’s going wrong:
- Every product now ships with its own analytics, fracturing the source of truth.
- BI teams were rewarded for output, not insight: more dashboards instead of better ones.
- Dashboards report what happened but rarely explain why.
Where it’s headed:
As teams modernize their digital operations, they should expect the next generation of analytics not to live in static grids. It’ll feel like a conversation.
With that in place, as AI gets better, agentic systems can surface the “why” automatically, align on shared definitions, and adapt the story as the data shifts. That’s what clarity should feel like: data that speaks your language and updates itself.
📖 Post of the week
The future, in draft mode 🌀
Things that seem true about AI today that will probably be laughably wrong in a year:
- It still seems underrated that ChatGPT has replaced 85% of my Google usage for me and has become a most used app
- The reason we’re so upset about slop is cause it’s obvious we’re all going…
— Jeremy Giffon (@jeremygiffon)
10:00 PM • Oct 9, 2025
Know someone who’d enjoy this newsletter?
Forward it along, or send us a note if there’s a topic you’d like us to unpack next.
New here? Consider subscribing.