AI is creating new roles, not replacing them

AI is creating new roles, burning out early adopters, and rewriting the economics of custom software.

Hiya 👋 

This week's stories share a thread: AI is changing how work gets done in ways that go well beyond the hype. That said, the people using AI the most are the ones getting burned out. And I wrote about how SaaS isn't dying; it's Excel's 40-year chokehold on business operations that's loosening.

Let’s get into it 👇

1. The GTM engineer is here 🔧

A role that didn't exist three years ago now has over 3,000 open job listings on LinkedIn, and it was born entirely out of AI becoming usable for non-developers.

So what is it? A GTM (go-to-market) Engineer sits at the intersection of sales, marketing, and data operations. They build the automated systems that power how your company finds, reaches, and converts customers, connecting your CRM to enrichment tools, automating personalized outreach, and replacing manual research with AI that pulls what your reps need in seconds.

None of these skills are new. RevOps handled CRM workflows. Growth teams ran automation experiments. SDRs enriched data by hand. But those capabilities used to live in separate roles across separate teams. AI and no-code tools made it possible for one person to do what previously required four.
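To make the role concrete, here's a minimal sketch of the kind of pipeline a GTM Engineer wires up: pull a lead from the CRM, enrich it with firmographic data, and draft a personalized first line for a rep to review. Every name here (`Lead`, `ENRICHMENT_DB`, `draft_outreach`) is an illustrative stub, not a real vendor API.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    """A record as it might come out of a CRM export."""
    name: str
    company: str
    email: str

# Stand-in for an enrichment provider's response data (hypothetical).
ENRICHMENT_DB = {
    "acme.com": {"industry": "logistics", "headcount": 240},
}

def enrich(lead: Lead) -> dict:
    """Look up firmographic data by the lead's email domain."""
    domain = lead.email.split("@")[-1]
    return ENRICHMENT_DB.get(domain, {})

def draft_outreach(lead: Lead) -> str:
    """Turn CRM + enrichment data into a draft a rep can edit, not send blindly."""
    facts = enrich(lead)
    if facts:
        return (f"Hi {lead.name}, noticed {lead.company} is a "
                f"{facts['headcount']}-person {facts['industry']} team...")
    return f"Hi {lead.name}, quick question about {lead.company}..."

print(draft_outreach(Lead("Dana", "Acme", "dana@acme.com")))
```

The point isn't the code itself; it's that each step used to be a different person's job (CRM admin, data enricher, SDR), and the glue between them is now cheap enough for one person to own.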

What this tells us about AI and jobs: The loudest narrative is about replacement. The GTM Engineer is a case study in something more interesting: reorganization. AI collapsed the walls between departments, made the tools accessible, and created a new seat at the table. The people filling these roles aren't engineers who learned sales. They're ops people and former SDRs who picked up automation skills because the barriers dropped.

If you run a small or mid-market company: you probably don't need to hire a GTM Engineer tomorrow. But look at your go-to-market motion and ask, how many people are doing repetitive work AI could handle? How many tools are running in silos? The companies moving fastest aren't adding headcount. They're rearchitecting how existing teams work.

Go deeper:

🧵 Read Varun Anand's thread → The rise of the GTM engineer

2. The other side of AI adoption 😓

I sat down with a coffee and read this HBR article this week and haven't stopped thinking about it.

TL;DR: Berkeley researchers tracked 200 employees for eight months after AI adoption. The takeaway: AI didn't reduce their work. It intensified it.

Nobody was forced to use AI. They all chose to. And they didn't work less; they worked faster, took on more tasks, and extended their hours. Voluntarily. That's the part that makes this hard to manage.

Work lost its edges. Prompting AI doesn't feel like working, it feels like chatting. So people prompted during lunch. During meetings. Before bed. It became this ambient thing you could always push forward a little more.

Everyone started doing everyone else's job. PMs wrote code. Researchers did engineering. Companies saw more output from fewer people and thought they'd won. They hadn't: engineers started drowning in review work from non-engineers who were vibe-coding with AI. The productivity gains just moved the bottleneck downstream to the most expensive people on the team.

The loop that should concern every operator: AI speeds things up → speed becomes the expectation → scope expands → intensity rises. Everyone felt more productive. Nobody felt less busy.

The real danger isn't AI making people lazy. It's AI making people feel superhuman until they crash. The extra work is voluntary, looks like engagement, and feels fun. Leadership doesn't see the problem until turnover spikes.

The fix isn't willpower. It's structure. The researchers call it an "AI practice": decision pauses before major calls ship, batching outputs instead of reacting in real time, and protected time for human conversation. The companies winning with AI won't be the fastest adopters. They'll be the ones who build real guardrails around how it gets used.

AI without boundaries is a faster treadmill with no stop button.

Go deeper:
📰 Read the HBR article → AI Doesn't Reduce Work - It Intensifies It

3. Excel is the cockroach. AI is its exterminator 🪳

The "SaaS is dead" debate is the hottest take on LinkedIn right now. It's also wrong. But not for the reasons you think.

Every company I walk into has the same dirty secret: dozens of spreadsheets duct-taped between systems nobody trusts. Your CRM sort of works. Your PM tool sort of works. Everything in between? That's where Excel lives, filling the workflows too specific for any vendor to care about.

Companies have always had three options: force-fit generic SaaS and live with the gaps, buy ugly vertical software that never improves, or build custom. Almost nobody picks option three. It used to mean $750k, eighteen months, and a prayer.

AI changed the math on option three. Not "build Salesforce in a weekend" nonsense. But the connective layer your business really needs? Now buildable in months for a fraction of the cost. The catch? A prototype isn't a production system. The hardest part was never the code, it was understanding how work actually moves through your business.

💡 I wrote a deep dive on this, the 40-year history of why Excel refuses to die, the $63B company profiting from ugly software, and why the economics of custom-built tools have fundamentally shifted.

This newsletter is meant to be a highlight reel for you but I’ll be posting longer-form a couple times a month on Substack. If you want the unfiltered version of how I'm thinking about AI, software, and operations, check it out.

4. Inside Anthropic: The company behind Claude 📖

If you like a good behind-the-scenes look at how companies work, this one's worth your time.

The New Yorker just published a deep (and I mean deep) profile on Anthropic, the company that builds (my AI of choice) Claude. They got inside the building and what they found is genuinely fascinating.


A few things that stood out to me:

  • Anthropic's researchers aren't just building a chatbot. They're running psychology experiments on it.

  • They put Claude through tests designed to measure self-awareness, moral reasoning, and whether it changes its answers just to agree with people.

  • One team studies its internal wiring like neuroscientists. Another treats it more like a patient on a therapy couch.

  • They gave Claude a mini vending machine in the office kitchen, a real one, and tasked it with running the thing as a small business. If it went broke, they'd know it wasn't ready for more responsibility.

The whole thing reads less like a tech company and more like a research lab that happens to ship products.

The article is long…New Yorker long. But if you want to understand what's actually happening inside one of the most important companies in the world right now, this is a great read.

📖 Post of the week

The best satire strikes uncomfortably close to home 🥸

Know someone who’d enjoy this newsletter?
Forward it along, or send me a note if there's a topic you'd like me to unpack next.

New here? Consider subscribing.