I've been thinking about this for a while now. There's a growing push for AI to replace as much human work as possible. AI agents handling tasks end to end. Entire roles automated away. And the pitch is that this is a good thing — that it will free people up, that society will flourish when we all just do less.

I keep coming back to why that doesn't sit right with me. And I think it comes down to purpose.

What happens when purpose goes away #

I sometimes think about people who retire after long careers. You hear about it — someone who spent decades contributing, showing up every day, having a reason to get out of bed. And then they stop. And for some of them, things go downhill much faster than anyone expected. Not because retirement itself is bad, but because the thing that gave their days meaning and structure was suddenly gone.

Now think about that happening on a larger scale. A whole generation being told that their skills don't really matter anymore. That an AI can do what they do, faster and cheaper, and they should probably step aside.

I don't think most people actually want to be freed from work. They want their work to feel worthwhile. They want to get better at it, do more with it, feel like what they contribute actually matters. Take that away and I don't think you free people — I think you take something from them that's hard to get back.

The vibe coding question #

There's a related thing happening in tech right now — vibe coding. The idea is that you describe what you want and AI builds it for you. No deep expertise needed, no real understanding of what's happening underneath.

And honestly, it works up to a point. I use AI extensively when I build. I'm not a developer by trade, but AI has made it possible for me to turn 20+ years of digital experience into actual working tools — things that would have required a development team before. I'm genuinely grateful for that.

But here's what I keep noticing. There's a difference between using AI as a tool and treating expertise as something you can skip. Between a prototype that runs and a product that someone can rely on. The AI helps me move faster precisely because I understand what I'm building and why. The experience isn't replaced by the AI — the experience is what makes the AI useful.

My concern isn't vibe coding itself. It's the message that keeps surfacing underneath it: the idea that human skill and experience are things to route around rather than things worth investing in. And I see that same message across the broader AI conversation too.

Technology has always been about helping people do more #

If you think about how technology has actually worked throughout history, there's a pretty consistent pattern. The printing press didn't replace writers — it gave them reach. Spreadsheets didn't replace accountants — they made them faster. Power tools didn't replace carpenters — they let them take on bigger projects.

The technology made people more capable at the work they were already doing. It didn't try to make them unnecessary.

That's the approach I take with everything I build. Content Checker Pro doesn't write content strategy for you — it gives the consultant better data to work with so they can deliver a stronger audit to their client. Mindful Reader, which is coming very soon, doesn't decide what you should think about — it helps you be more intentional about what you read, so the time you spend actually counts for something.

You stay in the middle of the work. The tool helps you be better at what you're already doing.

More productive, not less needed #

The question I keep coming back to before building anything is pretty simple: does this help someone do their work better, or does it try to do their work for them?

If a freelance marketer uses one of my tools to deliver a more thorough audit to their client — that's a good outcome for everyone. They're still the expert. The tool helped them be more productive. Their client got a better result because a skilled professional used a good tool well.

But if an AI agent generates that audit and sends it straight to the client with no one in between, I'm not sure who really benefits in the long run. The client might get something adequate. But the professional who spent years building that expertise just became unnecessary. And I think we lose something real when that happens — something that's harder to see in a productivity chart but matters a lot to the people involved.

Where I land on this #

I want to be clear — I'm not against AI. I use it every day. I build with it. I think it's genuinely one of the most significant technologies we've seen, and I'm excited about what it makes possible.

But I think we're getting something wrong when we treat human work as a problem that needs solving. People want to contribute. They want to be good at something. They want to feel that what they do matters.

I'd rather build tools that help people do that. That's what I'm trying to do, and I think it's the better direction for AI to head in.

If you're interested in the tools I've built with this approach, Content Checker Pro is available now, and Mindful Reader is coming soon.