They are excited. Not quietly relieved, not pragmatically resigned — excited. The Klarna CEO said it out loud: “AI can already do all of the jobs that we, as humans, do.” The Shopify CEO sent a company-wide memo saying employees must now prove AI can’t do their job before anyone is allowed to hire a human being. These aren’t reluctant admissions. There’s no grief in these statements. There’s glee.
And while you’re out there learning prompt engineering and adding “AI-proficient” to your LinkedIn, they’re building the thing that replaces you entirely. The pacifier is working. That’s the point.
The AI lie
“People who use AI will replace people who don’t.” You’ve heard this a hundred times. It shows up in every newsletter, every think piece, every HR town hall. It sounds reasonable. It sounds like advice. It isn’t.
It’s nonsense. A delay tactic.
The real message – the one nobody is saying in those town halls – is that the window for “learn to adapt” is much shorter than they’re letting on. They need you to believe you have time, because a workforce that believes it has time doesn’t panic, doesn’t organize, doesn’t push back. It just quietly takes the course on ChatGPT and goes back to work.
Meanwhile Klarna went from 8,000 employees down toward 4,000, and they’re still going. The “adaptation” narrative has a shelf life. It expires right around the moment the model is good enough that even the adapted version of you is redundant.
Here’s the thing nobody wants to say: you are not being prepared for the future, you are being managed through a transition you have no say in. There’s a difference. A big one.
The YOLO year
I started calling 2026 the YOLO year as a joke. Lately I’ve been inflicting it on friends and colleagues at every opportunity. Somehow it stopped being funny.
Here’s why 2026 is different – and it’s not what most tech people will tell you. I’ll be honest: I’m not even sure LLMs can go much further than where they are now. Maybe they hit a wall. Maybe the scaling laws run out. Fine. It doesn’t matter, because even with what we already have, the disruption is going to be massive. We don’t need AGI for this. We don’t need the next breakthrough. What matters is agents. We are already past “chatbot” and well into “multiple AI systems coordinating work autonomously”: agents that browse, write, execute code, send emails, make decisions. Agents talking to other agents. The current generation of models, deployed at scale through agents, is already more than enough.
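If “agent” sounds abstract, here is roughly what the word means in practice: a loop in which a model picks an action, a harness executes it, and the result is fed back to the model until it decides it’s done. This is a minimal sketch, not any vendor’s actual API – the model call and the tool functions are hypothetical stand-ins:

```python
# Minimal agent loop, illustrative only. The model repeatedly chooses an
# action, the harness executes it, and the observation is appended to the
# transcript the model sees next turn. All names here are hypothetical.

def browse(url: str) -> str:
    return f"<contents of {url}>"          # stand-in for a real web-fetch tool

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"           # stand-in for a real mail tool

def run_code(src: str) -> str:
    return "<execution output>"            # stand-in for a sandboxed executor

TOOLS = {"browse": browse, "send_email": send_email, "run_code": run_code}

def call_model(transcript: list[str]) -> dict:
    # Stand-in for the LLM call. A real agent would send the transcript to a
    # model and parse its reply into an action; this stub just finishes.
    return {"tool": "done", "args": ["(a real model would decide here)"]}

def run_agent(goal: str, max_steps: int = 20) -> str:
    transcript = [f"GOAL: {goal}"]
    for _ in range(max_steps):             # hard cap so it can't loop forever
        action = call_model(transcript)
        if action["tool"] == "done":
            return action["args"][0]
        result = TOOLS[action["tool"]](*action["args"])
        transcript.append(f"{action['tool']} -> {result}")
    return "gave up"

print(run_agent("summarize my inbox and reply to anything urgent"))
```

And “agents talking to other agents” is just this same loop with run_agent itself registered as one of the tools. Nothing exotic – which is exactly why it scales.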
And here’s the thing that keeps me up at night: most people haven’t even started using a chatbot yet.
Think about that gap. The average person – your neighbour, your cousin, people at your company who aren’t in tech – still thinks AI is the autocomplete on their phone or some sci-fi concept. They haven’t touched ChatGPT. They don’t know what a prompt is. Meanwhile the conversation among people actually building this stuff has already moved on to agentic workflows and multi-agent coordination. The delta between those two realities isn’t a few months of catching up. It’s a different world entirely.
That’s the cognitive dissonance I see everywhere in 2026. The people who know are talking about one thing. Everyone else is living in a completely different reality, reassured by the occasional “AI won’t replace you!” headline, and just carrying on.
YOLO, then. Enjoy things as they are, because the gap between these two realities is about to close. Violently, if history is any guide. And the people managing that closure? They are excited about it.
Enjoy life as it is right now. That’s not nihilism. That’s pattern recognition.
The people telling you it’ll be fine
Let’s talk about who’s reassuring you.
Elon Musk walked into Twitter, fired roughly 80% of the workforce – about 6,000 people – and the rest of Silicon Valley watched and took notes. Not with horror. With interest. Within a year, mass layoffs had become normalized across the industry. He ran the experiment. They copied the results.
Last week Jack Dorsey cut 40% of Block’s workforce — 4,000 people — citing AI. Block’s stock jumped over 20% on the announcement. Forbes called it “firing the starting gun on AI layoffs”. He wasn’t even apologetic about it. That was the fourth round of cuts since 2023. Fourth. And each time, the framing is the same: AI made us do it. Not “we’re struggling” — profits were rising. Not “the business changed” — the business is fine. Just: AI exists now, so you don’t have to. Have a nice day.
Then there’s Palantir. Alex Karp – their CEO – has been remarkably honest about what his company actually does. “Palantir is here to disrupt… on occasion kill people.” That’s a direct quote. Said publicly. Without apparent concern that it might be a weird thing to say. Palantir builds targeting and surveillance infrastructure for governments and militaries. Their stock is doing great, by the way. Especially since Trump came back. A company that helps governments track and target individuals is absolutely thriving in the current environment. And they will scale. Because that’s what they do.
Marc Andreessen, venture capitalist, wrote an entire Techno-Optimist Manifesto in 2023. In it: growth is always good, technology always wins, and anyone who disagrees is a stagnationist, an enemy of progress. No nuance about displacement, no acknowledgment that “progress” has winners and losers. Just pure acceleration, dressed up as philosophy.
These are the people setting the agenda. These are the people whose op-eds get published, whose Senate testimony gets treated seriously, whose vision of the future is considered the default.
Now here’s the part that should make you uncomfortable: some of these same people – the ones lecturing us about the future of work – have documented connections to Jeffrey Epstein. Bill Gates met with Epstein repeatedly until 2014, years after Epstein had already been convicted. The Epstein files – partially released under the Epstein Files Transparency Act – show the sheer scope of the network he cultivated across tech, finance, and politics. These weren’t naive associations. These were deliberate. And nobody said anything.
Not a single person in that network blew the whistle. Not one.
And now we’re supposed to trust their judgment on what AI will do for humanity?
Why are people still listening?
I just don’t get this.
We’ve seen the files. We’ve read the manifestos. We know what these people actually think and do. And yet – people still act like they have some special insight into the future. Like they earned the right to tell us how things will go. Like the network, the timing, the connections, the Epstein parties – none of that happened.
The self-made myth is unbelievably durable. Even people who got screwed directly by these systems still kinda look up to the people running them. It’s insane!
Look – they got where they are mostly through timing, access, and opportunity. Talent? Sure, a bit of that too. But mostly the right idea at the right moment with the right people around them. The “I built this from nothing” story is just that – a story. A very useful one if you want people to stop questioning your legitimacy.
Why are CEOs salivating at the prospect of replacing their employees? Why did they all know about Epstein and not say a word? Not one of them.
They don’t see you the way you see them. You see successful people who are maybe a bit ruthless. They see a cost center that’s finally becoming optional.
Unless you’re in the top 0.001% – and you’re not, and neither am I – that’s what you are to them. A resource. And AI is the thing that replaces resources.
So no, I don’t think it’ll be fine. I think we’re sleepwalking through the most obvious setup in recent history, reassured by the exact people who benefit most from our compliance. And we’re listening to them. We look up to them. We quote them at conferences.
Why are you still taking advice from people who can’t wait for you to not exist?
Stay optimistic!