"You will be replaced and you will be happy," has been the consensus, techno-optimist narrative dominating SV culture in the last few years. On the surface, AI replacing humans seems to be inevitable:
Theorem 1 (Human Obsolescence). Let H be a human and A be an AI. Then H is replaceable.
Proof. Let C denote the set of all computer tasks.
(1) H ∈ C (H do computer job)
(2) A ∈ C (A also do computer)
(3) ∴ A ⇒ ¬H (AI do my job) □
Ever since this argument propagated across the Twittersphere (X-sphere??), I've felt a visceral revulsion to it because it feels too easy. I think that with the popularity of Claude Code and Codex, the future looks a little less foggy, and it seems like this is the shape of human work to come: you hand task instructions to agents and act as the manager.
The AGI-pillers would then say the natural next step is agents giving the instructions to other agents, removing the human entirely. The human is clearly the bottleneck in this loop, so it's simply more efficient without them. This is a totally rational argument, but across the thousands of podcast tours, I've never seen someone make the case for human defensibility in the labor market. In the Jensen-Dwarkesh interview, Jensen gets close by saying jobs aren't tasks (AI does the tasks, humans do the job), but he's never able to give the "why" because he's busy telling Dwarkesh that AI isn't the nuclear bomb.
AI replacement is such a heated topic. Everyone says the fear of replacement comes from opportunity cost: I spent years building a skill that will be replaced by tokens. But it's actually the fact that we're being told there will be no place for us to provide value at all. It will all be AI, and you will have no purpose.
The rich men south of Sonoma have every incentive to sound the alarm that "human replacement is imminent!!!" They have to secure the next check from Masa. But I think the narrative is finally starting to shift.
The Jensen Interview — a Tangent
The Jensen interview was a breath of fresh air because we've developed this collective psychosis that AI is equivalent to the nuclear bomb. The folks over at Anthropic ran this huge campaign about the Mythos model: "we can't release it yet because it's too good. Here, <cyber security companies>, figure out how to build defenses for this first."
So, you're telling me there's this 10-trillion-parameter model that Anthropic can't serve yet because they're compute-constrained, and the BIGGEST thing we're worried about is that it could chain several dependency vulnerabilities together for a hypothetical cyberattack? But at the same time, the devs maintaining those dependencies have the same capability to patch those vulnerabilities with Mythos? I understand that this is supposed to be scary, but it's nowhere near a nuke. It's not even a stick of dynamite. Nuke = global apocalypse, literally the end of the world. I'm sympathetic to Jensen for being baffled by this assumption of Dwarkesh's, but Dwarkesh's viewpoint is the consensus one in SV, and it was created by the narrative the AI companies have pushed for years.
Socio-Responsibility as a Moat
At this point, it's quite clear I mostly agree with Jensen: jobs aren't tasks, and humans won't be replaced by AIs. But I'd love to see someone make the deeper public case for why, one that goes beyond "well... jobs aren't tasks."
Specifically, I've yet to see someone talk about the socio-responsibility of humans as our "moat." If an AI screws up, all there is to do is say "you messed up here, fix it." The obvious follow-up is "okay, then what if the AIs are so good that they never mess up?" Well, something still has to point the AI in the correct direction and own the outcome. And it must own the outcome because someone has to bear the responsibility for being right or wrong.
Why can't an AI own the outcome? It could, but it's in humans' best interest for another human to own that outcome, because a human faces worse consequences for getting it wrong.
A human has an incentive to get it right because they have a job to do. If they don't do that job, they get fired. If they get fired, they can't provide for themselves or their family, and they absorb all the social repercussions of not "fitting in." I think this is why Jack Dorsey restructured the org at Block so that Individual Contributors (ICs) sit at the bottom: they own the outcomes, they are responsible whether things go right or wrong, and they are aligned to make the organization successful because they benefit monetarily.[1]
Notes
- 1. Jack Dorsey, "From Hierarchy to Intelligence".