I see a lot of grandiose statements floating around:
You will not be replaced by AI, you’ll be replaced by someone using AI.
AI won’t replace your creativity.
AI can’t do THIS or THAT.
AI will never be able to do THIS or THAT.
And many more.
My take: maybe. Who knows! LLMs are getting better at an alarming rate and they behave in unexpected ways. They make the simplest mistakes but at the same time, they have solved some of the hardest math problems. This is often described as the jagged frontier. The future is extremely hard to predict and every day it’s getting harder.
But here’s what AI can’t do now and probably won’t for a while: have true agency.
ChatGPT and Claude do nothing until someone tells them what to do. Same for all LLMs. Even the ones that seem to be always on, like OpenClaw, are just LLMs triggered on a timer. Without the timer, nothing happens.
In concrete terms, no LLM is going to wake up one day and decide to start a company. Some people have experimented with putting an LLM in the CEO seat, but those experiments would not have happened without a human applying their agency first: a human started the company and prompted an LLM to make the decisions. The agency there belonged to the human, not the LLM.
So the question is: what do you do when nobody tells you what to do?
I think people tend to divide into two categories here: those that wait and those that find something to do. Those that wait may be safe for a while, as long as their specific skills don’t transfer well to an LLM. But given enough time, AI will likely acquire all the skills, and then their jobs are at risk.
Those that find something to do are irreplaceable.
One expression of that is deciding to start a company. But it shows up everywhere. Deciding to paint a picture or write a poem. Deciding what they’ll be about, what their aesthetics will be. Those decisions will continue to be irreplaceable.
Well… they’ll be the last thing to be replaced, because you can only replace them with an entity that is always active (LLMs wake up on a prompt) and that wants things. The moment we have AI that is always active and wants things, we have bigger problems than our jobs: we’ve created a new species that will eventually be better than us at everything. Hopefully it’ll be friendly.
The purpose of this post is not to dream or be scared about that future. It’s to convey the fact that deciding to do something with no inputs is the final frontier.
If you want your job to be safe, find a way for it to not require inputs. If you need a ticket to write code, you are at risk of being replaced by an LLM that can write the code from the ticket. If instead you write the ticket, you are much harder to replace.
This is why the Product Manager (PM) role is flourishing in the era of AI. Of all the roles at a company, it is the most open one, the one with no or very few inputs (finding and deciding which inputs to use is part of the job). All building jobs will look a lot more like the PM’s in the future (or will have been replaced by an LLM).
Having that agency to do something when nobody asks sets you apart.