A CTO wrote a LinkedIn post that’s making the rounds. It resonates. People are scared, and the words give shape to that fear:
“AI coding agents are a mass betrayal of the software development industry by tech founders who started out as developers themselves.”
The post continues. Tech companies supercharged the middle class after the dotcom bust. Now they’re decimating it. Replacing developers with deskilled labor. Creating an underclass of data annotation workers. It’s not innovation—it’s consolidation of wealth and power. The revolution you’re not invited to.
The fear is real. The rage is genuine. And thousands of developers are nodding along.
I understand it. Change hurts. Watching skills you worked years to develop become less relevant: that’s not abstract. It’s personal. And there’s something particularly gutting about feeling like the people who built tech alongside you are now building tools to replace you.
But here’s what strikes me about this post: This is a CTO writing from a place of complete powerlessness.
Someone in a position of significant leadership—someone who should be empowering their team, exploring new tools, adapting strategy—is instead spreading defeatism. This person sounds less like a technology officer and more like someone who’s internalized the belief that things happen to you, not with you.
If a CTO feels this powerless in the face of new technology, maybe the problem isn’t the technology. Maybe it’s the toxic leadership culture they’ve absorbed. Maybe it’s the way they measure value. Maybe it’s the excuses they’ve been accepting instead of the agency they’ve been avoiding.
AI agents aren’t a betrayal.
They’re a test.
A test of whether you’ll adapt or make excuses. Whether you’ll build or blame. Whether you’ll take agency or wait for someone to save you.
And right now, a lot of developers (even ones with “Chief Technology Officer” in their title) are choosing excuses.
Let me tell you why that’s wrong. And why it’s dangerous.
You Already Accepted This Abstraction
Do you write your own sorting algorithms?
Of course not. You use the ones that come with your language, or you pull in a package. You don’t write your own JSON parsers. You don’t implement HTTP clients from scratch. You use NuGet, npm, pip: whatever your ecosystem provides.
Nobody calls you “deskilled” for that. Nobody says you’re being replaced by “package manager babysitters.”
We accepted this abstraction because it makes sense. Why waste time reimplementing solved problems when you can solve new ones?
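To make that concrete, here is what the abstraction we already accept looks like in practice. This is an illustrative Python sketch (the values are made up): nobody writes this sort or this parser by hand anymore, and nobody calls it deskilled.

```python
import json

# Sorting: one call to the language's built-in sort (Timsort)
# instead of a hand-rolled quicksort you'd have to test and maintain.
numbers = [5, 3, 8, 1]
assert sorted(numbers) == [1, 3, 5, 8]

# JSON parsing: the standard library, not a custom parser.
payload = json.loads('{"user": "ada", "active": true}')
assert payload["user"] == "ada" and payload["active"] is True
```

The hard parts live in a library someone else optimized. Your job was never to reimplement them; it was to know which call to make and why.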
But here’s what we learned the hard way: automation without human oversight fails spectacularly.
The 2010 Flash Crash demonstrated this perfectly. Algorithmic trading without human judgment temporarily wiped out nearly $1 trillion in market value in minutes. High-frequency trading algorithms created a “hot potato” effect—passing the same positions back and forth until the system collapsed.[1] When volatility increased, automated systems simply withdrew, removing liquidity exactly when it was needed most.
That’s not an argument against automation. It’s an argument for the right kind of automation: tools that amplify human judgment, not replace it.
I’m a developer. I’m not content rewriting subroutines that highly intelligent people all over the world have already optimized. That’s not where value lives. I love applying proven methods to new problems. Exploring new ideas. Testing new concepts.
Obsessing over the syntax of a specific, highly-optimized implementation? That’s neurotic. We solved that problem with package managers years ago.
So here’s the question that should make you uncomfortable:
If we accepted package managers abstracting away algorithms, why resist AI agents abstracting away boilerplate?
What’s the meaningful difference between using a library function instead of writing your own implementation, and using an AI agent to generate CRUD endpoints instead of writing them yourself?
Both are abstractions. Both let you work at a higher level. Both free you to focus on the novel parts of your problem.
The difference isn’t the abstraction level.
The difference is that you’ve confused lines of code with value.
And that confusion is about to get very expensive.
The Mona Lisa at the Vending Machine
The LinkedIn post includes this gem: “Boasting that the new Gemini model can create 3000 lines of code from a single prompt… is not the revolution you think it is.”
Let me tell you what 3000 lines of AI-generated code actually is.
It’s a photograph of the Mona Lisa, reprinted with laser precision. Perfect copy. Flawless reproduction. Completely identical to the original in every measurable way.
And worth absolutely nothing.
Because it didn’t solve a new problem. It didn’t create new value. It copied existing patterns and spit them out faster. That’s not innovation. That’s automated reproduction.
You know what is valuable?
One line of code that solves a real human problem.
One line that does something that wasn’t being done before. One line that makes someone’s life meaningfully better. One line that opens a door that was previously closed.
If you measured your value by lines of code produced, AI didn’t devalue you. You already had the wrong metric.
The author is outraged that AI can generate massive amounts of code quickly. But he’s revealing what he thinks software development is: typing. Volume. Production of text files.
That was never the job.
The job is understanding what problem needs solving. Designing a solution that fits the context. Making architectural decisions that will hold up over time. Evaluating whether what you built actually works. Maintaining and evolving systems as requirements change.
Let me be very clear about where value lives:
Writing the 3000 lines? That’s low-value work.
Knowing what 3000 lines to write, why, and how they fit into the broader system? That’s high-value work.
AI handles the low-value work. Developers who understand the distinction get elevated to the high-value work.
Developers who don’t understand the distinction are angry that their “skill” is being commoditized.
But it was always the wrong skill to be proud of.
The Arrogance of Calling It “Deskilled”
Let’s talk about that phrase: “deskilled labor.”
The author claims AI will “replace us all with deskilled labor.” As if the people who come after us, who work differently than us, who use different tools than us, are somehow less skilled.
How dare we.
Someone who can effectively work with AI agents needs to:
- Describe a genuinely human problem in a way a system can understand
- Dissect complex requirements into actionable components
- Evaluate whether a solution actually solves the problem
- Integrate AI-generated code into a broader system architecture
- Iterate and refine based on real-world constraints
- Maintain and evolve systems over time
And you want to call that “deskilled labor”?
We value ourselves too highly. We’ve confused our specific implementation of problem-solving with problem-solving itself. Because we solve problems by typing code, we think that’s the only valid way.
The skills required to work effectively with AI are not lesser skills. They’re different skills.
And in many cases, they’re higher-order skills than syntax memorization and boilerplate generation.
Working with AI requires deeper problem understanding. You can’t prompt what you don’t understand. It requires better system design thinking—AI writes code, you design systems. Sharper critical evaluation—can you spot when AI is wrong? Stronger domain expertise—you can’t effectively direct what you don’t deeply know. Clearer communication—can you articulate exactly what you need?
These are the skills that were always supposed to matter. AI just made it impossible to hide behind syntax memorization and call that expertise.
Cognitive psychologist Lisanne Bainbridge identified this in 1983 with what she called the “Ironies of Automation”: the more efficient an automated system becomes, the more crucial human contribution becomes.[2] Automated systems multiply errors until they’re fixed or shut down. The fatal crash of Air France Flight 447 demonstrated this tragically—automation failure put pilots in a situation they weren’t prepared to handle.
The “deskilled” accusation reveals the accuser’s values.
You think skill equals typing code. You think expertise equals remembering syntax. You think value equals manual implementation.
Wrong. It never was.
AI just exposed your mismeasurement.
The Railroad Firemen Were Proud Too
But let me acknowledge something important: Skills are being commoditized. This is nothing new.
There are no real railroad firemen anymore.
Those men had serious skills. Skills honed in the heat of a blazing furnace, with coal dust in their lungs and the rhythm of the rails under their feet. Skills that were absolutely necessary to push cargo and people into the American West. Difficult, dangerous, proud work.
They deserved to be proud of it.
So where did they go?
Some never worked in another field. The skill was too specific, the transition too hard, and they aged out or retired early. Some moved on. They took what they learned at the mouth of the firebox—heat management, timing, pressure systems, logistics—and found new ways to apply it.
Time moved on. Diesel locomotives made them obsolete.
We remember them fondly now. Some people even build model railroads in celebration of that era. Museums preserve their tools and their stories.
Does that devalue the skill of knowing how to feed a coal-fired engine?
Absolutely not.
But every moment, somewhere, a skill you passionately developed can become obsolete. And when that happens, you have a choice.
If we don’t grow, we die.
It’s not about your skills being replaced—unless you let it be. The railroad firemen were a proud, tough breed. But the ones who adapted found new ways to contribute. The ones who didn’t became historical curiosities instead of ongoing contributors.
Applied to coding, the pattern is the same.
Writing perfect boilerplate? Memorizing syntax? Manually implementing patterns everyone’s already solved? Those are the fireman skills: specific, valuable once, being replaced now.
Problem-solving? System architecture? Domain expertise? Critical evaluation? Those are the meta-skills. More valuable than ever.
The question isn’t whether your current implementation skills will remain relevant. They won’t.
The question is: Are you a railroad fireman demanding coal-fired engines stay relevant? Or are you learning what comes next?
Because the train is moving either way.
Let the Mockers Fuel You
The author writes about “the annoying c-suite exec who always looked down on you” using AI to “justify their belief that your contribution was always overrated.”
I’m not going to lie to you. The mockery is real.
There are executives who never respected developers. Who always thought engineering was overpaid, overvalued, and overhyped. Who are using AI as the latest excuse to express their contempt.
That c-suite exec who looked down on you? They’re celebrating right now. Forwarding articles about Gemini generating 3000 lines of code. Talking about “efficiency gains” and “rightsizing the team.”
So what are you going to do about it?
You have two choices.
Let it defeat you. Let it confirm every fear about being disposable, about your work not mattering.
Or let it drive you. Let it fuel your determination to prove them wrong. Build something they can’t ignore. Create value they can’t dismiss. Demonstrate capabilities that make their contempt irrelevant.
The mockery isn’t coming from the technology. It’s coming from people who never respected the work in the first place.
AI didn’t create that contempt. It just gave them a new excuse to express what they already felt.
If your CEO is using AI to mock you, the problem isn’t AI. The problem is you’re working for an asshole.
And now you’re letting them define what has value?
Mockers burn out and move on. Bad leaders cycle through companies, leaving wreckage. The question is: Will you still be standing when they’re gone?
You know what’s real? Solving problems that haven’t been solved before. Building things that make a genuine difference. Creating value that didn’t exist.
If bad leadership mocks you while you do that work, that’s on them. Not on you. And definitely not on the tool.
Do You Control Your Own Destiny?
But here comes the real objection. I can hear it now.
“Sure, that sounds nice. But companies will just use AI to fire 80% of developers and make the remaining 20% supervise AI at the same salary. The person who benefits from your ‘elevated creativity’ is the CEO who gets to cut headcount.”
Okay. Let me ask you a question.
Do you control your own destiny? Or are you content to follow small-minded leaders down the path of destruction, complaining the whole way?
This is a bootstrap moment. Will you be defeated by the leaders around you? Or will you rise to lead the next wave?
Let’s be very clear about the two stances you can take.
The passive victim stance:
- Waiting for companies to “let” you be elevated
- Hoping bad CEOs won’t exploit the technology
- Demanding protection from change
- Waiting for someone—a union, a regulation, a benevolent leader—to save you
The agency stance:
- Learning to leverage AI right now, without waiting for permission
- Building things that were impossible before
- Creating your own opportunities
- Becoming indispensable because you can do what others can’t
- Proving value through what you create, not through what you demand
Here’s what the author’s framing misses entirely: It assumes you’re an employee at the mercy of executives. That you’re powerless. That things happen to you, and all you can do is endure or complain.
But developers have more agency than almost any other profession.
You can build side projects. Switch companies. Start businesses. Work independently. Prove value through output, not credentials. Learn new tools without asking anyone’s permission.
The real question isn’t whether companies might use AI to cut headcount. Some will.
The real question is: Are you going to let fear of what companies might do prevent you from seeing what you could do?
Here’s the harsh truth.
If your only value proposition is “I can write code faster than AI,” you’re right to be worried.
But if your value is “I can solve problems AI can’t even see,” you’re going to be fine. Better than fine. You’re going to be in demand.
Stop Making Excuses. Own Your Choices.
“Easy for you to say. Not everyone has the privilege to ‘bootstrap’ or start side projects. Some people have families, mortgages, responsibilities.”
“This isn’t about democratization. It’s just consolidation of wealth and power.”
Let me address both excuses at once.
My father was a school teacher. In 1984, it took everything he had to buy our first computer—an Apple //e. That’s what “access” looked like then. A massive financial sacrifice just to have a machine you could develop on.
Today? Anyone with a computer can use OpenCode and free AI models. Right now. The barriers are lower than they’ve ever been in the history of computing.
And yet the excuse remains: “I don’t have the privilege.”
I have non-Hodgkin’s lymphoma. I’m going through chemotherapy. I have a mortgage and three kids depending on me. And I built passmaker.io in the middle of that—while my body is being poisoned to keep me alive, while I’m exhausted and nauseous and scared.
If I can do it fighting cancer with a family depending on me, what’s your excuse?
I’m not dismissing real constraints. Life is hard. Circumstances are difficult. I know that better than most. But this isn’t about having perfect circumstances. It’s about having will.
The question has always been: What will you do with what you have?
Now, about that “consolidation of wealth and power” claim.
The tools are being democratized. Whether you use them democratically is up to you.
Consumer democratization: Use Claude/GPT/Gemini. You’re dependent on their infrastructure, pricing, and decisions. You have access, but not control.
Real democratization: Build it. Host it. Understand it. Modify it. Actual independence.
Both paths are valid. But own your choice.
I use Claude but despise the lock-in. I hate Google selling my data. So I host my own mail system and built a server in my basement with two Tesla P40 GPUs. Why? To understand what it takes to not be dependent.
I’m an Apple fanboy because their products work and I don’t want to waste time fixing things I don’t care about. Do they deserve to profit? Hell yes. But I’ll never claim “they just get richer and I have no choices.” That’s choosing victimhood when you have options.
If I wanted, I could use Android. Linux. Self-host everything and never touch a proprietary service.
It’s still about choices.
Start where you are. Use free tiers. Learn what’s possible. Build something real.
Then decide: Stay on commercial platforms and focus on building, or go deeper into self-hosting and independence. Both are valid.
The only invalid position is this: Using commercial platforms by choice while complaining you have no choice. Claiming it’s “just consolidation” while doing nothing to change it.
Are you making informed choices? Or complaining about consolidation while actively choosing convenience and blaming others?
If it’s the latter, you’re not a victim of consolidation.
You’re a participant in it.
And you’re using “privilege” and “consolidation” as excuses to avoid the work of adaptation.
The tools are free. The barrier is gone. Stop making excuses. Start making things.
Reality Check
I’m not going to pretend everything is fine.
Some developers will lose their jobs. Companies will use AI to justify layoffs—some already are. Toxic leadership exists and will exploit any tool to consolidate power and cut costs.
The transition is uncomfortable. Sometimes painful. There’s real grief in watching skills you worked hard to develop become less relevant. Not everyone is in a position to adapt quickly.
These concerns are real. I’m not dismissing them.
But here’s what I want you to understand.
This has happened before. And we adapted.
When spreadsheets arrived, accountants didn’t disappear—they started doing more sophisticated financial analysis. When high-level languages arrived, assembly programmers didn’t disappear—they moved up the stack or into specialized domains. When package managers arrived, we didn’t become “library babysitters”—we started solving bigger problems.
The pattern is always the same.
Lower-level implementation gets automated. Higher-order thinking becomes more valuable. People who adapt thrive. People who refuse to adapt struggle.
A comprehensive Brookings Institution study analyzing automation from 1980 to 2016 found that only 25% of jobs face “high risk” of automation—where more than 70% of tasks could be automated.[3] The remaining three-quarters involve tasks requiring judgment, creativity, and social intelligence that resist automation. The World Bank’s 2019 analysis confirmed that historically, technology creates more jobs than it destroys, though the nature of those jobs transforms.[4]
AI isn’t special. It’s just the next turn of that wheel.
But here’s what’s different this time: You have more agency than you think. The tools are more accessible than ever before. The barrier to entry for learning is lower than it’s ever been in the history of computing.
Even manufacturing’s celebrated “lights-out” factories—where robots build other robots—still require human workers for quality control, exception handling, and maintenance after two decades of operation.[5] Facebook’s AI systems labeled videos of Black men as “primates” in 2021,[6] demonstrating that automated systems still lack the contextual understanding human judgment provides.
AI won’t eliminate developers. It will make human judgment more critical, not less.
And if you’re in an abusive employment situation with toxic leadership that makes you feel powerless? If you’re working for that c-suite exec who always looked down on you and is now using AI as justification for their contempt?
The problem isn’t AI. The problem is where you work.
And waiting for that to change without taking action—that’s just choosing to stay a victim.
What to Do
If You’re Scared
Start learning now. Use OpenCode with free models. Build something small. See what’s possible. Don’t wait for your company to train you. Don’t wait for permission.
Focus on problems, not code. What human problems can you solve? That’s where value lives. That’s what AI can’t replace.
Build your meta-skills. Problem understanding. System design. Critical evaluation. Domain expertise. The things that let you know what to build and whether it works.
Prove your value through output. What you can create matters more than years of experience or impressive credentials.
If You’re Angry
Channel it. Let it drive you, not defeat you. Use that energy to build something. To learn something. To become something more than you were.
Evaluate your situation honestly. Are you in a toxic environment? If so, that’s the real problem. Not AI. Fix that first, even if it means leaving.
Take agency. Build something. Switch companies. Start something new. Stop waiting for permission from people who don’t respect your work anyway.
Stop letting mockers define your value. Bad leaders will always find an excuse to dismiss good work. Don’t give them power over your self-worth.
If You’re Organizing
Organize around building, not protecting against change.
Help each other learn. Share knowledge. Build together. Lift each other up.
The energy you spend bemoaning your situation could be spent empowering others. Creating collectively. Showing what’s possible when developers support each other instead of competing in a race to the bottom.
Stop collective victimhood. Solidarity doesn’t mean waiting for someone to save you. It means saving yourselves together.
Do All of It
The energy you spend making excuses is the same energy you could use to organize tech workers around learning and growth. To empower others who are struggling. To build your dream application. To learn new tools and approaches. To create opportunities that didn’t exist before.
We are all better collectively. But collective action doesn’t mean waiting. It means building together.
The Choice
The railroad firemen were proud. They were skilled. They were necessary.
Then they weren’t.
Some adapted. They took what they learned about heat, pressure, timing, and logistics and found new ways to apply it. They moved on. They contributed to the next era of industry and infrastructure.
Some didn’t. They demanded the world keep using coal-fired engines. They complained about the people who “betrayed” them by building diesel locomotives. They became historical curiosities instead of ongoing contributors.
Which one will you be?
AI agents aren’t a betrayal. They’re not a conspiracy. They’re not the end of software development.
They’re a tool. Like every tool that came before.
The question isn’t whether they’ll change your work. They will.
The question is what you’ll do about it.
Adapt or make excuses. Build or blame. Take agency or wait for someone to save you.
The train is moving. You can get on board, or you can stand on the platform complaining that trains are a betrayal of the proud tradition of walking.
But either way, the train is moving.
Choose wisely.
Have thoughts? Disagree completely? Built something amazing with AI? I’d love to hear about it. The conversation matters more than agreement.
References
1. Kirilenko, A., Kyle, A. S., Samadi, M., & Tuzun, T. (2017). “The Flash Crash: High-Frequency Trading in an Electronic Market.” Journal of Finance, 72(3), 967–998. Available at SSRN: https://ssrn.com/abstract=1686004
2. Bainbridge, L. (1983). “Ironies of Automation.” Automatica, 19(6), 775–779.
3. Muro, M., Maxim, R., & Whiton, J. (2019). “Automation and Artificial Intelligence: How machines are affecting people and places.” Brookings Institution. Retrieved from https://www.brookings.edu/articles/automation-and-artificial-intelligence-how-machines-affect-people-and-places/
4. World Bank (2019). World Development Report 2019: The Changing Nature of Work.
5. Wikipedia: “Lights out (manufacturing)” — https://en.wikipedia.org/wiki/Lights_out_(manufacturing)
6. Reuters (2021): “Facebook apologizes for AI labeling video of Black men as ‘primates’” — https://www.reuters.com/technology/facebook-apologizes-ai-labeling-video-black-men-primates-2021-09-03/
