I wrote a post telling developers to stop being victims of AI. Then I realized I was the one who missed the point.
Jim Amos had warned about AI agents—betrayal, consolidation of power, the end of meaningful work. I read it as fatalism and fired back with agency, adaptation, and resilience.
Then I sat with it longer. Jim wasn’t being fatalistic. He was naming a threat I couldn’t see clearly.
This is what I learned by defending agency too hard.
I Was Right About the Psychology
Words like “betrayal” and “crushed” and “you’re not invited to this revolution” do communicate defeat. They risk making people feel powerless before they’ve even tried to act.
I meant every word I wrote about that. Learned helplessness is real. Identity collapse is real. The psychological move from “I’m valuable” to “I’m disposable” happens faster than the actual displacement.
Protecting against that collapse isn’t naive. It’s necessary.
But Jim wasn’t trying to advocate for the attitudes that create the collapse. He was trying to prevent it by naming the threat early—before it gets normalized, before we wake up one day and realize we accommodated ourselves out of existence.
I heard fatalism. He meant warning.
That’s on me.
I Underweighted the Structural Reality
My whole argument was: you still have agency, skills shift rather than vanish, become indispensable through higher-order thinking.
That’s true. It’s also incomplete.
Jim’s point—the one I didn’t engage seriously enough—is that some people’s endgame isn’t “everyone adapts,” it’s oligarchic control.
I assumed the creative system needs humans because it always has. That’s a bet, not a law.
Jim is saying: what if this time is different? What if the only humans left in the loop are the billionaires at the top? What if “become indispensable” just means you get replaced slightly later while power concentrates in fewer and fewer hands?
I don’t know if he’s right. But I dismissed the question too quickly.
Because if he’s right, then my entire framework—“build anyway, learn anyway, prove you’re not disposable”—becomes a way to feel in control while the ground shifts underneath you.
That’s the thing I didn’t grasp.
We Were Arguing at Different Time Horizons
I was arguing from the present: what should I do today to stay effective, valuable, and psychologically intact?
Jim was arguing from the terminal trajectory: where does this all lead if the incentives don’t change?
Both matter. Neither negates the other.
But I treated his long-term structural claim as if it were a short-term morale problem. That was dismissive, even if I didn’t mean it to be.
He wasn’t saying “stop building.” He was saying “name what’s actually happening before we lose the ability to resist it.”
I wasn’t saying “ignore the threat.” I was saying “don’t internalize disposability while you still have room to act.”
Different questions. Both necessary.
The Daniel Analogy Still Holds—But It’s Harder Than I Made It Sound
I keep coming back to Daniel in Babylon.
He didn’t seek martyrdom. He learned the system. He became valuable. He excelled. And when the system demanded he compromise what mattered most, he refused.
That’s still the model. But here’s what I glossed over in my original post:
Daniel didn’t know if his refusal would work.
He stood in front of the king knowing he might die. He prayed facing Jerusalem knowing the lions were real. He didn’t have a guarantee that faithfulness would lead to rescue.
He refused anyway.
That’s not triumphalism. That’s something harder: deliberate faithfulness under structural pressure, without certainty of safety.
My original post made it sound too easy. Like “just be excellent and you’ll be fine.”
But Jim’s warning forces the real question: what if excellence isn’t enough? What if the system doesn’t care?
Then faithfulness becomes something deeper than strategy. It becomes defiance of the story that says you don’t matter.
And maybe that’s the real point.
Sacred Humanism as the Missing Framework
I’ve been working through this idea lately: sacred humanism—the belief that human dignity is real and non-negotiable because it’s grounded in something beyond us.
Not secular humanism (humans matter because we say so).
Not theocracy (humans matter only if they obey).
Something else: humans matter because they bear the image of God, and God dignified humanity by becoming human.
That framework changes everything about this AI moment.
Because if human cognition is sacred—not just useful, but sacred—then building systems designed to bypass human thought entirely isn’t innovation. It’s desecration. It’s why “3000 lines of code from a single prompt” lands like mockery instead of progress. Years of learning, debugging, late nights, breakthroughs—flattened into a party trick for people who never respected the craft in the first place.
Jim was reacting to that desecration. He felt it in his bones before he had words for it.
I was defending agency because I believe humans aren’t reducible to cost functions or productivity metrics.
We were both circling the same truth: something sacred is being mishandled, and most people don’t even notice because the language is all about efficiency and progress.
But here’s what I missed: you can’t defend human dignity by pretending the threat isn’t structural. And you can’t name the structural threat without offering people a way to act that isn’t just rage or resignation.
Jim gave the warning. I retorted with “buck up and do better.”
That was tone-deaf.
What I Should Have Written the First Time
If I were writing my original response today, here’s what I’d add:
Jim is right that the incentives are misaligned. Power is consolidating at the top. The rhetoric of “democratization” often obscures who actually benefits—and it’s not the middle class. That’s not paranoia. That’s pattern recognition.
But here’s what I still believe: the moment you accept that story as inevitable, you’ve already lost. The only way to stay effective under that kind of pressure is to refuse the psychological surrender before the structural endgame arrives.
That’s not denial. That’s Daniel. He knew the empire wanted to erase him. He refused to internalize that erasure. The system didn’t save him. His faithfulness didn’t guarantee safety. But it preserved his integrity.
So yes: the structural forces are real. And yes: you still get to decide what you won’t trade away. Both are true.
What This Means for How I Build Now
I’m still an AI enthusiast. I’m still building with these tools. I still believe in their potential.
But Jim’s warning clarified something I was avoiding:
Not all AI development is equal. Some of it elevates human cognition. Some of it erases it.
I can’t just build “cool stuff” and assume it’s neutral. I have to ask:
- Does this tool amplify human judgment or replace it?
- Does this system treat people as cost centers or as collaborators?
- Am I building something that makes humans more capable, or just more disposable?
Those aren’t abstract questions anymore. They’re design constraints.
Because if sacred humanism is real—if human dignity actually matters—then I don’t get to hide behind “I’m just building tools.”
Tools shape what we think is possible. They shape what we think people are for.
And I’m responsible for that, whether I wanted to be or not.
What I Owe Jim
I owe him an acknowledgment: he wasn’t catastrophizing. He was naming the threat clearly so we don’t sleepwalk into it.
I owe him the admission that my response, while psychologically sound, was structurally incomplete.
And I owe him this: I’ll take the long-term trajectory seriously, not just the present-day tactics.
Because he’s right that if we don’t push back—collectively, structurally, not just individually—the endgame is oligarchic control where only those at the top retain agency.
But I also owe myself this: I won’t accept that endgame as inevitable.
I’ll name it. I’ll resist it. I’ll build differently because of it.
But I won’t let it define my sense of worth before it even arrives.
The Real Test—And What Faithfulness Looks Like
AI agents aren’t the test. They’re just the current manifestation.
The test is whether you’ll let external forces—market incentives, executive rhetoric, technological determinism—define what you’re worth. Whether you’ll accommodate until there’s nothing left to trade. Whether you’ll panic and react, or decide in advance what matters and act on it.
Jim’s warning is this: power is consolidating at the top, and the people with control don’t want to share it.
My response is this: then make them work for it. Don’t internalize disposability. Don’t surrender psychologically before the fight is even over.
The space between those two—the tension between naming the threat and refusing to be defined by it—that’s where faithfulness lives. That’s where Daniel lived.
So here’s what I’m taking from this:
Learn the system deeply. Not to capitulate, but to understand where the pressure points are. Where agency still matters. Where you can push back effectively.
Excel strategically. Not to be “indispensable” in some naive way, but to build leverage that lets you refuse without losing everything immediately.
Refuse deliberately. Not reactively. Not performatively. But with clear boundaries about what you will not trade away: judgment, authorship, moral responsibility.
Name the threat clearly. Jim’s right: if we don’t call it what it is, we’ll rationalize ourselves into compliance. Consolidation of power. Elimination of cognition. Structural betrayal. Say it plainly.
Act anyway. Not because success is guaranteed. But because faithfulness matters more than safety. Because sacred humanism demands it. Because you refuse to be psychologically conquered.
Build differently. Systems that elevate human cognition, not erase it. Tools that amplify judgment, not replace it. Technology that dignifies people rather than treating them as disposable.
That’s not naive. That’s not denial. That’s not fatalism.
That’s standing faithful in Babylon.
If you’re reading this and you feel the tension Jim described—the sense that something sacred is being trampled, that the people in power don’t actually care about you, that this “revolution” isn’t built for your flourishing—you’re not crazy. You’re not a victim. And you’re not powerless.
But you are under pressure. Real, structural, intentional pressure.
Daniel faced worse. He stayed faithful. Not because it was safe. But because his integrity mattered more than the king’s approval.
Jim’s warning is real. And your agency is real.
Hold both. That’s how you stay faithful in the age of AI.
