Everyone talks about feedback loops in AI—iterate, fine-tune, get signal into the system. But maximizing AI effectiveness requires something deeper: understanding the difference between how humans internalize feedback and how AI does. Our natural instincts for collaboration (built for human relationships) often mislead us when working with AI. We project understanding, agreement, and intent onto systems that simply process tokens. Meanwhile, Alex Karp argues humanities degrees are worthless in the AI era. The counter-view: AI will create demand for authentically human things, and the humanities are the infrastructure for understanding what we uniquely bring to the table.


Everyone is talking about feedback loops in AI. Iterate on prompts. Fine-tune with RLHF. Build evaluation datasets. Get signal into the system.

But here’s what most miss: maximizing AI effectiveness requires seeing the difference between how humans internalize feedback and how AI does.

That means understanding ourselves better.

Which means the humanities—psychology, sociology, philosophy, anthropology—aren’t obsolete. They’re the study of us, and why we matter.


I catch myself treating AI like a colleague. I’ve learned to give notes, receive criticism, calibrate trust. Those instincts are valuable. But they don’t directly map to AI.

I assume the AI “understands” context the way a teammate does. It doesn’t: it processes tokens. I read silence as agreement when it’s just the absence of refusal.

I expect the AI to “learn” from our conversation and carry it forward. It won’t, not unless I explicitly preserve the context. I read positive affirmation as encouragement when it’s just preprogrammed niceties.

I forget the AI doesn’t have my interests in mind; it has objectives, and they’re not the same thing, especially if I haven’t been careful and explicit about matching my interests to its objectives. I project intent onto outputs that contain none, just prediction.
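To make the memory point concrete, here is a minimal sketch using the OpenAI Python SDK (the model name, system prompt, and ask() helper are illustrative; any chat-style API works the same way). The model never remembers anything on its own; it only sees whatever message list I send on each call.

```python
# A chat model is stateless between calls: its "memory" is just the
# message list I choose to resend each turn. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a blunt editing partner."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = resp.choices[0].message.content
    # Skip this append and the next call forgets this exchange ever happened.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing is learned here. If I drop the history, the “relationship” resets to zero.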

I give vague feedback and expect nuanced improvement. Humans read between the lines. AI needs precision.
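A quick contrast, with an invented revision request to make it concrete (the ask() helper is the sketch above):

```python
vague = "Make this better."  # a colleague could infer intent; a model can't

precise = (
    "Cut the intro to two sentences, keep the Karp quote verbatim, "
    "and rewrite the third paragraph in active voice."
)

# ask(precise) gets a targeted revision; ask(vague) gets a guess.
```

The model acts only on what is stated. Everything left implicit is left out.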

Some instincts still serve me: iteration improves outcomes, clear communication beats unclear, feedback should be specific, trust builds through reliability.

The gap between where instincts help and where they mislead is where AI effectiveness is won or lost. I have to notice myself slipping into human-to-human mode when I’m actually working with something fundamentally different.


Alex Karp, CEO of Palantir, made waves saying humanities degrees are worthless in the AI era. Go learn to code. The machines are coming for the poets.

A LinkedIn response from John Johnson caught my attention:

The real problem isn’t AI making the humanities obsolete; it’s that elite colleges abandoned the “real” humanities for “human-cog training camp.” AI will create demand for authentically human things, much as GMOs accidentally created the massive organic industry.

We need to study humans more intensely because we’re building non-human systems designed to model human creativity and to interact with us almost as humans would.

The more we understand what makes human feedback transformative, the better we can design AI that complements us rather than replaces us.

Human feedback is transformative because of identity, memory, meaning-making, context, and relationship.

Karp saw AI making human skills obsolete. The deeper view: AI reveals how little we understood about human skills all along.

The humanities were never “nice to have.” They’re the infrastructure for understanding what we bring to the table.


If that reframe resonates, you have the core insight.

Next post: the deeper architecture—how human and AI internalization actually differ (Freud and Maslow have something to say), and what it means for practitioners building these systems.

If any of this resonated—or challenged you—I’d love to hear from you.