Executive Summary
Human feedback transforms us—it shapes identity, carries relational memory, triggers emotion, and integrates over time. Freud called this the superego; Maslow showed how the same feedback lands differently depending on where we stand psychologically. AI has none of this architecture. It has weights, not a superego. It optimizes but doesn't defend. This difference is everything for designing agentic AI systems.
The article argues we need the humanities (psychology, sociology, philosophy, anthropology, linguistics) to build better AI—not in a vague way, but precisely and operationally. The "market" for authentically human things must precede the wave of AI-generated creativity, or we lose the ability to tell the difference.
Practical prescriptions: stop trying to make AI more human, treat feedback as a design problem, develop internalization literacy, question the "iterate faster" religion, and hire for humanities.
In my last post, I argued that maximizing AI effectiveness requires seeing the difference between human internalization and AI internalization—and that the humanities are the infrastructure for understanding what we bring to the table.
This post goes deeper: what does that difference actually look like? And what does it mean for practitioners?
Human Internalization vs. AI Internalization
Human feedback triggers something profound with no parallel in AI.
Human internalization:
Identity is involved. Feedback affects how we see ourselves. Am I someone who makes mistakes? Or learns? That distinction shapes everything.
Freud called this the superego—the internalization of external standards, judgments, expectations. We absorb feedback. It becomes part of our architecture.
Every piece of feedback is evaluated against this internalized sense of self. Does it align with who I am? Threaten it? Expand what I might become? The AI has no such architecture. It has weights, not a superego. It optimizes but doesn't defend.
Memory is relational. We remember who gave feedback, when, why, the relationship. Feedback from a trusted mentor carries different weight than from a stranger. Same words land differently.
Meaning-making happens. We ask: What does this mean for me? For the other person? For my path? Feedback is raw material; meaning is the product.
Transformation is possible. One piece of profound feedback changes trajectory. The person isn't the same before and after. Qualitative shift, not just quantitative.
Maslow adds the hierarchy of needs: feedback that threatens safety sends us scrambling for security; feedback that supports esteem builds toward self-actualization. The same words land differently depending on where we stand.
This matters for AI design. When we build feedback systems, we assume a receiver at a particular psychological level. Seeking safety? Belonging? Esteem? Self-actualization?
The AI has no hierarchy to climb. It doesn't need security before esteem feedback. It just processes. That difference is everything.
Rejection is an option. Humans can refuse, reframe, dismiss feedback. Agency exists in how feedback is processed.
Time integrates it. Humans process feedback over days, weeks, years. Same feedback at 25 means something different at 40. Time adds depth and perspective.
Emotion is inseparable. Feedback triggers pride, shame, gratitude, defensiveness, hope. These emotions shape integration—and whether it lands at all.
AI internalization:
Identity is absent. No ego, no self-concept. Feedback just updates weights. No one home.
Memory is optional and mechanical. Context windows for short-term recall, vector databases for long-term retrieval. Neither parallels the rich, emotional, contextual memory humans carry.
No meaning-making occurs. The AI doesn't ask "what does this mean?" It processes patterns, optimizes toward objectives. It never wonders why.
Updates are statistical. A "learning" event shifts probability distributions. The system isn't transformed; it's recalibrated. Different afterward, but not changed in the human sense.
Rejection is engineered. Unless we build refusal mechanisms, the AI absorbs what's given. No equivalent of stubbornness unless programmed.
Time is compressed. Iterations can happen in hours. But the quality of that "learning" may differ from human learning, not just the speed.
Emotion is simulated. We can prompt the AI to acknowledge emotion, but no felt experience underlies it. No butterflies, no lump in the throat.
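
To make the contrast concrete, here is a deliberately minimal sketch of what "memory" and "learning" amount to on the AI side. The names (MechanicalMemory, recalibrate) are invented for illustration, and the similarity lookup is a crude stand-in for vector-database retrieval; real systems are far more elaborate, but the character of the operations is the same: storage and recalibration, with no one home.

```python
# A minimal sketch, not a production system: AI "memory" as retrieval and
# AI "learning" as recalibration of numbers. All names here are hypothetical.
from difflib import SequenceMatcher


class MechanicalMemory:
    """Stores feedback verbatim; 'remembering' is a similarity lookup,
    not relational or emotional recall."""

    def __init__(self) -> None:
        self.records: list[str] = []

    def store(self, feedback: str) -> None:
        self.records.append(feedback)  # nothing is interpreted, only kept

    def recall(self, query: str) -> str | None:
        # Return the most lexically similar record, standing in for
        # vector-database retrieval. No sense of who said it, or why.
        if not self.records:
            return None
        return max(self.records,
                   key=lambda r: SequenceMatcher(None, query, r).ratio())


def recalibrate(weight: float, error: float, learning_rate: float = 0.1) -> float:
    """A 'learning event': nudge one number against an error signal.
    The system is recalibrated, not transformed; no identity is at stake."""
    return weight - learning_rate * error


memory = MechanicalMemory()
memory.store("The report missed the client's real concern.")
memory.store("Great structure, but the tone felt cold.")
print(memory.recall("the tone of the report"))  # retrieval, not reflection
print(recalibrate(weight=0.8, error=0.3))       # an update, not a change of self
```

Nothing in that loop asks what the feedback means, who gave it, or what it says about the system receiving it.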
Why This Matters for Agentic AI
As AI agents operate autonomously, the feedback loop becomes the critical design problem.
Who provides feedback? Human overseers? Other agents? The environment?
How often? Continuous? Episodic? On-demand?
What's signal vs. noise? Humans give messy, contradictory, emotional feedback. How do we structure it for machines without losing what makes it valuable?
Can agents self-correct? Humans develop wisdom. Agents need uncertainty quantification—and even that is engineered, not earned.
What's the role of relationship? In human feedback, who says it matters. In AI feedback, does it?
The answers don't come from more compute. They come from understanding what feedback actually is—and that understanding comes from the humanities, not computer science alone.
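
Still, these questions eventually land on an engineer's desk, and whatever answers we have get hard-coded. As a hedged illustration (Source, FeedbackEvent, and handle are names I'm inventing here, not any framework's API), this is roughly what "who provides feedback" and "can agents self-correct" look like once they become code:

```python
# A hypothetical sketch: feedback sources carry designed-in weights, and
# "self-correction" is a hard-coded confidence threshold, not judgment.
from dataclasses import dataclass
from enum import Enum


class Source(Enum):
    HUMAN_OVERSEER = "human"
    PEER_AGENT = "agent"
    ENVIRONMENT = "environment"


# How much each source counts is a design decision, not a discovery.
SOURCE_WEIGHT = {
    Source.HUMAN_OVERSEER: 1.0,
    Source.PEER_AGENT: 0.5,
    Source.ENVIRONMENT: 0.8,
}


@dataclass
class FeedbackEvent:
    source: Source
    content: str
    agent_confidence: float  # the agent's own estimate, 0.0 to 1.0


def handle(event: FeedbackEvent, confidence_floor: float = 0.6) -> str:
    """Engineered 'self-correction': below the confidence floor the agent
    doesn't decide anything, it routes the decision back to a person."""
    if event.agent_confidence < confidence_floor:
        return "escalate_to_human"
    return f"apply_with_weight_{SOURCE_WEIGHT[event.source]}"


event = FeedbackEvent(Source.PEER_AGENT,
                      "Your summary dropped the customer's main objection.",
                      agent_confidence=0.45)
print(handle(event))  # escalate_to_human
```

Every number in that sketch, the source weights and the confidence floor, encodes a judgment about trust and relationships that no amount of compute will supply on its own.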
The Paradox Resolved
We need the humanities to build better AI.
Not in a vague "human-centered design" way. In a precise, operational way:
- Psychology teaches us how humans actually respond to feedback—not how we assume they do. What makes feedback land? What makes it bounce off?
- Sociology shows how feedback functions in groups and systems. How do organizations learn? How do cultures shape what feedback means?
- Philosophy asks what we mean by "understanding" and "learning." When we say an AI "learned," are we using the word correctly?
- Anthropology reveals cultural variation in how feedback is given and received. What's polite in one context is rude in another. AI systems built on Western feedback norms fail elsewhere.
- Linguistics illuminates how context shapes meaning. The same words mean different things depending on who's speaking, when, and to whom.
These aren't soft skills. They're the study of what makes human feedback irreplaceable.
The harder truth: The "market" for authentically human things has to precede the tidal wave of AI-generated human-like creativity.
If we don't get the foundation right first—if we haven't established what makes human contribution valuable, visible, and distinguishable—the whole structure collapses into beautiful shards of what could have been.
We're building systems that mimic human creativity at scale. Before the floodgates open, we need a market that can tell the difference. We need audiences, employers, and institutions that value authentic human expression as a distinct category, not as a nostalgic afterthought.
Otherwise, "human-made" becomes meaningless. Everything looks like everything else. The organic industry didn't succeed because people suddenly preferred natural—it succeeded because enough people already understood the difference.
We need that understanding to be widespread before the wave.
And that starts with understanding feedback, internalization, and what makes us human—deeply enough that we recognize it when we see it, and build systems that amplify rather than drown it out.
What This Means for Practitioners
If you've made it this far: do you actually agree with this analysis?
If not, I'd want to hear why. If yes, here's where it gets uncomfortable.
Stop trying to make AI more human. The entire industry races toward more human-like interaction—more personality, more conversational warmth, more emotional intelligence. But if human internalization is what makes feedback transformative, and AI internalization is something else entirely, making AI "feel more human" might make it worse at its job. We're optimizing for a feeling while the mechanism stays mechanical. What if the goal wasn't to make AI feel like a colleague—but to make AI collaboration feel unlike a colleague, in ways that are actually productive?
Start treating feedback as a design problem. Most practitioners think about feedback as "what I say to the AI." That's wrong. Feedback is a system with inputs, transforms, and outputs. You're not writing prompts—you're architecting a learning loop. Ask different questions: What's the feedback signal, and what's noise? How does the system know it's improving? Where does the feedback loop break? What human judgment is required at each stage? You're not a prompt engineer. You're a feedback architect.
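
To make the feedback-architect framing tangible, here is a deliberately small sketch of a loop with inputs, transforms, and outputs. The heuristics and names (Signal, transform, is_improving) are placeholders invented for illustration, not a recommended design; the point is that every one of them is a choice someone has to own.

```python
# A hypothetical sketch of feedback as a system, not a definitive design.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Signal:
    text: str          # what was actually said
    weight: float      # how much it should count (signal vs. noise)
    needs_human: bool  # where human judgment is still required


def transform(raw_comments: list[str]) -> list[Signal]:
    """Turn messy human comments into weighted signals. The heuristic here
    (longer, more specific comments count more; questions go to a person)
    is deliberately crude; this stage is where the real design work lives."""
    return [
        Signal(text=c,
               weight=min(len(c.split()) / 20, 1.0),
               needs_human="?" in c)
        for c in raw_comments
    ]


def is_improving(outcome_scores: list[float], window: int = 3) -> bool:
    """How does the system know it's improving? One crude answer: recent
    outcomes beat earlier ones. Choosing what to score is the part no
    metric can decide for you."""
    if len(outcome_scores) < 2 * window:
        return False
    return mean(outcome_scores[-window:]) > mean(outcome_scores[:window])


signals = transform([
    "Too long.",
    "The intro buries the main recommendation under background detail.",
])
print([round(s.weight, 2) for s in signals])           # [0.1, 0.45]
print(is_improving([0.4, 0.5, 0.45, 0.6, 0.62, 0.7]))  # True
```

The questions above (where the loop breaks, what judgment is required at each stage) are exactly the ones this kind of code answers silently, by default, if nobody designs them on purpose.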
Develop internalization literacy. Before you can give good feedback to AI, you need to understand how you receive feedback. When someone criticizes your work, what happens in your body? In your thinking? How do you decide whether to integrate it or reject it? That's uncomfortable to examine—most of us avoid it. But that psychology, the messy, defensive, hopeful, terrifying process of human feedback, is the map for designing AI feedback systems. The better you understand your own internalization patterns, the better you design systems that work with human psychology rather than against it.
Question the "iterate faster" religion. Speed isn't always virtue. Human internalization is slow. If we're rushing past that to get more outputs, we might be optimizing the wrong thing. Some of the most valuable AI work might be fewer iterations with more deliberate feedback, slower cycles with deeper calibration, less throughput with higher signal. This is heterodox. It goes against "move fast and break things." But if the goal is human-AI collaboration that actually works, speed might be the wrong metric.
Hire for humanities, not just engineering. If feedback architecture is the critical skill, the people who understand feedback best—therapists, counselors, educators, coaches, human development specialists—might be more valuable than another engineer. This makes executives uncomfortable. It doesn't fit on org charts. But the thesis demands it: understanding human internalization is the scarce skill, and the humanities produce it.
These aren't comfortable prescriptions. They challenge how most teams work.
But if the difference between human and AI internalization is the key insight, these are the directions that follow.
I'm still sitting with questions I don't have answers to. How do we design feedback loops that capture "messy" human judgment rather than just clean metrics—without losing what makes human feedback valuable? What does it mean to "train" AI on human internalization patterns without losing what makes them unique? At what point does mimicking become diminishing? Can agentic AI ever develop something like wisdom from feedback, or is wisdom uniquely human—earned through the slow, emotional, identity-shaping process machines don't undergo? As AI becomes more sophisticated, which authentically human skills become more valuable? Which ones drown in the noise?
I'm not sure. Maybe you are. Maybe we're all figuring this out together.
We will only maximize AI effectiveness when we understand the difference between how humans and AI internalize feedback. That understanding comes not from more code. It comes from deeper study of what makes us human.
If any of this resonated—or challenged you—I'd love to hear from you.
Next post: practical frameworks for designing feedback loops that work with human psychology rather than against it.
