I was coding across two different codebases while the Minnesota situation occupied my brain. Cognitive dissonance: a historically conservative voter watching behavior I can’t support. Social media amplifying the distress. External turmoil consuming cognitive bandwidth I needed for context tracking.

My prompts became shorter. I stopped anticipating agent mistakes. When it went down a path I should have prevented with better guardrails, my immediate thought: “The agent is being stupid.”

The agent wasn’t stupid. I was giving inadequate direction and blaming the tool.

This is the hidden cost of AI collaboration during external turmoil. Not dramatic failure (system crashes, data loss, catastrophic bugs). The insidious kind: you don’t notice your work degrading until the damage is done.

The mechanism is the same regardless of source: political unrest, health crises, economic stress, family trauma, collective grief. External turmoil consumes cognitive capacity. When you’re collaborating with AI, that degradation compounds in ways that are hard to see and harder to recover from.

Current AI systems assume you’ll recognize when you’re compromised. That you’ll catch degraded outputs, preserve important context, notice your own mistakes. But when you’re emotionally dysregulated, the capacity you need to recognize the problem is the capacity that’s already failing.

This isn’t personal weakness. It’s how human cognition works under stress. If we don’t design AI systems that account for it, we’re building tools that compound the problem exactly when people need help most.


The Pattern

The Cascade

My tone became surly. The AI responded with mechanical apologies and pattern-matching that felt like conversation with an incompetent colleague. Frustration loop intensified.

Iterations exploded. Tasks that normally took two or three exchanges took ten. I felt productive (“so much activity!”), but it was just rework. Quality became uncertain. I suspect I accepted lower quality without realizing it.

I could see some problems coming. AI context compactions would lose my work. I knew it was happening. But I couldn’t identify what mattered, preserve it, verify it worked. I watched the car crash in slow motion, unable to prevent it.

This is what emotional dysregulation looks like in AI collaboration. Not dramatic failure, but subtle, compounding degradation.

What Research Shows

These patterns aren’t unique. They’re predictable consequences of stress on human cognition.

Cognitive load theory [1]: Working memory is limited and degrades under stress. When a task is already demanding (managing context across two codebases), adding external stressors pushes you past threshold. Detailed prompts become short, reactive ones. You stop seeing second-order consequences.

Decision fatigue: Quality degrades with repeated choices. Under stress, you either accept the first option without evaluation (exhaustion) or reject everything obsessively (perfectionistic spiral). The calibrated “good enough” disappears. Pandemic research confirmed this at population scale: stress directly impaired decision-making, with effects accumulating over time and persisting after stressors ended [2].

Stress response [3]: Stress narrows attention, impairs working memory, makes everything feel urgent. I stopped referencing earlier conversation. Each iteration felt like starting over. Context collapsed. This isn’t just psychological: the prefrontal cortex, which governs reasoning, decision-making, and context maintenance, is remarkably vulnerable to stress. Under sustained pressure, prefrontal function physically degrades [4].

Metacognitive failure [5]: Your ability to monitor your own thinking degrades alongside the thinking itself. You can’t reliably know that you don’t know. This is why I blamed the AI instead of recognizing my inadequate prompts. The capacity needed to identify “I’m the problem” was the capacity stress had consumed.

Here’s the troubling paradox: interventions that increase metacognitive awareness (which should improve feedback quality) also increase cognitive load and anxiety, potentially overwhelming users already at capacity [6].

Observable Markers

These aren’t just feelings. They’re behavioral patterns consistently documented in HCI research:

  • Prompts: Shorter, missing guardrails, directive (“just do this”), emotional (“this sucks”), repetitive context
  • Iterations: Rapid revision without implementation, accepting first output without review, rejecting obsessively, rework instead of progress
  • Context: Not referencing earlier work, starting over, forgetting which context you’re in, explicitly telling AI what it should know
  • Decisions: Lost “good enough” calibration, binary thinking, blaming or over-trusting the tool, can’t articulate what’s wrong

These aren’t signs of incompetence. They’re signs of a cognitive system operating past capacity.
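
These markers only matter if they can be measured. Here is a minimal sketch of what that measurement might look like, assuming a hypothetical SessionSignals record computed from ordinary interaction logs; the field names and derived ratios are illustrative, not a validated instrument.

```python
from dataclasses import dataclass


@dataclass
class SessionSignals:
    """Hypothetical per-session measurements of the markers above."""
    mean_prompt_chars: float   # are prompts getting shorter?
    prompts_per_task: float    # are iterations exploding?
    context_references: int    # mentions of earlier work ("as we discussed...")
    restarts: int              # times the task was restated from scratch
    discarded_outputs: int     # outputs generated but never used

    def iteration_ratio(self, baseline: "SessionSignals") -> float:
        """Exchanges per task relative to your norm; > 1.0 means more rework than usual."""
        if baseline.prompts_per_task == 0:
            return 1.0
        return self.prompts_per_task / baseline.prompts_per_task

    def prompt_shrinkage(self, baseline: "SessionSignals") -> float:
        """Fraction by which prompts have shortened relative to your norm."""
        if baseline.mean_prompt_chars == 0:
            return 0.0
        return 1.0 - self.mean_prompt_chars / baseline.mean_prompt_chars
```

The specific fields matter less than the observation that every marker above has a behavioral footprint that interaction logs already contain.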


The Gap

Traditional version control assumes you recognize mistakes and choose to revert. The system trusts your metacognition.

That assumption breaks under emotional stress.

Self-monitoring degrades alongside performance [5]. When working memory is overwhelmed, you lose not just clear thinking, but recognition that you’re not thinking clearly. Confidence judgments are based on ease of mental processes, not accuracy. Under load, you feel confident in incorrect judgments because impaired thinking generates fewer contradictions to notice [7].

This creates recognition lag. Hours, sometimes days, between onset of degraded performance and awareness of it [8]. By then, context is fragmented. Decisions made without foundation. Work products reflect a cognitive state you can no longer evaluate.

Current AI systems make this worse. Even when stable, agents struggle to maintain context parity (tracking context as well as you do). You explicitly tell the AI what to remember. “We’re working on Project A.” “As I mentioned earlier…” “Remember this decision…”

This burden is manageable when cognitively stable. When dysregulated, that capacity disappears. You know the compaction will lose your state. You see it coming. But you can’t identify what needs protecting, execute the preservation, verify it worked. You watch predictable failure happen.

Recent research reveals an additional problem: human oversight of AI under stress introduces new failure modes. Cognitively loaded humans exhibit attentional tunneling and trust calibration errors. These are emergent vulnerabilities that wouldn’t occur with humans or AI operating independently [9].

The system requires you to be most reliable at the moment you’re least capable of it.


The Concept

Git provides version control for code. We need something analogous for human-AI collaboration under stress, but the metaphor only goes so far. Traditional version control assumes you recognize mistakes. Emotional version control must account for recognition lag: you won’t know you need recovery until hours or days later.

What would help isn’t intervention in the moment. When I was spiraling, no AI prompting would have pulled me out. I “knew better than the AI.” Any challenge would have intensified frustration.

What would help: silent preservation with deferred recovery.

Architecture

AI systems detect behavioral stress markers (iteration velocity spikes, tone shifts, context fragmentation) and silently shift into preservation mode. No alerts. No “I notice you’re stressed.” Just comprehensive documentation for later review.

The system captures:

  • Full conversation context
  • Decision points and rationale
  • Outputs generated but not used
  • Behavioral pattern data
  • Session metadata

It doesn’t try to stop you. It doesn’t judge. It preserves everything so you can recover your work once you’ve recovered yourself.
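
To make “silently shift into preservation mode” concrete, here is a minimal sketch under heavy assumptions: the thresholds are placeholders, the Snapshot shape is invented for illustration, and it presumes the agent runtime exposes the conversation, decision log, and discarded outputs at all. The only side effect is a local file.

```python
import json
import time
from dataclasses import dataclass, asdict, field
from pathlib import Path


@dataclass
class Snapshot:
    """What preservation mode captures, mirroring the list above."""
    timestamp: float
    conversation: list[dict]    # full conversation context
    decisions: list[dict]       # decision points and rationale
    unused_outputs: list[str]   # outputs generated but not used
    behavior: dict              # behavioral pattern data
    metadata: dict = field(default_factory=dict)  # session metadata


def maybe_preserve(signals: dict, baseline: dict, session: dict, store: Path) -> bool:
    """Silently write a snapshot when stress markers trip. No alerts, no judgment."""
    stressed = (
        signals["prompts_per_task"] > 2 * baseline["prompts_per_task"]         # iteration velocity spike
        or signals["mean_prompt_chars"] < 0.5 * baseline["mean_prompt_chars"]  # prompts collapsing
        or signals["restarts"] >= 3                                            # context fragmentation
    )
    if not stressed:
        return False

    snap = Snapshot(
        timestamp=time.time(),
        conversation=session["messages"],
        decisions=session["decisions"],
        unused_outputs=session["discarded_outputs"],
        behavior=signals,
        metadata={"project": session.get("project"), "flag": "high_cognitive_load"},
    )
    store.mkdir(parents=True, exist_ok=True)
    out = store / f"preserved-{int(snap.timestamp)}.json"
    out.write_text(json.dumps(asdict(snap), indent=2))
    return True
```

The design choice that matters is the absence of any user-facing signal: a false positive costs nothing visible, and a true positive costs only disk space.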

Recovery

In a new session, the system flags preserved states: “You have captured context from yesterday. Behavioral patterns suggested high cognitive load. Review before continuing?”

Not mandatory. Not judgmental. Just: “This exists if you need it.”

Review options:

  1. Summary: What you worked on, decisions made, where quality shifted
  2. Decision review: Each major choice explained, opportunity to investigate reasoning
  3. Recovery options: Keep as-is, rollback, extract useful parts
  4. Forward path: What needs work, what needs revisiting

This isn’t about preventing degradation. You can’t always prevent external turmoil affecting work. It’s about limiting blast radius. The degraded session happened. It doesn’t have to poison everything after.
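
For the recovery side, a rough sketch of how preserved states might be surfaced at the start of a new session, assuming snapshots like the hypothetical ones above sit in a local directory. The summary logic is deliberately naive; the point is the shape of the interaction (offered, never forced), not the analysis.

```python
import json
from pathlib import Path


def offer_recovery(store: Path) -> None:
    """At the start of a new session, surface preserved states. Optional, not judgmental."""
    snapshots = sorted(store.glob("preserved-*.json"))
    if not snapshots:
        return
    print(f"You have {len(snapshots)} captured context(s) from earlier sessions.")
    print("Behavioral patterns suggested high cognitive load. Review before continuing? [y/N]")
    if input().strip().lower() != "y":
        return  # "this exists if you need it": nothing happens unless asked

    for path in snapshots:
        snap = json.loads(path.read_text())
        decisions = snap.get("decisions", [])
        print(f"\n{path.name}: {len(snap.get('conversation', []))} messages, "
              f"{len(decisions)} recorded decisions")
        for d in decisions:  # decision review: each major choice plus its recorded rationale
            print(f"  - {d.get('summary', '?')} (rationale: {d.get('rationale', 'none recorded')})")
        print("  Options: [k]eep as-is, [r]ollback, [e]xtract useful parts, [s]kip")
        choice = input("  > ").strip().lower()
        if choice == "r":
            path.rename(path.with_suffix(".rolled-back"))  # mark for rollback, never delete
        elif choice == "e":
            print("  (open the snapshot and copy out what still holds up)")
```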

Technical Requirements

Behavioral pattern recognition: HCI research on stress detection covers keystroke dynamics, interaction velocity, language shifts [10]. Not perfect, but sufficient to trigger preservation. “Behavioral proxies” can effectively detect cognitive state [11].
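
One way “sufficient to trigger preservation” could work without fixed thresholds: compare each session against the user’s own rolling baseline, since individual variation is large. A sketch under that assumption; the window size and deviation cutoff are guesses, not validated values.

```python
from collections import deque
from statistics import mean, pstdev


class BaselineDetector:
    """Flags sessions that deviate sharply from the user's own recent norm."""

    def __init__(self, window: int = 20, cutoff: float = 2.0):
        self.history: deque = deque(maxlen=window)  # one value per session, e.g. prompts-per-task
        self.cutoff = cutoff  # how many standard deviations counts as anomalous

    def observe(self, value: float) -> bool:
        """Record one session's measurement; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 5:  # need some history before judging anything
            mu, sigma = mean(self.history), pstdev(self.history)
            if sigma > 0 and abs(value - mu) > self.cutoff * sigma:
                anomalous = True
        self.history.append(value)
        return anomalous
```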

Autonomous context preservation: Can’t rely on you to save context. Must preserve comprehensively by default. Research on high-stress contexts emphasizes cognitive load safeguards that help humans remain effective when overwhelmed [12].

Context parity maintenance: AI must track context better than you do when compromised. Understand what’s significant, maintain coherence across sessions, surface relevant context without being asked.

Recovery interfaces: Not passive logs. Interactive analysis helping you make sense of what happened. Interpretable systems reduce cognitive burden by providing clear guidelines for evaluation [13].

Privacy by design: This data lives on your machine. It’s yours. Work outcomes might be judged by leadership. Emotional states aren’t their business.
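
“Lives on your machine” can be enforced rather than promised. A small sketch, assuming nothing more than a per-user directory with owner-only permissions; the path is illustrative.

```python
import os
import stat
from pathlib import Path


def local_store() -> Path:
    """Preserved states live in a user-owned directory, readable only by the user."""
    store = Path.home() / ".emotional-vc"  # illustrative location, not a real tool's path
    store.mkdir(parents=True, exist_ok=True)
    os.chmod(store, stat.S_IRWXU)          # 0o700: owner-only; no telemetry, no upload hook
    return store
```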

Philosophical Shift

This assumes something uncomfortable: you can’t always be effective, and that’s okay.

Productivity culture says regulate your emotions, maintain quality regardless of circumstances, catch your own mistakes. That’s not how cognition works under stress. Demanding perfect self-regulation is like demanding more working memory. You can’t will it into existence.

Build systems that work with human reality. Systems that don’t require you to be most reliable when least capable. Systems that preserve ability to recover when you can’t prevent degradation.

From “don’t make mistakes” to “mistakes are survivable if we preserve the path back.”

One caveat: AI systems that make work effortless may foster dependency, the “comfort-growth paradox” [14]. User-friendly AI can inadvertently create intellectual stagnation by eliminating cognitive friction necessary for development. Emotional version control must provide support during genuine overload without removing productive struggle that builds capability.

What’s Missing

I’m describing principles, not solutions. When does preservation mode trigger? What algorithms detect stress reliably? How do team contexts work without creating stigma?

I don’t have those answers. I’m processing what happened to me and what might have helped. Maybe you’re processing something similar. Maybe you’re building systems and see where this applies.

This isn’t product specification. It’s invitation to solve a problem most AI systems don’t acknowledge exists.


Why the Humanities Aren’t Optional

Alex Karp, CEO of Palantir, made headlines arguing humanities degrees are worthless in the AI era. Go learn to code. Focus on business, technical skills, what produces value.

He’s completely wrong—and this article is the proof.

None of what I described (degradation cascade, metacognitive failure, recognition lag, preservation systems) makes sense without understanding human psychology, cognitive science, and how stress affects the capacity to think and work.

You can’t code your way around the human condition.

What You Need

  • Psychology: Why prompts degraded. Cognitive load theory isn’t a “soft skill.” It’s infrastructure for understanding working memory limits.
  • Neuroscience: Attribution error. Why I blamed AI instead of recognizing inadequate prompts. How stress narrows attention and impairs metacognition.
  • Cognitive science: Recognition lag. Self-awareness degrades alongside performance. Awareness failure isn’t personal weakness. It’s predictable.
  • Sociology: The external stressor itself. Political turmoil, collective grief: contextual forces affecting how people work.
  • Philosophy: The uncomfortable questions. What counts as authentic vs compromised decision-making? Where’s the boundary between support and control?
  • HCI research: Observable patterns. Behavioral markers when humans are stressed. Applied humanities: knowledge about behavior made operational.

The Contrast

An engineer without humanities knowledge experiences the cascade and:

  • Blames tools (“AI isn’t working today”)
  • Blames themselves generically (“I’m not good at this”)
  • Doesn’t understand the mechanism
  • Designs for stable, rational users
  • Builds tools that fail silently when users need them most

An engineer with humanities knowledge experiences the same cascade and:

  • Recognizes cognitive load theory in action
  • Sees metacognitive breakdown as predictable, not personal
  • Designs systems that work with human reality
  • Builds preservation mechanisms that don’t require metacognition
  • Creates tools that limit blast radius instead of demanding perfection

That’s not a minor difference. That’s the difference between systems that hurt and systems that help.

Operational Refutation

I’m not arguing humanities are valuable in the abstract. I’m demonstrating they’re essential for doing technical work effectively.

You can’t design emotional version control without understanding how emotions affect cognition. You can’t build context preservation without understanding how working memory fails. You can’t navigate privacy questions without philosophical reasoning about autonomy.

Karp was focused on business. But if your business builds AI systems that fail exactly when users need them most, you don’t have a business. You have a liability.


The Invitation

I’m still figuring this out. This article is part of my recovery process: processing patterns I couldn’t see while they were happening. I’m not prescribing solutions. I’m describing a problem that needs better solutions.

Open Questions

Stress detection without being patronizing? Nobody wants their computer telling them they’re emotional. The line between support and surveillance is blurry and probably personal.

Validated markers predicting degraded collaboration? Individual variation is huge. Iteration velocity might spike from stress or productive flow. How do you distinguish signal from noise?

What should the system do when it detects dysregulation? I proposed silent preservation. Maybe that’s wrong. Maybe some want real-time nudges. Maybe team contexts need different approaches.

Rollback without losing valuable work? Not everything under stress is bad. Sometimes pressure forces creative solutions. Fine-grained “keep this, discard that” analysis requires metacognitive capacity you don’t have when compromised.

How much should systems protect users from themselves? You can’t reliably self-regulate under high load. But systems that override agency create different problems.

Team contexts? If three people collaborate and one is dysregulated, does the system preserve for everyone? Do teammates know? Does that create stigma?

Who owns the emotional state data? I said privacy by design. In practice, corporations want observability. How do you build systems that resist institutional capture of intimate data?

These aren’t rhetorical. They’re genuine gaps between “this should exist” and “here’s how to build it.”

The Foundation

This article focused on what happens when turmoil hits. But there’s a deeper problem: even when stable, AI agents struggle to maintain context parity. They lose track of what you’re working on, forget decisions, require explicit reminders.

If AI can’t maintain context when everything’s fine, it can’t compensate when you’re compromised. Context parity is the foundation. Emotional version control builds on it.

I’m working on that piece next.

What I’m Asking

If this resonated, I’d like to hear why. What patterns matched your experience? What did I miss?

If you’re building AI systems, what would change if you designed for degraded states instead of assuming stable users?

If you’re a researcher, what does the literature say? What am I reinventing? What’s unexplored?

If you’ve experienced this cascade (degraded output, recognition lag, recovery challenges) you’re not alone. External turmoil isn’t going away. Political stress, economic precarity, health crises, climate anxiety: contextual realities more people face more often.

We can keep building systems that demand perfect self-regulation, or we can build systems that work with human reality. Systems that preserve the path back when you can’t prevent the fall forward.

I don’t have all the answers. But the question is worth asking.


References


  1. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.

  2. Tarantino et al. (2021). Impact of Perceived Stress and Immune Status on Decision-Making Abilities during COVID-19 Pandemic Lockdown. Behavioral Sciences, 11(12), 167; Zivi et al. (2023). Decision-Making and Risk-Propensity Changes during and after the COVID-19 Pandemic Lockdown. Brain Sciences, 13(5), 793.

  3. Sapolsky, R. M. (2004). Why Zebras Don’t Get Ulcers (3rd ed.). Holt Paperbacks.

  4. Arnsten, A. F. T., & Shanafelt, T. (2021). Physician Distress and Burnout: The Neurobiological Perspective. Mayo Clinic Proceedings, 96(3), 763-769.

  5. Koriat, A. (2012). The self-consistency model of subjective confidence. Psychological Review, 119(1), 80-113.

  6. Chen et al. (2025). Can theory-driven learning analytics dashboard enhance human-AI collaboration in writing learning? arXiv:2506.19364.

  7. Cau, F. M., & Spano, L. D. (2025). Exploring the Impact of Explainable AI and Cognitive Capabilities on Users’ Decisions. arXiv:2505.01192.

  8. Fiorenzato, E., & Cona, G. (2022). One-year into COVID-19 pandemic: Decision-making and mental-health outcomes. Journal of Affective Disorders, 309, 418-427.

  9. Tang et al. (2025). Dark Patterns Meet GUI Agents: LLM Agent Susceptibility to Manipulative Interfaces and the Role of Human Oversight. arXiv:2509.10723.

  10. Hernandez et al. (2014). Under pressure: sensing stress of computer users. SIGCHI Conference on Human Factors in Computing Systems, 51-60.

  11. Scibelli et al. (2024). Designing The Internet of Agents: A Framework for Trustworthy, Transparent, and Collaborative Human-Agent Interaction. arXiv:2512.11979.

  12. Pak, H., & Mostafavi, A. (2025). Situational Awareness as the Imperative Capability for Disaster Resilience. arXiv:2508.16669.

  13. Sung et al. (2025). VeriLA: A Human-Centered Evaluation Framework for Interpretable Verification of LLM Agent Failures. arXiv:2503.12651.

  14. Riva, G. (2025). The Architecture of Cognitive Amplification: Enhanced Cognitive Scaffolding as a Resolution to the Comfort-Growth Paradox. arXiv:2507.19483.