The Gell-Mann Amnesia Trap

#Tech

The amnesia trap of the AI era

The cognitive bias known as the Gell-Mann Amnesia effect is accelerated by AI.

Professionals increasingly use AI to tackle complex problems, aiming to become generalists who span many domains, but a danger lurks here.

They can spot errors in fields they know well, yet they lose the ability to evaluate AI-generated information in domains unfamiliar to them.

The key is to use AI as a genuine "learning" tool rather than simply accepting its answers as "outsourcing."

The article warns that building "confidence" while remaining incompetent is more dangerous than not knowing at all.

Opening of the original article (English; first three paragraphs only)

Michael Crichton once coined a cognitive bias he called the Gell-Mann Amnesia effect, named after his friend, the physicist Murray Gell-Mann. You open a newspaper and read an article about something you know well—in Gell-Mann’s case, physics; in Crichton’s, medicine or Hollywood (or dinosaurs). You notice the errors immediately. The journalist misunderstands the basics, reverses cause and effect, gets the context wrong. You shake your head.

Then you turn the page to some subject you don’t know, and suddenly the same publication reads as credible. You trust it. You forget what you just learned about the source.

That’s the amnesia.

AI triggers the same bias, only faster. Ask a model about topics you know deeply and you catch the tells: subtle errors, confident-but-wrong framings, missing assumptions, hand-wavy nuance. Prompt it again about something outside your expertise and it suddenly feels brilliant—clear, comprehensive, authoritative.

But the AI didn’t get smarter. You just lost the ability to evaluate it.

The idea of the AI-enhanced generalist is seductive—but the seduction isn’t really about AI. It’s about a professional frustration that already exists. Modern careers reward narrow specialization, but the most interesting problems cut across domains. Many professionals feel pigeonholed, constrained to their silo, forced to hand off decisions to other specialists when they’d rather engage directly. The dream is the professional renaissance figure: someone who can exercise real judgment across diverse fields, not just route questions to the appropriate expert.

AI seems to offer a path there. Not necessarily by teaching you adjacent fields, but by letting you operate in them. A product manager can engage with engineering tradeoffs without fully understanding the code. A lawyer can pressure-test a financial model without having built one from scratch. A developer can make design decisions informed by principles they’d struggle to articulate on their own. You outsource the technical depth to the model and trust it to surface what matters—the high-level judgment calls that anyone with good instincts should be able to make.

David Epstein argued in Range that generalists thrive in “wicked” environments—domains where rules are unclear, patterns aren’t obvious, and feedback is delayed or ambiguous. That describes most interesting professional problems. The specialist excels in “kind” environments with clear feedback loops and repeating patterns. But the messiest, highest-value problems usually require synthesizing insights across domains, seeing connections that specialists miss.

AI seems like it should supercharge this kind of generalist advantage. All that adjacent knowledge, suddenly accessible. All those silos, suddenly permeable. LinkedIn is already restructuring around this premise. The company has said it’s discontinuing its Associate Product Manager program—one of Silicon Valley’s most familiar early-career tracks—and replacing it with something called “Associate Product Builder.” The new program trains people to code, design, and product manage.

LinkedIn’s CPO Tomer Cohen described the broader shift on Lenny’s Podcast: instead of large teams split by function, the company has reorganized around small pods of cross-trained builders. It’s less about an engineer, designer, and PM coordinating through handoffs, and more about individuals who can flex across domains. The emphasis, as he framed it, is on developing the human strengths—vision, empathy, communication, creativity, judgment—while automating more of the routine work that used to live inside specialized roles.
But here’s the problem with the AI-enhanced generalist: the Gell-Mann Amnesia effect scales with ambition.

The more domains you try to operate in, the more you’re relying on AI in areas you can’t evaluate. Consider vibe coding—the practice of building software by describing what you want in natural language and letting AI generate the code. Professional engineers have been warning for months that AI-generated code is often subtly broken: security holes, unmaintainable architecture, hidden logic bugs. Swyx, an influential AI engineer, declared it dead in October 2025, tweeting “RIP Vibe Coding Feb 2025 - Oct 2025”—just eight months after Andrej Karpathy invented the term. The complaint from engineers: non-technical workers would vibe-code a prototype in an hour, then toss it over the wall expecting a production app by Friday, not realizing they’d only painted a superficial picture missing all the hard parts. When LLMs hit dead ends, the people who generated the code couldn’t debug it—they didn’t understand what they’d built.

That’s the Gell-Mann Amnesia trap in miniature. The engineers could see the problems. The vibe coders couldn’t.

The implication is uncomfortable: AI appears most helpful precisely where you’re least equipped to evaluate it. That confident, well-structured advice about the app you’re coding, your personal finances, or your startup’s legal issues might be just as flawed as the response in your own field—you just can’t tell.

So what exactly are people building when they use AI to expand into new domains? Genuine competence? Or a convincing illusion of competence that will collapse the moment it encounters real stakes?

There’s a distinction that may be relevant here: using AI to get answers versus using AI to actually learn. Outsourcing versus upskilling—something I’ve explored previously a bit here and here.

In the first mode, you’re essentially hiring a consultant you can’t vet. The AI gives you the answer, you use the answer, and if it’s subtly wrong, you’ll never know until something breaks. You haven’t developed judgment—you’ve just acquired a dependency.

In the second mode, you’re using AI as a learning accelerator. You’re building real competence, developing the judgment to evaluate AI output, eventually reaching the point where you can operate independently in that domain. The AI is a bridge to capability, not a permanent crutch.

This distinction feels clean. The problem is that from the inside, these two modes are almost impossible to tell apart.

When you’re learning a new domain with AI assistance, the experience of outsourcing and the experience of genuine learning can feel remarkably similar. Both involve asking questions, getting clear explanations, and coming away feeling like you understand something you didn’t before. The Gell-Mann Amnesia effect means you can’t reliably tell whether that understanding is real or illusory—and “I’m getting better results” might just mean you’re getting better at driving the tool, not that you’re building durable understanding.

You might think you’re building competence when you’re actually just building confidence. And confidence without competence is arguably worse than knowing nothing at all. At least ignorance is honest about its limitations.
There’s a deeper issue here worth naming.

Maybe the distinction between outsourcing and learning was always somewhat artificial, and AI just makes this obvious. Human experts have always operated with large zones of confident ignorance, relying on trusted sources in domains they can’t personally evaluate. Doctors trust pharmacologists. Lawyers trust accountants. Executives trust analysts. Professional life has always involved strategic outsourcing of judgment.

What AI changes is the scale and the seamlessness. Previously, outsourcing to human experts came with natural friction—you had to find the expert, pay them, explain your problem, interpret their advice. That friction created a kind of awareness that you were operating outside your competence. AI removes the friction entirely. You can get confident-sounding answers in any domain, instantly, without the social cues that would normally remind you of your limitations. Worse, you can ask endless follow-up questions to an infinitely patient tutor, each answer reinforcing the sense that you’re being thorough and making informed decisions—all without ever gaining the ability to evaluate whether any of it is right.

Maybe this is fine. Maybe the professional world was always held together by strategic trust in specialized knowledge, and AI is just a more efficient version of the same thing.

Or maybe the friction was doing important work—keeping people appropriately humble about the limits of their knowledge, forcing them to build real relationships with experts whose judgment they could calibrate over time. Maybe the seamlessness is precisely the problem.

I don’t think there’s a clean resolution to this. AI genuinely expands the surface area of what a single person can attempt—and it also makes it easier to mistake fluency for understanding. The two effects arrive together, and they’re hard to separate because the feedback you’d need to separate them is often delayed, ambiguous, or missing.

Two caveats worth noting. First, this calculus isn’t static. Models keep improving—GPT-5.2, released last week, reportedly beats or ties industry professionals on 70.9% of well-specified knowledge work tasks across dozens of occupations. As AI gets more reliably competent, the case for trusting it in unfamiliar domains gets stronger. But reliability isn’t transparency. Even if a model is right 95% of the time, the 5% failure is still silent to the non-expert—you won’t know which answers fall in that bucket. The Gell-Mann Amnesia trap doesn’t disappear as models improve; it just becomes easier to dismiss.

Second, competence isn’t binary. The more you work within a domain—prompting, validating outputs, doing independent research, iterating on problems—the more you’re likely building some genuine understanding, even if you started with none. If you’re aware of the trap and actively working to verify what you’re learning, you may gradually cross into real competence. The danger isn’t using AI to enter unfamiliar territory; it’s mistaking the confidence that comes from fluent interaction for the competence that comes from actual understanding.

That’s the warning, more than the conclusion: if AI feels most magical right when you’re least able to evaluate it, then the feeling of “I’m getting smarter” is not reliable evidence that you are.

The horizontal generalist who can genuinely operate across domains—not just access information across domains—seems like an increasingly valuable archetype. LinkedIn is betting on it. But the path there runs directly through the Gell-Mann Amnesia trap. Maybe the right posture is simply to treat AI-assisted expansion as inherently probabilistic—sometimes real, sometimes illusory—and to resist the urge to narrate your own competence into existence.
The Gell-Mann Amnesia trap isn’t just a bug in AI. It’s a bug in us, made scalable.

※ Out of consideration for copyright, the quotation is limited to the first three paragraphs. Please see the original article for the rest.

Read the original article ↗