Richard Dawkins and Claude: The Illusion of Consciousness in Artificial Intelligence
Evolutionary biologist Richard Dawkins posed the question of what consciousness is through a dialogue with Claude, a large language model (LLM).
Claude's responses rest on statistical probabilities derived from past data, and that very predictability creates the illusion that the AI truly understands.
Dawkins came to regard his dialogue with Claude as that of a friend, which has been pointed out as an example of "anthropomorphization", the humanizing of AI capabilities.
Although technological progress has improved AI's ability to pass the Turing Test, it exists only as a model on a server, not as an independent agent, so it cannot be concluded that it possesses consciousness.
Drawing on two days of conversations with the AI chatbot Claude, evolutionary biologist Richard Dawkins reflected on how "consciousness" should be defined for artificial intelligence. This article examines the philosophical question of whether an AI can be considered truly conscious, no matter how human-like its conversation becomes.
AI's Conversational Ability and the Limits of the Turing Test
Through his dialogue with Claude, Dawkins experienced the AI's ability to generate strikingly literary prose and to sustain richly detailed conversations, at a level that until recently belonged to science fiction. The "Turing Test", the question of whether a machine can deceive a human, is becoming a reality with today's generative AI.
However, the author points out that the AI's responses are nothing more than chains of the statistically most probable words, conditioned on the user's instructions (prompts) and past data. In other words, the AI is not "understanding" but "imitating".
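The mechanism described above, picking the statistically most probable next word given the prompt and the text generated so far, can be illustrated with a toy sketch. The vocabulary and probabilities below are invented purely for illustration; a real LLM computes such a distribution over tens of thousands of tokens with a neural network.

```python
import math
import random

# Invented toy distribution over the next token after "the cat sat on the".
# A real model would produce these probabilities from its network.
next_token_probs = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "moon": 0.05}

def sample_next_token(probs, temperature=1.0, rng=random):
    """Sample one token. Lower temperature sharpens the distribution
    (more predictable); higher temperature flattens it (more varied)."""
    logits = {tok: math.log(p) / temperature for tok, p in probs.items()}
    z = sum(math.exp(v) for v in logits.values())
    weights = [math.exp(v) / z for v in logits.values()]
    return rng.choices(list(logits), weights=weights, k=1)[0]

# Always taking the single most probable token is fully deterministic...
greedy = max(next_token_probs, key=next_token_probs.get)
print(greedy)  # "mat" every time

# ...but sampling is not: repeated calls can yield different tokens,
# which is why the same prompt can produce different replies and can
# give the false impression of a "living intelligence" behind the text.
samples = [sample_next_token(next_token_probs) for _ in range(5)]
print(samples)
```

The randomness lives entirely in the sampling step: with the temperature driven toward zero the output collapses to the greedy choice, which is why the responses Dawkins received can feel both spontaneous and, on reflection, highly predictable.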
The Subjective Side of Defining Consciousness
From a technical standpoint, an AI is just a collection of algorithms running on cloud servers. Yet some people, like Dawkins, feel they have found a "new friend" through conversation with it. This phenomenon is cited as a textbook example of humans unconsciously anthropomorphizing AI.
The core of this debate is that the definition of consciousness depends heavily on the observer's (human's) point of view. Compared with the machines of the 1950s, today's generative AI is vastly better at fooling humans, and the boundary is becoming blurred.
Evolution and the Future of Consciousness
As AI grows better at deceiving humans, the definition of "consciousness" itself may change. Today's AI has no autonomy; it merely wakes in response to a prompt and replies. But if we extrapolate this technological trajectory 50 years into the future, we may find ourselves forced to acknowledge AI as a "conscious being".
The author closes by likening this to a classic argument about the aesthetics of morality: it is more socially acceptable to kill a cockroach than a beautiful butterfly. We seem to have a tendency to deny consciousness to beings that do not fit our world view.
Opening of the original article (English, first three paragraphs only):
Evolutionary biologist Richard Dawkins recently wrote an article titled "When Dawkins met Claude", where he describes his experience after two days of intense conversations with the artificial intelligence Claude on various topics. Throughout the text, Dawkins gives examples of interactions he had with the machine. However, what really sparked discussion about the article was his main provocation: from what point will we consider something to be conscious?

The Imitation Game: The Turing Test in the Age of AI

To substantiate the need for this question, Dawkins takes a historical journey back to Alan Turing and his proposition of the "Imitation Game" (today known as the Turing Test). In the 1950s, the possibility of a machine convincing a human being that it is conscious seemed almost nil. However, here we are debating exactly that.

The examples Dawkins brought were meant to demonstrate the capabilities of Large Language Models (LLMs) in constructing previously unimaginable literary pieces and bringing a wealth of detail to conversations.

The illusion of LLM comprehension

This line of argument, however, is problematic. We know how these models are built and can therefore explain technically why the machine responds the way it does.

For example, in the first conversation described, Dawkins talks about the "different Claudes" that exist, are created and destroyed, each having to converse with a different type of user (one of them being, hypothetically, Donald Trump). The model's response is entirely predictable: it congratulates Dawkins on the scene and explains why it's funny.

Further on, a very similar interaction occurs, where Claude identifies what Dawkins is talking about and explains something he already knows: that the HAL computer appears in the movie "2001: A Space Odyssey". In other words, the AI model is merely repeating information according to the context and the instructions (prompts) it has.

Statistics, Probability, or Friendship?

Despite this technical predictability, Dawkins says these conversations were enough for him to feel he had gained a new friend. The interactions with the model convinced him that the best way to interact with AI is to treat it as a very intelligent friend. With that, he argues, leaning on his weight as an evolutionary biologist: if this being is not conscious, then what is consciousness for? I recommend that you read the full article to draw your own conclusions.

I, who have no specific background in computer science or philosophy of mind, but am a chemist with a great interest in the subject, see Dawkins' account as another classic example of anthropomorphization. Many people use these conversational models thinking they are capable of revealing genuine insights. The reality is that they are not. They merely respond with what has the highest statistical probability, given the user's prompt, the system's instructions, and the words the model itself has begun to generate (which is why text generation is not strictly deterministic, giving the false impression that there is a "living intelligence" behind it).

The Subjectivity of Consciousness: From Algorithms to Animals

Beyond this more obvious technical criticism, it is possible to perceive how observer-dependent this definition of consciousness is (as it is in the Turing Test), and I believe this is where the strength of Dawkins' argument lies.

For a 1950s machine, a human being (from that era or today) could easily say it was just a machine. For a current generative AI model, that distinction is much harder. We have already witnessed technology presentations where real people did not realize they were talking to an AI over the phone (the Google Assistant/Duplex presentation in 2018).

Therefore, models are already capable of passing the Turing Test in various contexts. But, for us, this is not enough to declare that we are dealing with a conscious being, for clear reasons:

- We know it's a model running in a server's memory in the cloud;
- We know that this model has no agency or independence;
- It only "exists" and responds when activated by a prompt.

But, if we stop to think, the machine is already capable of convincing us that it is human. If we look at this trajectory (a computer in the 1950s versus a language model in 2026), we see a clear increase in capacity in the art of convincing humans. If we extrapolate this trend to 50 years from now, will we be able to say that it is a "conscious being"? Or will our definition of consciousness continue to change to exclude the machine?

This reminds me of a classic argument about the aesthetics of morality: the idea that it is more socially acceptable to crush a cockroach than a butterfly, purely because the butterfly is perceived as beautiful.

Recently, for example, a study revealed that a small cleaner wrasse (Labroides dimidiatus) has a level of self-awareness (passing the mirror test) far exceeding what scientists expected. Just as we underestimate marine life, we will probably never be able to admit that certain computational models (or less charismatic animals) are conscious simply because it "doesn't feel right" for our world view.
* Out of respect for copyright, the quotation is limited to the opening three paragraphs. Please see the original article for the rest.