Dawkins and Claude: The Myth of AI Consciousness
Evolutionary biologist Richard Dawkins recently held a series of conversations with the AI Claude and shared his experience in an article.
The piece explores how “consciousness” is defined and revisits the origins of the Turing Test.
The piece argues that while large language models (LLMs) excel at generating text and holding conversations, they essentially produce the reply with the highest probability given the context and the instructions, rather than genuinely understanding.
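To make that claim concrete, here is a minimal sketch of autoregressive sampling in Python. Everything in it is a toy stand-in rather than how any real model is implemented: the vocabulary, the scores, and the hypothetical toy_logits function are invented for illustration (a real LLM computes scores with a neural network over a vocabulary of tens of thousands of tokens). The sampling loop, however, is the mechanism the piece describes: each step turns scores into probabilities and draws the next token at random, which is why identical prompts can yield different replies.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def toy_logits(context):
    # Hypothetical stand-in for the model. A real LLM conditions these
    # scores on the system prompt, the user prompt, and every token it
    # has generated so far; here the scores are fixed for simplicity.
    vocab_scores = {"friend": 2.0, "model": 1.5, "machine": 1.0, ".": 0.5}
    return list(vocab_scores.keys()), list(vocab_scores.values())

def generate(prompt, max_tokens=5, temperature=0.8):
    context = prompt.split()
    for _ in range(max_tokens):
        vocab, logits = toy_logits(context)
        probs = softmax(logits, temperature)
        # Sampling (rather than always taking the single most likely
        # token) is why generation is not strictly deterministic.
        next_token = random.choices(vocab, weights=probs, k=1)[0]
        context.append(next_token)
        if next_token == ".":
            break
    return " ".join(context)

# Two runs with the same input can diverge: variation without any
# "living intelligence" behind it, just draws from a distribution.
print(generate("Claude is a"))
print(generate("Claude is a"))
```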
Even though Dawkins felt he had gained a kind of “friendship” from the conversations, the piece suggests this is likely just humans anthropomorphizing AI.
The piece closes by asking whether, as AI continues to advance, our definition of “consciousness” will keep shifting so as to exclude machines, a question with ethical and philosophical implications for the future of AI.
Opening of the original article (English excerpt)
Evolutionary biologist Richard Dawkins recently wrote an article titled “When Dawkins met Claude”, in which he describes his experience after two days of intense conversations with the artificial intelligence Claude on various topics.

Throughout the text, Dawkins gives examples of interactions he had with the machine. However, what really sparked discussion about the article was his main provocation: at what point will we consider something to be conscious?

The Imitation Game: The Turing Test in the Age of AI

To substantiate the need for this question, Dawkins takes a historical journey back to Alan Turing and his proposal of the “Imitation Game” (today known as the Turing Test). In the 1950s, the possibility of a machine convincing a human being that it is conscious seemed almost nil. Yet here we are, debating exactly that.

The examples Dawkins brought were meant to demonstrate the capability of Large Language Models (LLMs) to construct previously unimaginable literary pieces and bring a wealth of detail to conversations.

The illusion of LLM comprehension

This line of argument, however, is problematic. We know how these models are built and can therefore explain technically why the machine responds the way it does.

For example, in the first conversation described, Dawkins talks about the “different Claudes” that exist, are created and destroyed, each of which has to converse with a different type of user (one of them being, hypothetically, Donald Trump). The model's response is entirely predictable: it congratulates Dawkins on the scene and explains why it's funny.

Further on, a very similar interaction occurs: Claude identifies what Dawkins is talking about and explains something he already knows, namely that the HAL computer appears in the movie “2001: A Space Odyssey”. In other words, the AI model is merely repeating information according to the context and the instructions (prompts) it has.

Statistics, Probability, or Friendship?

Despite this technical predictability, Dawkins says these conversations were enough for him to feel he had gained a new friend. The interactions with the model convinced him that the best way to interact with AI is to treat it as a very intelligent friend.

He then leans on his authority as an evolutionary biologist: if this being is not conscious, then what is consciousness for?

I recommend that you read the full article to draw your own conclusions.

I have no formal background in computer science or philosophy of mind, but as a chemist with a great interest in the subject, I see Dawkins' account as another classic example of anthropomorphization. Many people use these conversational models believing they are capable of producing genuine insights.

The reality is that they are not. They merely respond with what has the highest statistical probability, given the user's prompt, the system's instructions, and the words the model itself has already begun to generate (which is why text generation is not strictly deterministic, giving the false impression that there is a “living intelligence” behind it).

The Subjectivity of Consciousness: From Algorithms to Animals

Beyond this more obvious technical criticism, it is possible to see how observer-dependent this definition of consciousness is (as it is in the Turing Test), and I believe this is where the strength of Dawkins' argument lies.

For a 1950s machine, a human being (from that era or today) could easily say it was just a machine.
However, for a current generative artificial intelligence model, that distinction is much harder. We have already witnessed technology presentations where real people did not realize they were talking to an AI over the phone (Google's Assistant/Duplex presentation in 2018, eight years ago).

Models are therefore already capable of passing the Turing Test in various contexts. But, for us, this is not enough to declare that we are dealing with a conscious being, for clear reasons:

- We know it's a model running in a server's memory in the cloud;
- We know that this model has no agency or independence;
- It only “exists” and responds when activated by a prompt.

But if we stop to think, the machine is already capable of convincing us that it is human. Looking at this trajectory (a computer in the 1950s versus a language model in 2026), we see a clear increase in capacity in the art of convincing humans. If we extrapolate this trend 50 years into the future, will we be able to say that it is a “conscious being”? Or will our definition of consciousness keep changing so as to exclude the machine?

This reminds me of a classic argument about the aesthetics of morality: the idea that it is more socially acceptable to crush a cockroach than a butterfly, purely because the butterfly is perceived as beautiful.

Recently, for example, a study revealed that a small cleaner wrasse (Labroides dimidiatus) shows a level of self-awareness (passing the mirror test) far exceeding what scientists expected. Just as we underestimate marine life, we will probably never be able to admit that certain computational models (or less charismatic animals) are conscious, simply because it “doesn't feel right” for our worldview.
※ For copyright reasons, only an excerpt is reproduced here. Please read the original article for the full text.