ChatGPT slips up again: the number of "e"s in "strawberry"
Late one night, a user testing ChatGPT found that the model kept answering that "strawberry" contains 3 e's, when the user had assumed they were testing the count of "r"s.
This prompted reflection on training bias in AI models, overcorrection, and how a user's own assumptions can mislead them.
The user later hit the same issue while testing the word "seventeen", and only then realized they had been asking about the number of e's all along.
The incident reignited discussion about ChatGPT's accuracy and how it is trained, and was shared by the user on Hacker News.
Excerpt from the original post (English, first 3 paragraphs only)
I just want to say one word: Wow.
Thoughts
So what happened: I have a minor suspicion that, just as I was typing e instead of r, ChatGPT, which had previously been in hot water over the number of r's in "strawberry", may have been actively trained to answer 3 and didn't expect me to ask about the number of e's in "strawberry". I find it incredibly hilarious that the same word, "strawberry", has tripped up OpenAI twice: once over r and then again over e.
※ For copyright reasons, only the first 3 paragraphs are quoted. Please see the original post for the full text.
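As a quick sanity check (an editorial addition, not part of the quoted post), the actual letter counts are trivial to verify in code, which is exactly why these model answers are amusing:

```python
# Count the letters at the center of both incidents.
word = "strawberry"
print(word.count("r"))  # 3 r's -- the question the model was famously drilled on
print(word.count("e"))  # 1 e  -- yet the model reportedly still answered 3
```

The point of the anecdote is that a language model sees tokens rather than individual characters, so a question a program answers in one line can still trip it up, especially if it was tuned on one specific phrasing of the question.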