Does artificial intelligence have a human mind? Scientists still don't know the answer to this infamous question - Charles Wallace & Dan Kwartler


After waking up alone in a locked room, two documents are slipped under your door: a note in an alien language and a detailed instruction manual in your own language.

The manual explains that for each alien character in the note, you should write the corresponding symbol it indicates.

Following this chart, you write a response that you slip out the door.

And for the next several days, this exchange continues.

Outside the room, alien scientists are thrilled because they believe you're conversing with them — but you still have no idea what these characters mean.

This scenario isn't just a bizarre misunderstanding; it's a valuable thought experiment for understanding artificial intelligence.

Philosopher John Searle developed the original version of this premise in 1980 as a response to some of the AI work being done at the time.

But while modern AI models don't work like those outdated machines or the prisoner in Searle's hypothetical, the question motivating his thought experiment is still relevant.

To quote Searle, he wanted to interrogate whether "an appropriately programmed computer literally has cognitive states."

In other words, if a computer looks like it understands something, does that mean it actually understands the way a human does?
