Thinking about the impressive capabilities of LLMs these days, I thought of some other takes on the [[Turing test]], or the [[Chinese room]] idea.

  • What if I’ve never tasted wine before, but I can fake a tasting note?

  • A blind person describing the color red.

  • A con artist who pretends to love you.

  • If a car can drive as well as a human, does it have awareness of the road the way a human driver does?

I suppose the Chinese room uses knowledge of Chinese as an analogy for consciousness, assuming that faking knowledge of Chinese and faking consciousness are both possible in the first place. That assumption could be totally wrong. Maybe that is the point of the Chinese room thought experiment.

Whereas simpler tests are already known to be too easy to fake.

I guess that is the exact point of the Turing test: if we can’t tell the difference, we should assume it is conscious.

The interesting thing is that a lot of humans fake things.

I keep noticing how much human brains work like LLMs, not the other way around.