· Although the individual in the Chinese room does not understand Chinese, perhaps the person and the room considered together as a system do.
o Someone might in principle memorize the rule book; they would then be able to interact as if they understood Chinese, but would still just be following a set of rules, with no understanding of the significance of the symbols they are manipulating.
· Suppose that, instead of a room, the program were placed into a robot that could wander around and interact with its environment. Surely it would then have to be said to understand what it is doing?
o Suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he was receiving came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean.
· Suppose that the program instantiated in the rule book simulated, in fine detail, the interactions of the neurons in the brain of a Chinese speaker. Then surely the program must be said to understand Chinese?
o Such a simulation would not reproduce the important features of the brain: its causal powers and its intentional states.
· But what if a brain simulation were connected to the world in such a way that it possessed the causal powers of a real brain, perhaps by being linked to a robot of the type described above? Then surely it would be able to think?
o Perhaps so, but only because such a machine would have the same causal powers as a brain; it would then be more than just a computer program.
We see that the objections to the Chinese Room Argument are weak: each rests on functionalism or physicalism about the mind, both of which are false.