The following is a thought-experiment from David Chalmers (check out his webpage; the link is on the blogroll):
We could have been characters in a huge computer simulation. It is a familiar idea that the whole world might be simulated on a computer, and things would seem exactly the same to us (and indeed, who is to say that we are not?).
I imagine, though, a different sort of simulation, of the kind common in the fields of artificial intelligence and artificial life, where we have (1) a simulated environment and (2) simulated beings that are “moving” through this environment, according to a program that models these beings’ thought processes and their decisions. Imagine a very complex project like this (like the vivarium, say), perhaps with genetic algorithms which get more and more complex and sophisticated, until eventually very sophisticated, rational beings evolve.
When they speculate about the world, they will find that the environment possesses certain regularities, and this will lead them to laws of “physics” about their external world. This will lead them to speculate about whether they too, at bottom, are subject to the same laws. This might seem plausible…but of course it will not be the case! Their “mental” life obeys a completely different set of laws, and further these laws are off limits for direct observation. Their mental life takes place not within their world at all, but within a computer in a completely different universe! When it comes to observing the “laws” of their behavior, they will reach some dead end in looking for causal mechanisms. Unlike in our world, such mechanisms are simply not “locally supported” by simple physical laws. I’m not quite sure what would happen next.
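The two-level structure of the thought experiment can be made concrete with a toy sketch (all names here are hypothetical, chosen for illustration). The environment evolves according to simple “physics” rules, while each agent’s decisions are computed by a function that has no representation inside the environment at all: from within the simulated world, that function is causally opaque.

```python
class Environment:
    """A toy world: a 1-D line of cells; its only 'physics' law is that
    positions stay in bounds."""
    def __init__(self, size):
        self.size = size
        self.agent_positions = {}

    def apply_physics(self, agent_id, move):
        # The only regularity observable *inside* the world.
        pos = self.agent_positions.get(agent_id, 0) + move
        self.agent_positions[agent_id] = max(0, min(self.size - 1, pos))


def agent_mind(percept):
    # The agent's "mental life": it exists only here, in the simulating
    # computer, never as an object within the Environment. To beings in
    # the world, this process is the "empty box" of the thought experiment.
    return 1 if percept < 5 else -1


env = Environment(size=10)
env.agent_positions["a"] = 0
for _ in range(20):
    percept = env.agent_positions["a"]   # the agent observes its position
    move = agent_mind(percept)           # the decision is computed "outside"
    env.apply_physics("a", move)         # only the resulting move enters the world
```

Note the design point: `agent_mind` obeys the laws of the simulating computer’s universe, not the `Environment`’s, which is exactly the asymmetry the thought experiment trades on.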
If they tried to “look inside their heads” (assuming they have at least vaguely coherent senses)… they’d just find an empty box. They’d ask, “how can I do all this complex processing?” The answer would have to be: well, I’m just some kind of non-material mind. Of course, there would be a breakdown in the usual kind of physical causation around the “heads” of such a being, unlike in our world.
Moral: Cartesian Dualism isn’t quite so outlandish and conceptually problematic as it is usually supposed to be.
Huenemann: now consider a modification of this scenario. Suppose that, when the virtual beings look inside their heads, they don’t find an empty box. Suppose they find little virtual neurons and chemicals and stuff, like what we find in our heads. And suppose they find that all their behavior actually can be explained by the laws they think apply in their world, and the operation of their neurons. Suppose, in short, they arrive at a materialistic understanding of their own behavior — given the “laws of nature” which seem to apply to their virtual world.
We know that their explanations are wrong and incomplete. They don’t know the truth — which is that they are in a computer-generated world. But what would it be rational for them to believe? It seems they should be materialists, and the dualists among them wouldn’t really have any evidence at all to support their view.
It seems to me that this describes the situation for dualists in our world. Yes, there is a way to make dualism true — but the way is to deny any measurable causal efficacy for consciousness. It is an idle wheel in explanations of behavior. True, the whole material mechanism of the brain might be a device which somehow “mirrors” or “captures only a part of” the whole, dualistic truth (as in the updated scenario), but there isn’t any need for that hypothesis in order to explain what we observe.