A classic argument from the information-theory view of the brain is the silicon chip speculation, the claim that replacing every neuron in the brain with an equivalent silicon chip would not alter consciousness:
“… imagine that one of your neurons is replaced by a silicon chip prosthesis that has the exact same input/output profile as the neuron it replaces. At the core of this thought experiment is the presumption that such a replacement would be unnoticeable to you or to anyone observing your behavior. Presumably, you would continue to experience pain even though the physical realization of those mental events includes a silicon chip where an organic neuron used to be. Now imagine that, one by one, the rest of your neurons are swapped for silicon prostheses. Presumably there would be no change in your mental life even though your brain, which was once made of lipid and protein neurons, is now entirely composed of silicon neuronoids.” (Mandik, 2004).
No evidence supports this speculation beyond the belief that brains are biological computers. Neuroscience does not support equating the brain to a set of wired chips: transistors are insulated from electromagnetic fields, whereas neurons broadcast them, as brainwaves show.
Replacing a neuron with a chip might duplicate its wiring but not its broadcast field, so just as replacing a cellphone antenna with a computer would diminish a network, replacing neurons with chips would reduce the brainwaves that correlate with consciousness. If consciousness derives from nerve entanglements, the end result of the silicon experiment would be a brain with no more consciousness than a computer. The silicon chip speculation is science fiction posing as science fact, like the singularity prediction that computers will soon become conscious (Kurzweil, 1999), as neither has any basis in evidence.
A related claim is that if we could copy matter atom-for-atom, copying the brain would copy its consciousness. If the physical world is all there is, physically identical brains are the same in every respect, but if one “me” tends the garden while another cooks a meal, how can one “I” experience two events? If my copy went to work while “I” lay in the sun, I would have no knowledge of a day at work, so my “copy” didn’t replace me at all. It follows that physically copying a brain creates another self, not a second me. Identical twins are, initially at least, largely identical, but they are different beings with different choices and lives.
Given a physical copy, Chalmers argues that the original is conscious but the copy is a zombie, while for Dennett both are zombies imagining they are conscious. In quantum realism, two identical brains would generate two conscious beings that choose independently. Splitting one brain into two hemispheres gives two “I”s that can come into conflict, so why wouldn’t a perfect copy do the same? This is clearly not beneficial: if I made a perfect copy of myself that was also conscious, who is to say it wouldn’t decide to kill me? The brain evolved to unify consciousness, not to divide it, for a reason.