QR6.2.9 The Binding Problem

Figure 6.33 The idea of an “internal viewer” generates an infinite regress of internal viewers

Different brain areas process sight, sound and smell data that is somehow “bound” into one perception that gives thoughts, feelings and actions. Descartes’ explanation was that all sense data cleared through the pineal gland to present to the conscious mind, like a little man in the brain watching a movie, but that little man would need another little man inside him to see his movie, and so on (Dennett, 1991) (Figure 6.33). A mind viewing a picture doesn’t make sense, but physical realism isn’t much better, as it concludes that each neuron in the brain:

“… doesn’t ‘know’ it is creating you in the process, but there you are, emerging from its frantic activity almost magically.” (Hofstadter & Dennett, 1981, p. 352)

That nerves with no ability to observe just act, and yet “there you are”, makes even less sense than dualism. The mind-body problem of centuries ago remains in neuroscience today as the mystery of how distant brain areas bind their activity to create a single observer/actor:

One of the most famous continuing questions in computational neuroscience is called ‘The Binding Problem’. In its most general form, ‘The Binding Problem’ concerns how items that are encoded by distinct brain circuits can be combined for perception, decision, and action. (Feldman, 2013, p. 1)

The binding problem arises because distinct processing hierarchies can’t just exchange data. They can’t “talk” as global workspace theory says they do (6.1.6), because when a visual cortex nerve fires to register a line, it doesn’t send a message saying “I saw a line” like a little person; it just fires a yes-no response like any other neuron. Binding that response to another feature, like redness, requires higher processing in the same hierarchy. At each step in a processing hierarchy, a nerve can fire to trigger a motor response, but that isn’t an integrated experience because the nerve doesn’t know why it fired. The six-layered visual cortex can process lines, shapes, colors and textures, but the last nerve to fire in a sequence knows no more than the first. Integrating vision and smell would need a still higher area to process both outputs, but brain studies show this doesn’t happen.

Different areas evolved to process sight, smell, sound, thoughts, feelings, touch and memory, but no area evolved to integrate them all. If one had, the brain would be wired like a computer motherboard, with many lines running to a central processor, but it isn’t. The processing of each local brain area is encapsulated, so brain areas can’t exchange any experience they have:

Because of the principle of encapsulation, conscious contents cannot influence each other either at the same time nor across time, which counters the everyday notion that one conscious thought can lead to another conscious thought … content generators cannot communicate the content they generate to another content generator. For example, the generator charged with generating the color orange cannot communicate ‘orange’ to any other content generator because only this generator (a perceptual module) can, in a sense, understand and instantiate ‘orange’. (Morsella et al., 2016, p. 12)

A single processing hierarchy for all brain areas would also take too long. Deep processing, like thought, takes longer, so it is often too slow to be useful. Our brain can integrate perceptions with memory to drive motor acts in less than a second, but a single hierarchy doing all that processing couldn’t act that fast. The binding problem is that we have a brain integration that its wiring doesn’t support. In information terms, the unified experience of senses, feelings, thoughts and actions that we have should be impossible.

For example, the cortical hemispheres can’t swap data, so the nerves linking them don’t let each see the entire visual field. If the hemispheres shared data as conjoined twins share a body, the split-brain left hemisphere should report a loss of data from the right, but it never does:

“… despite the dramatic effects of callosotomy, W.J. and other patients never reported feeling anything less than whole. As Gazzaniga wrote many times: the hemispheres didn’t miss each other.” (Wolman, 2012)

Why doesn’t cutting the corpus callosum give a sense of data loss? If the optic nerve is cut, we know we are blind as no data comes from the eyes. If an injury cuts the spinal cord, we know we are paralyzed as no data comes from the legs. But when the millions of nerves between the hemispheres are cut, both carry on as normal! Why doesn’t the left hemisphere report an absence of data from the right? If it normally sees the entire field thanks to the other hemisphere, it should report being half blind. It follows that it doesn’t report any missing data because there is none.

Encapsulation predicts what the evidence suggests: that each hemisphere only processes its own half of the visual field, even when the corpus callosum is intact. But if the corpus callosum doesn’t transfer data, how does the observer’s experience of a single visual field arise?

The effect of cutting the corpus callosum isn’t data loss but a divided consciousness. One patient couldn’t smoke because when the right hand put a lit cigarette in his mouth, the left hand removed it, and another found her left hand slapping her awake if she overslept (Dimond, 1980, p. 434). Conflicts made simple tasks take longer: one patient found his left hand unbuttoning a shirt as the right tried to button it. Another found that when shopping, one hand put back on the shelf items the other had put in the basket. One patient struggled to walk home because one half of his body tried to visit his ex-wife while the other tried to continue home. These extraordinary but well-documented cases show that cutting the corpus callosum left two hemispheres with different experiences and different opinions of what the body should do.

The next section suggests that the millions of nerve fibers of the corpus callosum don’t swap data but help create one consciousness from two hemispheres. Studies suggest that the links between the hemispheres allow neural synchrony that in turn correlates with consciousness (Pockett, 2017). Consciousness is then the ability to integrate diverse information to yield adaptive action (Morsella, 2005).
