QR2.4.3 The Transfer Problem

When two processors exchange data, they work together to ensure that what is sent is received. If one processor sends two information packages and the receiver can only handle one, the second is lost. In a virtual reality, the “thing” the second package represents then disappears, forever. For a virtual reality like SimCity, this transfer problem might cause a building to suddenly vanish. Imagine if our world did that! Our universe has run for billions of years with no evidence that even a single photon has been lost, so if our universe is a virtual reality, the quantum network must have solved the transfer problem.

Our systems solve the transfer problem using protocols, rules that ensure no information is lost in a transfer; HTTP, for example, is an Internet transfer protocol. There are many such protocols but they mainly involve:

1. Locking. Lock a file for exclusive access before writing to it.

2. Clock rate. Motherboards use a common clock rate.

3. Buffers. Network nodes have memory buffers to store overloads.

Figure 2.13 Deadlock

Locking. My computer stores this chapter as a file on disk. To edit it, I load the file into a word processor, change the words, then save it back to disk. While I am editing, the system “locks” the document for exclusive access, so if I try to open the same file for a second edit, it says it is “in use”. This avoids the case where two people edit the same text and the last one to save overwrites the changes of the first. But locking is inefficient, as every change now needs two steps, and it allows the deadlock of Figure 2.13, where node A waits on a lock from node B, which waits on a lock from node C, which waits on a lock from node A, so all three wait forever. If a quantum network followed this protocol, we would encounter “dead” areas of space, which we do not. Something better is needed.
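The two-node case of this deadlock is easy to reproduce. In the Python sketch below (the worker and lock names are invented for illustration, not taken from the text), each thread holds one lock and waits for the other’s, so neither can ever proceed:

```python
import threading
import time

# Two resources, each guarded by its own lock.
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:                      # holds A...
        time.sleep(0.1)
        with lock_b:                  # ...and waits for B
            print("worker_1 got both locks")

def worker_2():
    with lock_b:                      # holds B...
        time.sleep(0.1)
        with lock_a:                  # ...and waits for A
            print("worker_2 got both locks")

t1 = threading.Thread(target=worker_1, daemon=True)
t2 = threading.Thread(target=worker_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)

if t1.is_alive() and t2.is_alive():
    print("Deadlock: each thread waits forever for the other's lock")
```

With three nodes instead of two, the same circular wait gives exactly the A-to-B-to-C-to-A loop of Figure 2.13.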

Clock rate. Motherboards avoid the two-step exchange of locking by using a common clock rate. When a fast central processing unit (CPU) issues a command to move data into a slow register, it must wait for that to happen. Waiting too long wastes CPU cycles, but trying to use the register too soon gives garbage from the last command. The CPU can’t “look” to see if the data is there, because that is another command that needs another register that would also need checking! Instead, a clock rate defines how many cycles to wait for any task to be done: the CPU gives its command, then waits that many cycles before using the register. The clock rate is usually set at the speed of the slowest component plus some slack, so one can “over-clock” a computer by reducing the wait cycles below the manufacturer default to make it run faster, until at some point this gives fatal errors. This solution requires a central clock, but we know that our universe doesn’t have a common time. A universe that ran to a central clock would cycle at the rate of its slowest node, say a black hole, which would be massively inefficient. Again, something better is needed.
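A toy model makes the trade-off concrete. In the Python sketch below, the cycle counts and names are invented for illustration (real hardware does this in silicon, not software): the CPU never checks a “done” flag, it simply trusts that the fixed wait is long enough for the slowest device.

```python
# Toy model of the fixed-wait protocol (all names and cycle counts are
# illustrative assumptions, not real hardware values).

SLOWEST_DEVICE_CYCLES = 8              # slowest component needs 8 cycles
SLACK = 2                              # manufacturer safety margin
WAIT_CYCLES = SLOWEST_DEVICE_CYCLES + SLACK

def move_to_register(register, value, device_cycles):
    """Issue a move, then wait a fixed number of cycles before reading back."""
    for cycle in range(1, WAIT_CYCLES + 1):
        if cycle == device_cycles:     # the slow register settles on this cycle
            register["data"] = value
    # No status check: the CPU trusts that WAIT_CYCLES was enough.
    return register["data"]

register = {"data": "garbage from the last command"}
print(move_to_register(register, 42, device_cycles=8))   # prints 42
# "Over-clocking": cut WAIT_CYCLES below 8 and the read returns the old garbage.
```

The whole scheme hinges on one shared number of cycles, which is why it only works when every component answers to the same clock.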

Buffers. Early networks solved the transfer problem with centralized protocols like polling, where every event went through a central processor, but this was soon found to be inefficient. Protocols like Ethernet improved efficiency tenfold by distributing control, letting nodes run at their own rate with buffers to handle any excess. If a node is busy when another transmits, the buffer stores the data until the node is free. A buffer also lets a fast device work with a slow one, e.g. if a computer (fast device) sends a document to a printer (slow device), it sends it to a printer buffer that feeds the printer, which lets you carry on using your computer while the document prints. Yet planning is needed, as buffers that are too big waste memory while buffers that are too small overflow. The Internet fits buffer size to load, with big buffers for backbone servers like New York and little buffers for backwaters like New Zealand. Yet this would not work to simulate a universe. Galaxies are the “big cities” of our universe, but where they occur is not predictable, and allocating even small buffers to the vastness of space would waste memory. And since the quantum network by its nature has no static memory, it cannot use buffers.
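The computer-and-printer case can be sketched as a bounded buffer. In the Python toy below, the names, buffer size, and timings are invented for illustration: a fast producer and a slow consumer are decoupled by a small fixed-size queue, which is exactly the memory a quantum network is argued not to have.

```python
import queue
import threading
import time

# Toy producer/consumer buffer: a fast sender and a slow receiver
# decoupled by a small bounded queue (size 4 chosen arbitrarily).
buffer = queue.Queue(maxsize=4)   # too big wastes memory, too small blocks the sender

def fast_computer():
    for page in range(10):
        buffer.put(f"page {page}")     # returns at once while space remains
        time.sleep(0.01)               # the computer produces pages quickly

def slow_printer():
    for _ in range(10):
        page = buffer.get()            # waits if the buffer is empty
        time.sleep(0.05)               # printing is much slower
        print("printed", page)

threading.Thread(target=fast_computer).start()
threading.Thread(target=slow_printer).start()
```

The design choice is the same one the paragraph describes: the buffer’s size must be planned in advance to match the expected load, which is feasible for New York’s traffic but not for an unpredictable universe.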
