QR2.4.3 Losing Transfers

When network nodes transfer data, they must ensure that what is sent is received: if one node sends two data packets and the receiver only gets one, the second is lost forever. A lost transfer like that in SimCity might cause an object in it to suddenly disappear for no reason. If our world did that, we would notice! Our universe has run for billions of years with no evidence that even a single photon has been lost, so if it is a network-based virtual reality, it must have some way to avoid transfer losses.

Our networks avoid transfer losses by transfer protocols, like the transmission control protocol (TCP) that carries the Internet’s hypertext transfer protocol (HTTP). Such rules include:

1. Locking. Lock a file for exclusive access before the transfer.

2. Clock rate. Set a common clock rate for transfers.

3. Buffers. Use memory buffers to store transfer overloads.

Figure 2.13 Transfer Deadlock

Locking. My computer stores this chapter as a file on disk, and if I load it into a word processor to change it, the system “locks” it for exclusive access. If I try to edit the same document a second time, it says it is “in use” and won’t let me. Otherwise, if I edited the same document twice, the last save would overwrite the changes of the first, which would be lost. Yet locking allows the transfer deadlock case in Figure 2.13, where node A waits to confirm a lock from B that is waiting for a lock from C that is waiting for a lock from A, so they all wait forever. If the quantum network used locking, we would encounter “dead” areas of space, which we do not. Another way is needed.
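The circular wait of Figure 2.13 is easy to reproduce in code. The sketch below is a minimal illustration in Python, not a real network protocol: the node names A, B and C come from the figure, while the one-second timeout is added only so the deadlock is reported instead of hanging forever. Each of three threads holds its own lock and then requests its neighbour’s, so every node waits for a lock that will never be released.

    import threading

    # One lock per node: A, B and C each guard their own resource.
    locks = {name: threading.Lock() for name in "ABC"}
    wants = {"A": "B", "B": "C", "C": "A"}    # the circular wait of Figure 2.13
    ready = threading.Barrier(3)              # ensure every node holds its own lock first

    def node(name):
        locks[name].acquire()                 # hold my own lock...
        ready.wait()                          # ...until A, B and C all hold theirs...
        # ...then request my neighbour's lock, which its owner never releases.
        if not locks[wants[name]].acquire(timeout=1):
            print(name, "is deadlocked waiting for", wants[name])
        else:
            locks[wants[name]].release()
        locks[name].release()

    threads = [threading.Thread(target=node, args=(n,)) for n in "ABC"]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

With a real lock protocol there is no timeout, so all three nodes would wait forever, which is why a lock-based quantum network would show “dead” regions of space.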

Clock rate. Motherboards avoid the double-send of locking, a lock request before every data transfer, by using a common clock rate. When a fast central processing unit (CPU) fetches data from a slower device into a register, it must wait for the transfer to happen. Waiting too long wastes CPU cycles, but using the register too soon gives garbage left over from the last event. The CPU can’t “look” to see if the data is there, because that is another command that needs another register that would also need checking! So it uses the clock rate to define how many cycles to wait for any task to complete: it gives its command, then waits that many cycles before using the register. The clock rate is usually set at the speed of the slowest component plus some slack, so one can over-clock a computer by reducing the manufacturer’s default wait cycles to make it run faster, until at some point this gives errors. This requires a system with a central clock, but we know that our universe doesn’t have a common time. A virtual universe that ran to a central clock would cycle at the rate of its slowest node, say a black hole, which would be massively inefficient. Again, another way is needed.
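A toy simulation can make the trade-off concrete. The Python sketch below is an assumed model, not real hardware: the device is given a made-up latency of five cycles, and the CPU simply reads the register after a fixed number of wait cycles. Waiting at least as long as the slowest component returns fresh data, while an “over-clocked” wait returns whatever the last event left behind.

    # Toy model of fixed wait cycles; the cycle counts are illustrative only.
    DEVICE_CYCLES = 5                # the slow device needs 5 cycles to fill the register

    def fetch(wait_cycles, new_value, register="garbage from the last event"):
        """The CPU issues a fetch, waits a fixed number of cycles, then reads the register."""
        for cycle in range(1, wait_cycles + 1):
            if cycle >= DEVICE_CYCLES:        # the device finishes on its own schedule
                register = new_value
        return register

    print(fetch(wait_cycles=6, new_value=42))   # slowest component plus slack -> 42
    print(fetch(wait_cycles=3, new_value=42))   # "over-clocked" wait -> stale garbage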

Buffers. Early networks avoided transfer losses by protocols like polling that route every event through a central node, but centralization was soon found to be inefficient. Protocols like Ethernet improved efficiency tenfold by distributing control, letting nodes run at their own rate and using buffers to handle overloads: if a node is busy when another transmits, the data is stored in a buffer until it is free. Buffers let fast devices work with slow ones, so if a computer (fast device) sends a document to a printer (slow device), it goes to a printer buffer that feeds the printer in slow time. This lets you carry on using your computer while the document prints. Yet planning is needed, as big buffers can waste memory while small buffers can overload. The Internet fits buffer size to load, with big buffers for backbone hubs like New York and little buffers for backwaters like New Zealand. If our universe is virtual, stars are like “big cities” while empty space is like a backwater where not much happens. To use buffers, the network would have to know in advance where stars occur, which is unlikely, and allocating even small buffers to the vastness of space would waste memory. Again, another way is needed.
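The printer example is the classic bounded-buffer pattern, sketched below in Python with an arbitrary buffer size and timings: the fast “computer” thread drops pages into a buffer and returns almost at once, while the slow “printer” thread drains the buffer at its own pace. Making the buffer too small for the load is exactly the overload case noted above.

    import queue
    import threading
    import time

    buffer = queue.Queue(maxsize=8)        # the printer buffer; the size is arbitrary here

    def printer():
        """Slow device: drains the buffer at its own pace."""
        while (page := buffer.get()) is not None:
            time.sleep(0.01)               # printing is slow compared with sending
            print("printed", page)

    def computer(pages):
        """Fast device: hands pages to the buffer and carries on at once."""
        for page in range(pages):
            buffer.put(f"page {page}")     # blocks only if the buffer is already full
        buffer.put(None)                   # signal the end of the document

    threading.Thread(target=printer).start()
    computer(pages=5)                      # returns immediately; printing continues behind it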
