If light spreads as quantum ripples passed on by a network, that network must not lose transfers. The transfer problem is that data sent but not received is lost. For example, if two network transfers arrive at the same point at the same time, one is lost, just as if two baseball pitchers throw two balls at one batter, one will be missed. If two people call you at once, it may not matter that one gets a busy signal, but on a network the transfer is simply gone. Losing a transfer loses whatever it represents, so in a virtual world like Sim City an object might disappear, and in our world a failed bank transfer could lose all your money. Networks can’t afford to lose transfers!
If our universe is virtual, it has run for billions of years without losing photons, as energy is conserved, so it must avoid transfer losses in some way. Our networks avoid transfer losses by protocols like:
1. Locking. Requires a receiver to be exclusively available before sending the transfer.
2. Synchrony. Synchronizes all transfers to a common clock.
3. Buffers. Store transfer overloads in a buffer memory.
Could a quantum network then use any of these methods?
Locking. Locking requires a receiver to dedicate itself to a sender before the transfer is sent. For example, if I open this document to edit it, my computer locks it exclusively, so any other attempt to edit it fails with a message that it is in use. Otherwise, if the same document were edited twice at once, the last save would overwrite the changes of the first, which would be lost. Locking avoids this by making every transfer two steps, not one, but it also allows transfer deadlock (Figure 2.13), where point A waits to confirm a lock from B, which waits for a lock from C, which waits for a lock from A, so they all wait forever. If the quantum network used locking, we would occasionally find dead areas of space, unavailable for use, but we never have, so another way is needed.
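To make the circular wait concrete, here is a minimal Python sketch, with three made-up nodes A, B, and C that each lock their own receiver and then request a neighbour’s lock in a ring. The node names, timings, and timeout are purely illustrative (the timeout only exists so the demo terminates instead of hanging); nothing here models a real network stack.

```python
import threading
import time

# Each node has one lock (its "receiver"), and each wants its ring
# neighbour's lock before it will send a transfer to that neighbour.
locks = {name: threading.Lock() for name in "ABC"}
ring = {"A": "B", "B": "C", "C": "A"}

def node(name: str) -> None:
    with locks[name]:                      # step 1: lock my own receiver
        time.sleep(0.1)                    # give the other nodes time to do the same
        neighbour = ring[name]
        # step 2: try to lock the neighbour before sending the transfer
        if locks[neighbour].acquire(timeout=1.0):
            print(f"{name} -> {neighbour}: transfer sent")
            locks[neighbour].release()
        else:
            print(f"{name} -> {neighbour}: still waiting on {neighbour}, deadlock")

threads = [threading.Thread(target=node, args=(n,)) for n in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Run as written, all three requests normally time out together: each node holds its own lock while waiting on a neighbour that is doing the same, which is the deadlock of Figure 2.13 in miniature.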
Synchrony. On a computer motherboard, when a fast central processing unit (CPU) sends data to a slower memory register, it must wait before doing so again. If it sends more data too soon, the transfer fails because the register is still saving the first transfer, but waiting too long wastes valuable CPU cycles. It can’t check whether the memory is free, because that check would itself be another command needing a check! For a PC, the two-step exchange of locking would slow it down, so instead everything is synchronized to a common clock. The CPU sends data to memory when the clock ticks, then sends again when it ticks again. This avoids transfer losses if the clock runs at the speed of the slow memory, plus some slack. Hence, one can over-clock a PC by raising its clock rate, shortening the interval between ticks to make it run faster, until the slack runs out and errors appear. Synchrony requires a common time, but according to relativity, our universe doesn’t have one. If our universe had a central clock, it would have to cycle at the rate of its slowest component, say a black hole, which would be massively inefficient. Again, another way is needed.
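As a toy illustration of why the clock must run at the pace of the slowest part, the sketch below sends one word per tick to a memory that takes a fixed time to store each word. The timings are invented numbers and the model is a caricature of a clocked bus, not a description of a real one.

```python
# One word is sent on every clock tick; the memory needs MEM_WRITE_TIME
# (arbitrary units) to store each word. A tick period longer than the
# write time loses nothing; a shorter period loses words.
MEM_WRITE_TIME = 1.0

def run(clock_period: float, words: int = 5) -> int:
    lost = 0
    busy_until = 0.0                  # when the memory finishes its current write
    for i in range(words):
        tick = i * clock_period       # the CPU sends on every tick
        if tick < busy_until:         # memory still saving the previous word
            lost += 1
        else:
            busy_until = tick + MEM_WRITE_TIME
    return lost

print(run(clock_period=1.2))   # slower than the memory, plus slack: 0 words lost
print(run(clock_period=0.7))   # "over-clocked" past the memory: words lost
```

With a tick longer than the write time nothing is lost; shorten the tick past it, as aggressive over-clocking does, and transfers start to fail.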
Buffers. The Internet handles transfer problems with memory buffers instead of synchrony or locking. Protocols like Ethernet (Note 1) distribute control, letting each point run at its own rate, while buffers handle the overloads. If a point is busy when a transfer arrives, a buffer stores the data until it is free. Buffers let fast devices work with slow ones, so when a computer sends a document to a printer, it goes to a buffer that feeds the printer in slow time. This lets you carry on working while the document prints. However, buffers require planning, as buffers that are too big waste memory while buffers that are too small overflow. Internet buffers are matched to load, so backbone servers in hubs like New York have big buffers, while backwaters like New Zealand have small ones. In our universe, a star is like a big city while empty space is a backwater. If our universe used buffers, where stars occur would have to be predictable, which isn’t so, and giving the vastness of empty space the same buffers as a star would waste memory. Again, another way is needed.
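The printer example can be sketched as a classic bounded buffer, here using Python’s standard queue module. The page count, delays, and buffer size are made up purely to show the fast sender finishing while the slow consumer drains the buffer behind it.

```python
import queue
import threading
import time

# The buffer between the fast sender and the slow printer.
buffer = queue.Queue(maxsize=16)

def printer() -> None:
    while True:
        page = buffer.get()
        if page is None:              # sentinel: nothing more to print
            break
        time.sleep(0.05)              # the printer is slow
        print(f"printed {page}")

worker = threading.Thread(target=printer)
worker.start()

# The sender is fast: it queues ten pages almost instantly and is then
# free to carry on working while the printer catches up.
for i in range(10):
    buffer.put(f"page {i}")
buffer.put(None)                      # tell the printer we are done
print("sender finished, printing carries on in the background")

worker.join()
```

The planning problem shows up in the maxsize choice: too small and a burst overflows (a non-blocking put_nowait would raise queue.Full), too large and memory sits idle.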
If the quantum network can’t use locking, synchrony, or buffers to avoid transfer losses, how else could it do so?
Note 1. Or CSMA/CD – Carrier Sense Multiple Access with Collision Detection. In this decentralized protocol, multiple clients access the network carrier if they sense no activity but withdraw gracefully if they detect a collision.
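The note’s decision loop might be sketched roughly as below. The carrier_idle() and collision_detected() functions are invented stand-ins for real hardware sensing, and the back-off follows the usual truncated binary exponential scheme, so this is an illustration of the idea rather than a faithful Ethernet implementation.

```python
import random
import time

SLOT_TIME = 0.001   # back-off unit, an arbitrary value here

def carrier_idle() -> bool:
    return random.random() < 0.8        # pretend the wire is usually free

def collision_detected() -> bool:
    return random.random() < 0.2        # pretend collisions are occasional

def send_frame(frame: str, max_attempts: int = 16) -> bool:
    for attempt in range(max_attempts):
        while not carrier_idle():       # carrier sense: wait for a quiet wire
            time.sleep(SLOT_TIME)
        # transmit, watching for another sender starting at the same time
        if not collision_detected():
            return True                 # frame got through
        # collision: withdraw gracefully, then wait a random number of slot
        # times, doubling the range each attempt (truncated at 10 doublings)
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(slots * SLOT_TIME)
    return False                        # give up after too many collisions

print("sent" if send_frame("hello") else "dropped")
```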