Do you know what Gordon Moore actually said? In 1965, Gordon Moore observed that if you graphed the increase in transistor count on a planar semiconductor device using semi-log paper, it described a straight line. This observation ultimately became known as Moore’s law. The “l” is lower case in the academic literature because the law is not some grand organizing principle that explains a series of facts; it was simply an observation. Moore adjusted the pronouncement in 1975, setting the doubling period at every two years (Simonite, 2016). This so-called law has been the social imperative that has fueled innovation in the semiconductor manufacturing industry for well over 50 years. But it was a social imperative only (Jordan, 2010). It was clear from the beginning that the physics of the material would eventually get in the way of the imperative.
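Moore’s semi-log observation is easy to check numerically. The sketch below is my own illustration (the starting count of 2,300 transistors is the oft-cited figure for the Intel 4004, used here only for scale, not data from Moore’s paper): a count that doubles every two years climbs by a constant step in log10 per interval, which is exactly a straight line on semi-log axes.

```python
import math

# Hypothetical counts: start at 2,300 transistors and double every
# two years, per Moore's 1975 formulation.
years = list(range(1971, 1991, 2))
counts = [2300 * 2 ** ((y - 1971) / 2) for y in years]

# Semi-log paper log-scales the y-axis, so look at log10(count) vs. year.
logs = [math.log10(c) for c in counts]
steps = [b - a for a, b in zip(logs, logs[1:])]

# A constant step per interval (here log10(2) ~ 0.301) is a straight line.
print(all(abs(s - steps[0]) < 1e-9 for s in steps))  # True
```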
There is a physical limit to how far you can shrink the size of the individual devices made from silicon, the underlying material of nearly all our electronics. That limit appears to be about 10 nanometers (Jordan, 2010; Simonite, 2016). There are also more practical reasons why this limit may be unachievable, such as heat dissipation (Jordan, 2010). That said, with the cell phone industry driving the technology of late, significant strides have been made in reducing the power consumption of these devices. Lower power consumption implies less heat generation. It also seems to imply moving away from a purely von Neumann computational architecture toward a more parallel approach to code execution.
This brings us to the fundamental questions: what technology is next, and when will it emerge into the marketplace? My own research into these questions produced some rather interesting answers. One of the more surprising findings was the consensus about what “emerging into the marketplace” meant: the Delphi panel I used in my research agreed it was the point at which a full-scale prototype is ready for rigorous testing (Jordan, 2010). Equally surprising was the consensus about what technology would replace silicon. My research suggests the replacement technology would be biological in nature (RNA, perhaps?). The research also suggests this new technology would emerge within the coming 30 years (Jordan, 2010). Given that the research was conducted nine years ago, the new technology should be ready for full-scale prototype testing about 20 years from now. I will address why this time frame is significant shortly.
It turns out that this question of using RNA as a computational technology is being actively investigated. It would be difficult to predict how far the technology may mature over the next 20 years, but in its current state of development, computational speed is measured on a scale of minutes (Berube, 2019, March 7). Ignoring the problem of how one might plug a vat of RNA into a typical Standard Integrated Enclosure (SIE) aboard a US submarine, speeds on that scale are not particularly useful.
The Holy Grail of the next generation of these technologies is undoubtedly quantum computing (Dyakonov, 2019). There is a great deal of energy behind developing this new technology: “…laboratories are spending billions of dollars a year developing quantum computers” (Dyakonov, 2019, p. 26). But we are left with the same question of when. Dyakonov divides projections into the optimistic and the “More cautious experts’ prediction” (p. 27). The optimists say between five and 10 years; the so-called more cautious prediction is between 20 and 30 years. That more cautious range fits with my research as well (Jordan, 2010).
The real problem with achieving a working quantum computer is the sheer magnitude of the technical challenges that must be overcome. In a conventional computer, it is the number of states of the underlying transistors that determines the computational ability of the machine: a machine with N transistors has 2^N possible states. In a quantum computer, the basic device is typically an electron, which has a spin of up or down. The probability of a particular electron’s spin being in a particular state varies continuously, with the probability of up and the probability of down summing to 1. The typical term for a quantum device used in this way is the “quantum gate” (Dyakonov, 2019, p. 27), and the units themselves are qubits. How many qubits would it take to make a useful quantum computer? The answer is somewhere between 1,000 and 100,000 (Dyakonov, 2019). This implies that, to make useful computations, a quantum machine would have to keep track of something on the order of 2^1,000, or roughly 10^300, continuously varying parameters describing its state. To illustrate how big a number that is, I quote: “it is much, much greater than the number of sub-atomic particles in the observable universe” (Dyakonov, 2019, p. 27). The problem is one of errors: how would one go about observing 10^300 parameters and correcting for errors? There was an attempt in the very early years of this century to develop a fault-tolerant quantum machine using 50 qubits. That attempt had been unsuccessful as of 2019.
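Dyakonov’s scale argument can be checked with a few lines of arithmetic. This sketch is my own illustration of the numbers (not code from the article): it counts the amplitudes needed to describe a register at the low end of the useful-machine range.

```python
# A register of N qubits is described by 2**N complex amplitudes
# whose squared magnitudes sum to 1.
N = 1000  # low end of the 1,000-100,000 qubit range cited above
num_amplitudes = 2 ** N

# log10(2**1000) is about 301, i.e. on the order of 10**300.
print(len(str(num_amplitudes)) - 1)  # 301

# Far larger than the ~10**80 sub-atomic particles commonly
# estimated for the observable universe.
print(num_amplitudes > 10 ** 80)  # True
```

Note the contrast with the classical case: a conventional machine with N transistors also has 2^N states, but it only ever occupies one of them at a time, whereas the quantum state assigns a continuous amplitude to every one of them at once.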
The basic research being done is of considerable value, and much is being learned. But will we ever see a full-scale prototype ready for rigorous testing? I am beginning to doubt it. I am of the opinion that a usable quantum computer is not unlike controlled fusion: the ultimate solution, but always about 10 years out. So next year, our quantum computer (and controlled fusion, for that matter) will not be nine years out but still another 10.
Dyakonov, M. (2019, March). The case against quantum computing. IEEE Spectrum, pp. 24-29.
Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi method (Doctoral dissertation). University of Phoenix, AZ.
Simonite, T. (2016, May 13). Moore’s law is dead. Now what? MIT Technology Review. Retrieved from https://technologyreview.com