Scenario Plans (& Delphi Research)

I looked into the future, and the time to act is now.

Beyond Moore’s law, Beyond Silicon Chips

Beyond Moore’s law (by Dr. Ed Jordan)

After almost 60 years, Moore’s law, the observation that computing power doubles roughly every year and a half, still holds. At the current exponential pace, however, a brick wall looms: the physical limitations of silicon chips. The most straightforward example of how that limit might impact a company is Intel Corp. But first, more on Moore’s law and the more general idea of learning curves.

At some point, the semiconductor material currently in use will no longer support the improvements necessary to meet the requirements of an expanding Information Technology infrastructure. The growth in chip density that Moore’s law characterizes has been at the heart of the IT revolution we have experienced. Interestingly, Moore himself viewed this so-called law as an observation, not some grand organizing principle that explains a body of observations and has stood the test of time through many replications. We speak of Newton’s Third Law of Motion, for instance, as a Law because of its profound success in explaining what we now call Classical Mechanics (Newton’s laws of motion, 2020). Moore, on the other hand, simply graphed the number of components on computer chips against time on semi-log paper. While the resulting straight line is striking, it does not explain any underlying physical phenomenon (Jordan, 2010). For this reason, the purists among us will always refer to this observation using the lowercase form of law. Below is the original characterization that Gordon Moore presented in 1965 (Rhines, 2019).


Figure 1: Moore’s law, as presented in 1965. From Predicting semiconductor business trends after Moore’s law. Used with permission.
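To see why a straight line on semi-log paper is the signature of exponential growth, consider this minimal sketch in Python. The baseline count and doubling time are illustrative assumptions, not Moore’s actual data:

```python
import math

# Illustrative assumptions only: a 1965 baseline of 64 components,
# doubling every two years (Moore's 1975 restatement).
BASE_YEAR, BASE_COUNT, DOUBLING_YEARS = 1965, 64, 2.0

def components(year):
    """Exponential growth: count = base * 2^((year - base_year) / doubling_time)."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

for year in range(1965, 1976, 2):
    # log2(count) increases by the same amount each step, so the curve
    # plots as a straight line on semi-log paper.
    print(year, round(components(year)), round(math.log2(components(year)), 2))
```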

The impact of Moore’s observation on the semiconductor industry proved to be more that of an organizational imperative than an organizing principle that explains a body of observations. As Rhines (2019) points out, Moore has added several caveats to the original observation over time.


Figure 2: Moore’s caveats vs. time. From Predicting semiconductor business trends after Moore’s law. Used with permission.

Recently, I had the great privilege of attending a webinar hosted by SemiWiki (Rhines, 2020). In this webinar, Walden Rhines presented where he thought the semiconductor industry was going, what could be called the Learning Curve Roadmap to device improvement, and when some new material may be required to replace the current technologies. Dr. Rhines suggested that if one considers the mathematics of the learning curve, plotting revenue per transistor vs. cumulative transistors shipped on a log-log graph, one gets a better picture (Figure 3) of where the industry has gone than the original Moore’s law provides.


Figure 3: Log-log view. From Predicting semiconductor business trends after Moore’s law. Used with permission.

The idea of learning curves takes several forms (Learning curve, 2020). From the shape of the plot presented by Rhines (2019), it appears what was meant was a power-law relationship between learning and experience. With the correct selection of coefficients, one can imagine a curve such as we see in Figure 4 (Learning curve, 2020).


Figure 4: Learning vs. Experience (power law). From https://en.wikipedia.org/wiki/Learning_curve
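In the power-law form (the classic Wright learning curve), unit cost falls by a fixed percentage every time cumulative output doubles. A minimal sketch with illustrative parameters, not values fitted to Rhines’s data:

```python
# Power-law learning curve: cost(n) = cost_1 * n^(-b), where n is
# cumulative units produced. Parameters are illustrative only.
FIRST_UNIT_COST = 100.0
LEARNING_EXPONENT = 0.32  # roughly an 80% curve: each doubling cuts cost ~20%

def unit_cost(cumulative_units):
    return FIRST_UNIT_COST * cumulative_units ** (-LEARNING_EXPONENT)

# log(cost) = log(cost_1) - b * log(n): a straight line on log-log axes,
# which is the shape Rhines's revenue-per-transistor plot shows.
for n in (1, 2, 4, 8, 16):
    print(f"cumulative units: {n:>2}  unit cost: {unit_cost(n):6.2f}")
```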

As part of the discussion during the webinar, Rhines went on to show (Figure 5) that if you plot cost per function vs. time on a log-log scale (Rhines, 2019, p. 20), you see the same kind of straight line, suggestive of a power-law relationship between learning and experience (Learning curve, 2020).


Figure 5: Normalized chip cost. From Predicting semiconductor business trends after Moore’s law. Used with permission.

Rhines further suggests that it is instructive to think of semiconductor trends in terms of the so-called Gompertz curve (Figure 6). This curve has a characteristic “S” shape and is often used to predict saturation, among other things (Rhines, 2019, p. 22).


Figure 6: Gompertz Curve. From Predicting semiconductor business trends after Moore’s law. Used with permission.

The Gompertz curve is a specialized application of the sigmoid function, again from the mathematics of learning (Learning curve, 2020). As described by Rhines, the so-called point of inflection of the Gompertz curve is particularly interesting (Figure 7).


Figure 7: Lifecycle S Curve. From Predicting semiconductor business trends after Moore’s law. Used with permission.

In the context of this discussion, the inflection point marks when growth of the cumulative unit volume of transistors will begin to slow. That point is around 2038. As the title of Figure 8 suggests, that is the point at which a new technology will be necessary (see the sketch after the figure).


Figure 8: Projected Transition Point. From Predicting semiconductor business trends after Moore’s law. Used with permission.
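To make the inflection-point logic concrete, here is a minimal sketch of the Gompertz function with illustrative parameters (not Rhines’s fitted values). Growth is fastest at the inflection point and slows thereafter, which is exactly what the 2038 projection identifies:

```python
import math

# Gompertz curve: f(t) = a * exp(-b * exp(-c * t)).
# a = ceiling, b = displacement, c = growth rate. Illustrative values only.
a, b, c = 1.0, 5.0, 0.08

def gompertz(t):
    return a * math.exp(-b * math.exp(-c * t))

# Setting f''(t) = 0 gives the inflection point t* = ln(b) / c,
# where the curve has reached a / e of its ceiling.
t_star = math.log(b) / c
print(f"inflection at t = {t_star:.1f}, f(t*) = {gompertz(t_star):.3f} (a/e = {a / math.e:.3f})")
```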

In 2010, Jordan found that a new technology should emerge in a timeframe of 10 to 30 years (that is, before 2040). What constituted emergence was also defined in the study: the point when a prototype would be available for full-scale testing (Jordan, 2010). This definition fits comfortably in the construct presented during the webinar (Rhines, 2019). The work by Rhines (2019) and Jordan (2010), separated by almost a decade, suggests approximately the same answer: by about 2040, a new technology should emerge that will ultimately replace the semiconductor materials in use today. What is important is that these two lines of reasoning, using profoundly different approaches and divergent frames of reference, arrive at essentially the same conclusion. This convergence suggests the confirmability of both.

So what does this have to do with scenario planning in general, and what does it mean for chip companies like Intel? And what will the likely disruptive technology be that replaces the silicon chip? Jordan (2010) identified likely disruptive technologies.

We at ScenarioPlans.com previously talked about Intel in these two blog posts: how the future of computing is taking on a life of its own, as well as outa time — the tic toc of Intel and modern computing. It will be interesting to see if Intel remains standing in 2040 or 2050.

References

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi method (3442759) [Doctoral dissertation, University of Phoenix]. UMI.

Learning curve. (2020, April 7). WIKIPEDIA: The Free Encyclopedia. Retrieved April 21, 2020, from https://en.wikipedia.org/wiki/Learning_curve

Newton’s laws of motion. (2020, March 8). WIKIPEDIA: The Free Encyclopedia. Retrieved April 19, 2020, from https://en.wikipedia.org/wiki/Newton’s_laws_of_motion

Rhines, W. (2019). Predicting semiconductor business trends after Moore’s law. A SemiWiki.com Project. https://semiwiki.com/

Rhines, W. (2020, April 1). Predicting semiconductor business trends after Moore’s law [Webinar]. SemiWiki. https://app.gotowebinar.com/unified/index.html#/webinar/4339831074971239694/attend/3541623017104899086

COVID in the US

We talked about how scenario planning could and should have helped us see this pandemic coming, and should have provided early warning signs for continuity plans.

It’s April 20, 2020, and the death rate from coronavirus in the US is about 2,000 people per day. The number of new cases is not increasing as rapidly as before, but it is still rising each day.

The US has only 4.25% of the world’s population, yet we have 32% of the cases and 25% of the world’s deaths. Ouch.

Of the many places you can go to get the stats, Worldometers.info seems pretty good: https://www.worldometers.info/coronavirus/

Here’s the grim body count.

Date: 4/20/2020

        Cases       Deaths    Death rate   % of world cases   % of world deaths
World   2,481,287   170,436   6.9%         100.0%             100.0%
US      792,759     42,514    5.4%         31.9%              24.9%

Based on our population share, we should have only about 100,000 cases, not almost 800,000. The death toll in the US should be more like 7,000.

At least our death rate, at 5.4%, is much lower than the world average of 6.9%.
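A quick back-of-the-envelope check of those claims, using the population share and the counts from the table above:

```python
# If cases and deaths were proportional to population (US = 4.25% of world),
# what would the US numbers look like? Figures from the 4/20/2020 table.
US_POP_SHARE = 0.0425
world_cases, world_deaths = 2_481_287, 170_436
us_cases, us_deaths = 792_759, 42_514

expected_cases = US_POP_SHARE * world_cases    # ~105,000
expected_deaths = US_POP_SHARE * world_deaths  # ~7,200

print(f"expected US cases:  {expected_cases:,.0f}  (actual {us_cases:,})")
print(f"expected US deaths: {expected_deaths:,.0f}  (actual {us_deaths:,})")
print(f"excess factor: {us_cases / expected_cases:.1f}x cases, "
      f"{us_deaths / expected_deaths:.1f}x deaths")
```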

Take a look at these two posts. The first asks, from a scenario planning perspective, why we in the US didn’t have early warning signals. The second identifies how Trump got rid of the teams and scientists who would have really helped minimize the impact of COVID.

It turns out the late notification and slow reaction have been painful: maybe 8 times as many lives lost so far as we should have lost.

PREDICT a Pandemic: First Kill Those Pesky Scientists

Rough Road Ahead

Oh My God!

Trump’s administration completely stopped the PREDICT program, the USAID program that did training and response worldwide for pandemics. Since the bird flu of 2005 (H5N1), US presidents (Bush II and Obama) had moved toward building a program to identify potential pandemics and to help countries (including the USA) deal with such an eventuality. Of course, the PREDICT program got to deal with several pandemic-type events, including SARS, MERS, Ebola, and even (mosquito-borne) Zika. The idea, which apparently worked very well, is to fight a pandemic where it originates, in other countries, so that you don’t have to fight it here in the USA. Of course, the train-the-trainer program would also be developed and applied here in the USA.

Foreign aid through the US Agency for International Development (USAID) was frozen in the first few days of August 2019 (Trump administration freezes foreign aid funds pending review). The $2B to $4B that remained unspent was to be frozen, an obvious indication that the aid programs and the employees associated with them were moving to temporary status. The end of PREDICT was predicted.

Donald McNeil discussed the closure of the PREDICT program on October 25, 2019:

“The program, known as Predict and run by the United States Agency for International Development, was inspired by the 2005 H5N1 bird flu scare. Launched 10 years ago, the project has cost about $207 million.

The initiative has collected over 140,000 biological samples from animals and found over 1,000 new viruses, including a new strain of Ebola. Predict also trained about 5,000 people in 30 African and Asian countries, and has built or strengthened 60 medical research laboratories, mostly in poor countries.”

McNeil, October 25, 2019

A letter to USAID Administrator Mark Andrew Green summarizes, with footnotes, the phantom shutdown of the PREDICT program in 2019, with the promise of “something” to replace it going out for bid. The January 30, 2020 letter by Senators Warren and King documents and articulates the key questions raised by killing off this program for dealing with pandemics. The senators’ full letter can be read HERE. [Yes, I know, they are mouthy Democrats, but check their footnotes. Footnotes in a letter!?]

In a prior ScenarioPlans blog post, the question was posed: How is it possible that we in the USofA did not have early warning signposts about COVID-19 back in November or December of 2019? Pandemic Scenario in Trump Transition Team, Days before Office. A practice drill on a hypothetical pandemic scenario was apparently wasted on the incoming Trump transition team.

The real Donald Trump administration in action has systematically pared back government organizations and killed off the scientists. Here’s what two past scientists, former undersecretaries of research, education and economics, say about Trump’s war on science and scientists. When Trump simply cut the budgets for departments, the budgets were restored by Congress. So, apparently, new approaches were needed. Within the USDA, he relocated DC scientists and researchers to Kansas City with a couple of weeks’ notice. (Read about Mulvaney’s praise of the effective firing of half of the DC staff by USDA head Sonny Perdue.) Now he has apparently shut down an entire global pandemic program, while also closing down the Pandemic Response team in the White House. What could go wrong?

If you were a bet’n man, and we know that Trump-the-Casino-Man is, you would expect to get 2 to 5 years before some pandemic sweeps through the world. Even then, what’s the chance of it being a really, really bad pandemic?

The thing with scenario planning is that you build detection mechanisms (signals and signposts) so you can identify potential changes in the future. Then, as part of the scenario planning process (revisited every couple of years, typically), you develop contingency plans, including modifications to disaster recovery plans (DRPs).

Or, you simply kill off (fire) the voices of science and fact.

Pandemic Scenario in Trump Transition Team, Days before Office

Okay! So the Obama administration left the incoming Trump administration several scenarios to think about. There is no evidence as to what happened to the final report. But apparently it showed everything that we have seen since November in China and in the first outbreaks outside of China: the result was a worldwide pandemic, overwhelming the US with supply shortages and patient overflows. What happened to the final report? The early warning signposts? The disaster (recovery) plans?

We did a blog post about the military planning scenarios that would have treated a pandemic as an act of war… Even if it wasn’t caused by an act of war, the resulting story line would be similar. And, of course, if the pandemic started elsewhere, there would be signals and signposts, in the jargon of Shell scenario planning related to early warning signs.

A major country should be doing military scenario planning for several topics. None is quite as huge and integrated as climate change. Look at this report produced by The National Security, Military, and Intelligence Panel on Climate Change (NSMIP, February 2020), A Security Threat Assessment of Global Climate Change: How Likely Warming Scenarios Indicate a Catastrophic Security Future. It doesn’t seem possible that the US would not have a plan like this for a pandemic. It also doesn’t seem possible that the US would not have multiple plans for economic recession, no matter the cause. Since we missed the boat on the pandemic, hopefully we didn’t miss planning for a recession… However, this recession is like none we have ever had before: kind of like turning the switch off on most of the economy for a while. (How long a “while” lasts is the $4T question.)

Kaiser Health News did a composite of highlights from several articles about the Trump transition team and how badly they apparently failed in the pandemic exercise: How A Crisis Simulation Run Before Trump’s Inauguration By Obama’s Team Eerily Mirrors Current Outbreak!

The outbreak surfaced in November in China. By December, the military and health officials had to know that expansion into a global pandemic was not only possible, but likely. In Florida, when we see a hurricane coming, we dust off all of our contingency plans for the businesses and start buying toilet paper and canned goods. We make sure that we have lots of jugs to fill with water if needed (no need to buy water, by the way).

Did the military and health officials simply forget to mention these things up the chain of command, or — apparently more likely — did the higher-up-the-chain-of-command not listen? What the hell happened to the results of the simulation? Did it take the path of anything and everything that Obama did/mentioned/signed/said, flushed down the toilet?

Military Scenario Planning

It is hard to imagine a world where the (US) military did not make plans for a pandemic like the coronavirus (COVID-19). Each and every military must have a plan for weaponized bio-warfare. In fact, every military will have its own plans for ways it could weaponize biowarfare. And think about the types of bioweapons that terrorist groups might want to employ.

The military has been planning for the big issues of global warming and has been shouting out that climate change is one of the biggest risks to the world in the future. Droughts and rising sea levels will produce mass instability in regions, much along the lines of the human tragedies in Chad, Sudan, Syria, etc. For decades now, the military has warned of the risks of climate change to US national security. See the Pentagon, for example, here.

With the pandemics that have passed through (Ebola, SARS, mosquito-borne) over the last 10-20 years, this too is a national security threat.

In the spirit of scenario planning, setting up signposts and early warning signs, you have to wonder when the military started to escalate the coronavirus outbreak in China to the highest risk levels of a world, and therefore US, pandemic. November? December? The military would already have contingency plans to help other countries. By early December 2019, the signposts were visible for a spread from China to the rest of the world. By mid-December, the US mainland would have been clearly at risk.

The power of having scenario plans, early warning signs, and contingency plans can break down anywhere along the line. All of the planning in the world is useless if you don’t react and implement.

Y2K Scenarios

Scenario Planning when the Official View of the Future is Uncertain

Scenario planning should be back in focus. We go a few years – 10 years now since the Great Recession – and we think that the current trajectory, the “Official View,” should hold this time. But the coronavirus brings that all back into focus, even though people are probably not taking it as seriously as they should. You have to look at the entire supply chain, forward and backward. China plays a major role in many of the world’s supply chains: end consumers on the one hand, the production supply chain on the sourcing side. If factories are closed, if people can’t go to work, if people don’t go out and buy, then consumption and the supply chain get continually interrupted. China is initiating all kinds of stimulus. Telling banks to be forgiving on impacted factories seems like a good idea; no one wants the factories to go out of business because of an exogenous event such as the virus. But other stimulus will be rather useless.
Probably no one knows, yet, how this epidemic will play out. There’s no reason to believe that this won’t be rather long and protracted for China, and the consequences for China will ripple throughout the world. With a world that is densely (over)populated, there is no reason to believe that such outbreaks will not happen in other places, and more frequently.
So, this brings us back to scenario planning. The advantage of scenario planning is that you can build Contingency or Disaster Recovery Plans based on various scenarios. Serious and protracted supply chain disruptions, no matter the cause, seem like logical scenarios.

Right now might be a good time to dust off the Contingency Plans and see if anything needs to be updated, or executed, because of the recent events.
In the 2018 guide by Hall and Hinkelman, the scenario chapter discusses Y2K as the greatest scenario planning exercise in history. Read about the Y2K scenario from that chapter (pp. 161-163). Remember that right now many companies are executing their contingency plans related to current events; many others are trying to develop them on the fly – kind of a fly-by-night approach to scenario planning.

<*This section below is reproduced here with permission of the authors.*>
The Great Scenario Planning Exercise, Y2K!
There were several major advantages to corporations’ planning – scenario planning, really – that came out of the Year 2000 (Y2K) preparation process. Planners were forced to consider at least two views of the future: the official view, where Y2K caused no interruptions, and the view of chaos, where it caused massive interruptions (mainly because of sustained interruptions to the power grid). One of the interesting parts of this process is the spillover implication – legally, morally and brand-image-wise – of doing nothing in preparation and being wrong. The scenario planning processes associated with Y2K resulted in stronger business planning and improved disaster recovery plans (DRPs). It also helped with business continuity plans by building stronger relationships with critical business partners.
Many people would say that this is a bad example because Y2K was a bust. Actually, the major push to organize IT and transition from legacy systems contributed substantially to increased productivity for several years after the turn of the century. (Business productivity has been surprisingly low since about 2005.) Two examples where the Y2K efforts proved to be well justified are Burger King and FPL.
Burger King Corporation, then a division of DIAGEO, worked very closely with franchisees and its most critical suppliers (beef, buns, fries and Coke) to make sure that there would be no interruption and that contingency plans would be in place for likely situations related to Y2K. By far the biggest risk, and the most attention to contingency planning, went to AmeriServe. AmeriServe was the number one supplier to the Burger King system; it had bought out the number two supplier and now represented three-fourths of the global supply chain. Three weeks into the new millennium, AmeriServe declared bankruptcy! The contingency plans related to distribution had fortunately been dramatically improved during 1999, and continuity actions were immediately executed. Although it had nothing to do with Y2K, per se, much if not all of the contingency plan could be used for any distributor outage.
An adjunct to the Y2K story relates to power. Once organizations got past addressing their critical IT systems, the biggest wild card was power outages. No assurances came from the power companies until just months before the turn of the millennium, and even then, not much was given in the way of formal assurances. Of course, that was too late for a big organization with brand and food safety issues to have avoided the major contingency planning efforts.
Most people did not realize how fragile and antiquated the entire power grid was until the huge Ohio, New England and Canadian blackout of August 14, 2003 (CNN). A cascading blackout disabled the Niagara-Mohawk power grid, leaving the Ottawa, Cleveland, Detroit and New York City regions without power. Twenty-one power plants shut down within a three-minute period because, with the grid down, there was no place to send the power. Because of a lack of adequate time-stamp information, for several days Canada was believed to be the initiator of the outage, not Ohio.
There have been similar blackouts in Europe. That Y2K could have resulted in massive outages may not have been so far-fetched after all. Ask someone who was stuck in an elevator for eight hours if the preparations for long-term power outages could have been better.
Hall (2009) developed a survival planning approach that would help an organization survive during times of extreme uncertainty, like the Great Recession. Of course, the process is far ahead if the organization already has a good strategic plan (StratPlan) that includes contingency and scenario planning.

References

Hall, E. (2009). Strategic planning in times of extreme uncertainty. In C. A. Lentz (Ed.), The refractive thinker: Vol. 1. An anthology of higher learning (1st ed., pp. 41-58). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)
Hall, E. B. & Hinkelman, R. M. (2018). Perpetual Innovation™: A guide to strategic planning, patent commercialization and enduring competitive advantage, Version 4.0. Morrisville, NC: LuLu Press. ISBN: 978-1-387-31010-4 Retrieved from: http://www.lulu.com/spotlight/SBPlan

Out-of-Control Healthcare Costs: Delinkage May Help?

We have a new blog post in IPZine about trying to control healthcare costs by taking a new twist on the linkage of Big Pharma profits to patent protection. Check out this article on delinkage of intellectual property protection.

In 2017 we talked about scenarios that jump out at you.

Scenarios that really stand out, including compounding effects.

One that is always front-and-center is the out-of-control escalation of healthcare costs in the US, now up to 18% of GDP. In a Nov 20, 2019 blog post over at IPZine there’s discussion of “delinkage” related to pharma patents that has some potential for taming the out-of-control healthcare costs. Included in that blog post is a discussion of how long it will take before healthcare costs escalate from 18% of GDP (approx. $3.6T of the $20T GDP) to 50%, and even 100%, of GDP.

Here is some of the math; you can do your own figures. Assume that healthcare costs increase by 10% per year, as they have for decades (even though the rate is lower currently). Say that nominal GDP growth is 2.5% and inflation is 2% (real GDP growth is +0.5%). How many years before healthcare costs in the US reach 25%, 50%, 75%, and even 100% of US GDP?

At 10% healthcare inflation:

Year   Target % of GDP   # of years
2025   25%               4.5
2034   50%               14.1
2040   75%               19.7
2044   100%              23.7

That’s right: within 4 or 5 years, the total healthcare costs of the US could be 25% of GDP. In 14 years it could be 50%, and in 20 years it could represent 75% of GDP. If this doesn’t scare you into taking some action, then you obviously don’t understand the magnitude of the problem! This was the problem we faced for decades when healthcare costs were increasing at 10% or more each year.

Okay, so healthcare cost inflation has been lower since the Great Recession; let’s say it may have dropped to a 5% to 7.5% increase per year (2 to 3 times CPI inflation).

At 5% healthcare inflation:

Year   Target % of GDP   # of years
2033   25%               13.3
2061   50%               41.4
2078   75%               57.8
2089   100%              69.4

Note that it is no longer 4 or 5 years to reach 25% of US GDP; it takes more like 13 years. It takes about 40 years to reach 50% of GDP.
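A minimal sketch of the arithmetic behind both tables, assuming the healthcare share starts at 18% of GDP in 2020 and nominal GDP grows 2.5% per year; small differences from the table values come from rounding and start-year assumptions:

```python
import math

def years_to_share(target, hc_growth, start=0.18, gdp_growth=0.025):
    """Years until the healthcare/GDP ratio reaches `target`.

    The ratio compounds at r = (1 + hc_growth) / (1 + gdp_growth) per year,
    so solve start * r^n = target for n.
    """
    r = (1 + hc_growth) / (1 + gdp_growth)
    return math.log(target / start) / math.log(r)

for hc in (0.10, 0.05):
    for target in (0.25, 0.50, 0.75, 1.00):
        print(f"healthcare +{hc:.0%}/yr -> {target:.0%} of GDP "
              f"in {years_to_share(target, hc):5.1f} years")
```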

When you consider that the US spends 4 times what the rest of the world spends on healthcare per person (about $10k), and more than twice what the typical developed country spends… for outcomes that are no better… someplace in here we need to rethink.

Hall and Knab (2012) outlined 10 other items besides healthcare costs that were non-sustainable trends/practices with compounding and accelerating forces at play. The (US) Federal deficit is one. Each of those scenarios looms as large or larger today than back in 2012.

#scenario #healthcare #gdp #compounding #ipzine #patents #intellectualproperty

References

Hall, E., & Knab, E.F. (2012, July). Social irresponsibility provides opportunity for the win-win-win of Sustainable Leadership. In C. A. Lentz (Ed.), The Refractive Thinker: Vol. 7. Social responsibility (pp. 197-220). Las Vegas, NV: The Lentz Leadership Institute. (Available from www.RefractiveThinker.com, ISBN: 978-0-9840054-2-0)

Democratization of Power

SustainZine (SustainZine.com) blogged about a rather cool idea on the decentralization of power (here). The idea, from Nature Communications, is to have buildings everywhere use their renewable power sources to generate a biofuel of some type. The authors had the Heating, Ventilation and Air Conditioning (HVAC) unit extract CO2 from the atmosphere to generate the fuel. Some of the technologies they pointed to are newer technologies that are now (hopefully) making their way into the mainstream. (Read the nice summary article in Scientific American by Richard Conniff.)

Basically, everyone everywhere can now produce their own power at rates that are a fraction of lifelong utility power. Storage is now the big bottleneck to completely avoiding the grid. Distributed power should be a big plus to the overall power grid; however, the existing power monopolies are still resisting and blocking. So complete self-containment is not only a necessity for remote (isolated) power needs, but also a requirement in order to break away from the power monopolies.

In the US, there is the 30% Renewable Investment Tax Credit, which makes an already good investment even better for homeowners and businesses. Plus, businesses can get accelerated depreciation, making the investment crazy profitable after accounting for the tax shield (tax rate times the basis of the investment). Many of the states sweeten the deal even more. But the 30% tax credit starts to step down after 2019, so the move to renewables may drop off precipitously at the end of 2019.
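As a rough illustration of that math, here is a sketch with hypothetical numbers; real ITC and depreciation rules (for example, basis reductions and multi-year MACRS schedules) vary, so treat this as directional only:

```python
# Hypothetical commercial solar install: 30% investment tax credit (ITC)
# plus the depreciation tax shield described above (tax rate x basis).
system_cost = 100_000.0
itc_rate = 0.30
corporate_tax_rate = 0.21

itc = system_cost * itc_rate                          # $30,000 credit
depreciable_basis = system_cost                       # simplification; real rules may reduce basis
tax_shield = corporate_tax_rate * depreciable_basis   # $21,000 over the depreciation schedule

net_cost = system_cost - itc - tax_shield
print(f"ITC: ${itc:,.0f}, depreciation tax shield: ${tax_shield:,.0f}")
print(f"net cost after incentives: ${net_cost:,.0f} ({net_cost / system_cost:.0%} of sticker)")
```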

You would think that the power companies would join in the solutions, and not spend so much time (and massive amounts of money) on obstructing progress. All those tall buildings that are prime candidates for wind. Think of all the rooftops, roads and parking lots worldwide that are prime candidates for solar. Distributed power. As needed, where needed. No need for new nuclear, coal or nat-gas power plants. Little need for taking up green fields with solar farms.

Of course, the oil, coal and gas companies need the perpetual dependence on the existing infrastructure. When we all stop the traditional fossil fuel train — and all indications from the IPCC show that we must stop that train sooner, not later — then all the oil and gas in the world will need to stay in the ground. Call me an optimist, or a pessimist, but I would not buy oil or gas for almost any price. I definitely wouldn’t buy into the Saudi-owned oil company spinoff.

It is probably a mistake to think that technology to take CO2 out of the atmosphere after the fact can repair past sins. Avoiding putting pollution into the air, water and land — the negawatt and the negagallon, in this case — is by far the best approach.

In SustainZine, BizMan concluded with this thought about a scenario that is here and now, not in the future at all:

“Hidden in this whole discussion is that scenario that is here and now, not futuristic. Renewable energy is cheaper and massively cleaner than conventional energy, and it can be located anywhere. Storage, in some form, is really the bottleneck; and storage in the form of synthetic fuels is a really, really cool (partial) solution.”

References

Dittmeyer, R., Klumpp, M., Kant, P., & Ozin, G. (2019, April 30). Crowd oil not crude oil. Nature Communications. DOI: 10.1038/s41467-019-09685-x

The Future of Computers and Quantum Computing, Part Deux

On April 4, 2019, the DC chapter of the IEEE Computer Society Chapter on Quantum Computing (co-sponsored by the Nanotechnology Council Chapter) met for a presentation by an IBM researcher, Dr. Elena Yndurain, on that company’s recent efforts in the realm of quantum computing. I was fortunate enough to be able to attend. I was hoping the presentation would be technical enough to better understand the basics of quantum computing, in the sense of a future timeline of when this new technology would be ready for the marketplace as defined during the course of my own research (Jordan, 2010), which is to say when a working prototype would be ready for full-scale testing. I was disappointed.

During the set-up for the real purpose of the talk, the presenter stated that quantum computing could be thought of as passing through three phases of increasing complexity: (a) quantum annealing; (b) quantum simulation; and (c) universal quantum computing. Ultimately, the goal is (c), but the current state of the technology is (a).

It was also stated that there are essentially three possible technologies for quantum computing: (a) superconducting loops; (b) trapped ions; and (c) topological braiding. Both (a) and (c) require cryogenic cooling. The IBM device uses technology (a), cooled down to 15 millikelvin (whew!). Technology (b) involves capturing ions in an optical trap using lasers; it operates at room temperature but suffers from a signal-to-noise problem that (a) does not. Technology (c) was not discussed.

The IBM device is a 50-qubit machine. The basic functionality of the device is predicated on Shor’s algorithm (Shor’s algorithm, 2019) and Grover’s search algorithm (Grover’s algorithm, 2019). These mathematical algorithms were developed during the 1990s. They operate on complex amplitudes, so there is a real part and an imaginary part. When queried, the presenter stated that the gains achieved by this so-called quantum annealing device came from the simplicity of the computation, not the speed of the processor. The presenter went on to say that the basic algorithms had been coded in Python (Python (programming language), 2019).
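To give a feel for the amplitude mechanics behind Grover’s search, here is a toy classical simulation over three qubits. It is a sketch for intuition, not IBM’s implementation; the amplitudes happen to stay real in this example, though a general quantum state carries complex amplitudes:

```python
import math

# Grover's search over 8 basis states (3 qubits), simulated classically.
n_states = 8
target = 5                                    # the state the "oracle" marks
amps = [1 / math.sqrt(n_states)] * n_states   # uniform superposition

iterations = round(math.pi / 4 * math.sqrt(n_states))  # optimal ~ (pi/4) * sqrt(N)
for _ in range(iterations):
    amps[target] = -amps[target]              # oracle: flip the target's sign
    mean = sum(amps) / n_states
    amps = [2 * mean - a for a in amps]       # diffusion: inversion about the mean

print(f"P(target) after {iterations} iterations: {amps[target] ** 2:.3f}")  # ~0.945
```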

That the IBM device is based on a 50-qubit processor struck me as being a bit coincidental. Recall from my first post on this subject that there has been an effort (by some unidentified group) to develop a fault-tolerant 50-qubit device since 2000. As of the publication of the paper, this had not been achieved (Dyakonov, 2019). When I asked about this, the presenter simply stated that the IBM device was fault-tolerant but declined to offer any specific, statistically based response. It should be noted that, during the presentation, Dr. Yndurain remarked that the information included was cherry-picked [my words, not hers] to put things in the best light. Why?

What became clear during the presentation is that IBM is building an ecosystem around the 50-qubit device. They have rolled this thing out as the “Q” computer. In order to gain access to the device, researchers must “subscribe” to the IBM service or simply “get in the queue.” One also has to go through a training/vetting process to be able to develop the particular program the researcher needs to solve a particular problem. Seriously?

It seems to me this leaves two fundamental questions on the table: (a) will quantum computing be the next great disruptive innovation that supplants silicon (Schneider, 2018; Schneider & Hassler, 2019; Simonite, 2016); and (b) what was the point of the presentation?

My answer to the first question is that I remain skeptical. When queried, the presenter said that the materials used were proprietary and would not be available for use by the audience. I will also say that there was a notable lack of specific, verifiable information in the presentation materials. This suggests the answer to the second question: the point of the presentation was a sales pitch. IBM seems to be building an ecosystem around this 50-qubit device that will solidify market share in what was admittedly the very earliest stage of quantum computing. IBM seems to be continuing in the tradition of Moore’s law being a social imperative, not a physics-based phenomenon.

References

Dyakonov, M. (2019, March). The case against quantum computing. IEEE Spectrum, pp. 24-29.

Grover’s algorithm. (2019, April 5). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Grover%27s_algorithm

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi method [Doctoral dissertation, University of Phoenix].

Python (programming language). (2019, April 7). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Python_(programming_language)

Schneider, D. (2018, Dec 5). The U.S. National Academies reports on the prospects for quantum computing. Retrieved from IEEE Spectrum: https://spectrum.ieee.org/tech-talk/computing/hardware/the-us-national-academies-reports-on-the-prospects-for-quantum-computing

Schneider, D., & Hassler, S. (2019, Feb 20). When will quantum computing have real commercial value? Nobody really knows. Retrieved from IEEE Spectrum: https://spectrum.ieee.org/computing/hardware/when-will-quantum-computing-have-real-commercial-value

Shor’s algorithm. (2019, April 7). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Shor%27s_algorithm

Simonite, T. (2016, May 13). Moore’s law is dead. Now what? Retrieved from MIT Technology Review: https://technologyreview.com

The Future of Computers and Quantum Computing

Do you know what Gordon Moore actually said? In 1965, Gordon Moore observed that if you graphed the increase of transistors on a planar semiconductor device using semi-log paper, it would describe a straight line. This observation ultimately became known as Moore’s law. The “l” is lowercase in the academic literature because the law is not some grand organizing principle that explains a series of facts; rather, it was simply an observation. Moore adjusted the pronouncement in 1975, setting the doubling time at every two years (Simonite, 2016). This so-called law has been the social imperative that has fueled innovation in the semiconductor manufacturing industry for well over 50 years. But it was a social imperative only (Jordan, 2010). It was clear from the beginning that the physics of the material would eventually get in the way of the imperative.

There is a physical limit to how far you can shrink the size of the individual devices made from silicon, the underlying material of which nearly all our electronics are made. That limit appears to be about 10 nanometers (Jordan, 2010; Simonite, 2016). There are also other, more practical reasons why this limit may be unachievable, such as heat dissipation (Jordan, 2010). That said, with the cell phone industry driving the technology of late, significant strides have been made in reducing the power consumption of these devices. Lower power consumption implies less heat generation. It also seems to imply moving away from a purely von Neumann computational architecture toward a more parallel approach to code execution.

This brings us to the fundamental questions: what technology is next, and when will that technology emerge into the marketplace? My own research into these questions produced some rather interesting answers. One of the more surprising responses was the consensus about what was meant by emerging into the marketplace: the consensus of the Delphi panel I used in my research was the point when a full-scale prototype is ready for rigorous testing (Jordan, 2010). One of the most surprising answers addressed the consensus about what technology would replace silicon. My research suggests the replacement technology will be biologic in nature (RNA, perhaps?). The research also suggests this new technology should emerge within the upcoming 30 years (Jordan, 2010). Given that the research was conducted nine years ago, the new technology should be ready for full-scale prototype testing about 20 years from now. I will address why this time frame is significant shortly.

It turns out that this question of using RNA as a computational technology is being actively investigated. It would be difficult to predict to what extent this technology may mature over the next 20 years. But in its current state of development, the computational speed is measured on a scale of minutes (Berube, 2019, March 7). Ignoring the problem of how one might plug a vat of RNA into a typical Standard Integrated Enclosure (SIE) aboard a US submarine, speeds on that scale are not particularly useful.

The Holy Grail of the next generation of these technologies is undoubtedly quantum computing (Dyakonov, 2019). There seems to be a lot of energy behind trying to develop this new technology; reportedly, “…laboratories are spending billions of dollars a year developing quantum computers” (Dyakonov, 2019, p. 26). But we are left with the same question of when. Dyakonov divides projections into optimistic and “more cautious experts’ predictions” (p. 27). The optimists say between five and 10 years; the more cautious prediction is between 20 and 30 years. This more cautious realm fits with my research as well (Jordan, 2010).

The real problem with achieving a working quantum computer is the sheer magnitude of the technical challenges that must be overcome. In a conventional computer, it is the number of states of the underlying transistors that determines the computational ability of the machine: a machine with N transistors will have 2^N possible states. In a quantum computer, the basic device is typically an electron, whose spin can be up or down. The probability of a particular spin being in a particular state varies continuously, with the probabilities of up and down summing to 1. The typical term for a quantum device used this way is the “quantum gate” (Dyakonov, 2019, p. 27), or qubit. How many qubits would it take to make a useful quantum computer? The answer is somewhere between 1,000 and 100,000 (Dyakonov, 2019). This implies that, to make useful computations, a quantum machine would have to keep track of something on the order of 2^1,000, roughly 10^300, continuously varying parameters. To illustrate how big a number that is, I quote: “it is much, much greater than the number of sub-atomic particles in the observable universe” (Dyakonov, 2019, p. 27). The problem is that of errors: how would one go about observing 10^300 parameters and correcting for errors? There was an attempt in the very early years of this century to develop a fault-tolerant quantum machine that used 50 qubits. That attempt had been unsuccessful as of 2019.
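The arithmetic behind that astonishing number is plain exponentiation: an n-qubit state is described by 2^n complex amplitudes, all of which a classical description must track. A minimal sketch:

```python
import math

# 2^n = 10^(n * log10(2)), so the digit count grows linearly with n.
for n_qubits in (10, 50, 1_000):
    exponent = n_qubits * math.log10(2)
    print(f"{n_qubits:>5} qubits -> 2^{n_qubits} ~ 10^{exponent:.0f} amplitudes")
```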

The basic research being done is of considerable value and much is being learned. Will we ever see a full-scale prototype ready for rigorous testing? I am beginning to doubt it. I am of the opinion that a usable quantum computer is not unlike controlled fusion: the ultimate solution, but always about 10 years out. So next year, our quantum computer (and controlled fusion for that matter) will not be nine years out but still another 10 years.

 

References

Dyakonov, M. (2019, March). The case against quantum computing. IEEE Spectrum, pp. 24-29.

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi method [Doctoral dissertation, University of Phoenix].

Simonite, T. (2016, May 13). Moore’s law is dead. Now what? Retrieved from MIT Technology Review: https://technologyreview.com

 

 
