Scenario Plans (& Delphi Research)

I looked into the future, and the time to act is now.

Category: Horizon Planning


The Future of Computing Is Taking on a Life of Its Own

Previously, we talked about the Tick-Tock cadence of computing at Intel, and how Gordon Moore’s law of computing – roughly 18 months to double speed (and halve price) – is starting to hit a brick wall (Outa Time, the tic-toc of Intel and modern computing). Breaking through the 14-nanometer barrier is a physical limitation inherent in silicon chips that will be hard to surpass. Ed Jordan’s dissertation addressed this limit, and his Delphi study showed what the next technology might likely be, and how soon it might be viable. His study found that several technologies were looming on the horizon (likely less than 50 years out), and that organic computing (i.e., protein-based) was the most promising, and should arrive sometime within the next 30 years.

Apparently quantum computing technology is here and now – kinda – especially at Google. See the Nicas (2017) WSJ article about quantum computing in The Future of Computing. As the article relates, the expert Neven is pretty certain that no one fully understands quantum physics. At the atomic level, a qubit can be both on and off at the same time. The conversation goes into parallel universes and such… both here and there, simultaneously. The quantum computer runs isolated from its surroundings, at absolute zero temperature (give or take a fraction of a degree). Storage density using qubits is unimaginable. The computer works completely differently, however: it eliminates the non-feasible to arrive at good answers, though not necessarily the best answer. Heuristics, kinda. The error rate is humongous, apparently, requiring maybe 100 error-correction qubits for every single working qubit.
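To make the “both on and off at the same time” idea concrete, here is a toy sketch (not a real quantum simulator) of what measurement does to an equal superposition: each amplitude’s squared magnitude gives the probability of seeing that outcome, so repeated measurements split roughly 50/50. The function name and figures are illustrative assumptions, not from the article.

```python
import random

# Toy model: a qubit in an equal superposition has amplitudes
# (1/sqrt(2), 1/sqrt(2)); measuring it yields 0 or 1 with
# probability |amplitude|^2 = 0.5 each, and the superposition collapses.
def measure_superposed_qubit(rng: random.Random) -> int:
    p_zero = 0.5  # |1/sqrt(2)|^2
    return 0 if rng.random() < p_zero else 1

rng = random.Random(42)  # fixed seed so the demo is repeatable
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure_superposed_qubit(rng)] += 1

print(counts)  # roughly a 50/50 split between 0 and 1
```

This is classical sampling, of course – it captures only the measurement statistics, not interference or entanglement, which is precisely what makes real quantum hardware (and its error correction) so hard.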

Ed Jordan was reminiscing about quantum computing yesterday… “Basically, all computing in all its permutations needs to be rethought. Quantum computing is sort of the Holy Grail. One could argue it is sort of like controlled fusion: always just 10 years away. Ten years ago, it was 10 years away. Ten years from now it may still be ten years away. There is a truckload of money being thrown at it. But there isn’t anything mature enough yet to do anything that looks like real computing. The problem is: how do you read out the results? Like Schrödinger’s cat, that qubit could be alive or dead, and by looking at it you cause different results to happen – as opposed to something that exists independent of your observation.”

Quantum computing is now moving past the technically impossible into the proven and functional, and maybe soon into the commercially viable. The players in this market are Google (Alphabet), IBM and apparently the NSA (if whistleblower Snowden is to be believed).

Intel may not be able to capitalize on the next generation of computing. Some computations, such as breaking encryption, could probably be done in a couple of seconds on a quantum computer, even though they might take multiple current silicon computers a lifetime. There are several potential uses of the quantum computer that make businesses and security targets very nervous.

Jordan and Hall (2016) talk about using Delphi to anticipate inflection points on the horizon, including scenarios that could arrive via quantum computing, or bio-computing for that matter. Drawing on experts or otherwise informed people can make such inflection points more evident, and contingency planning more effective.

One of the most interesting things about the Nicas article is its look at the breakthroughs in computing technology, compared against Jordan’s 2010 dissertation. Jordan found that two or three types of technology should be feasible within 25 to 40 years and viable in application within about 30 to 50 years – in his case, as early as about 2040. Note that the experts discussed by Nicas pegged full application of a quantum computer at about 2026; that is when digital security will take on a whole new level of risk. It also makes you wonder how blockchain (and Bitcoin) will fare in the new age of supersonic computing.

This seems like a great time to start working on security safeguards that are nothing like the current technology. Can you imagine the return of no-tech or low-tech? Kinda reminds you of the revival of the old “brick” phones for analog service (in the middle of the Everglades).


Debnath, S., Linke, N. M., Figgatt, C., Landsman, K. A., Wright, K., & Monroe, C. (2016). Demonstration of a small programmable quantum computer with atomic qubits. Nature, 536(7614), 63–66. doi:10.1038/nature18648

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi Method (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3442759)

Jordan, E. A., & Hall, E. B. (2016). Group decision making and Integrated Product Teams: An alternative approach using Delphi. In C. A. Lentz (Ed.), The refractive thinker: Vol. 10. Effective business strategies for the defense sector (pp. 1-20). Las Vegas, NV: The Refractive Thinker® Press. ISBN: 978-0-9840054-5-1.

Nicas, J. (2017, November/December). Welcome to the quantum age. The Future of Computing, The Wall Street Journal.

Intel and Mobile Computing: An Eye on BIG Computing on the Move

We are rapidly moving toward one of the most disruptive innovations in modern computing: truly mobile computing – the driverless car. These cars are going to have a lot of computing power on board. They will need to be self-contained, after all, when going through a tunnel or parking lot. But they will also be amassing massive amounts of data – about 4 terabytes per day for the average self-driving car. Wow. And current mobile data plans start to charge or throttle you after roughly 10 GB of usage per month.
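The scale mismatch is worth a back-of-the-envelope check. Using the 4 TB/day figure from the text, plus an assumed ~10 GB/month throttle point for a typical consumer data plan (an assumption, not a quoted figure):

```python
# Back-of-the-envelope: one self-driving car's data output versus a
# typical consumer mobile plan's monthly throttle point.
car_tb_per_day = 4                   # figure cited in the text
gb_per_tb = 1_000
plan_cap_gb_per_month = 10           # assumption: common "unlimited" plan throttle point

car_gb_per_month = car_tb_per_day * gb_per_tb * 30
ratio = car_gb_per_month / plan_cap_gb_per_month
print(f"One car generates ~{car_gb_per_month:,} GB/month, "
      f"roughly {ratio:,.0f}x a typical phone plan's cap")
```

On those assumptions, a single car generates on the order of ten thousand times what a phone plan tolerates in a month – which is why most of that data will have to be processed on board rather than shipped over cellular networks.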

Read about this in a great WSJ article by Greenwald on March 13. It focuses on the companies in play and the new bid by Intel to buy MobilEye for $15.3B – the look-around and self-driving technology going into GM, VW, and Honda cars. The 34% premium shows how important this tech is to the slumbering tech giant.

What’s all the fuss about driverless cars? How does going driverless impact the future: what are the potential disruptions, problems, and/or discontinuities? How could this technology alter the strategic plans of many market leaders?

It seems likely that the majority of Americans will reject using/supporting driverless vehicles… for a while. Going driverless removes individual control and the sense of power that comes with decision making. One cannot demonstrate a charged-up ego to a potential partner when a computer and sensors are driving the speed limit behind a school bus. A driver can suddenly opt for a shortcut or a scenic route known by heart; not so the driverless vehicle. However, Tesla drivers have already been reprimanded for letting the car do too much of the driving under too many unusual circumstances.

Just a few things to think further about: Long-Haul Trucking and Enabling Technologies.

Long-haul trucking. There is a major shortage of truck drivers. Labor rules don’t let drivers do long hauls without breaks or rest, so long-haul runs often use two drivers for the same truck going coast to coast. If the truck needs to stop and drop along the way, a person on board might still be necessary – though drops and pickups usually have someone at the warehouse who can assist. How will the truck fuel itself at the Flying J truck stops? If we can refuel fighter jets in mid-air, we can figure out how to fuel a driverless truck. One obvious solution – or not so obvious, if you’re not in the habit of longer-term, sustainable thinking – is to move to electric trucks with charging pads: simply drive the electric truck over a rapid-charging pad. Rapid-charge technology is already generally available using current technologies (especially with minor improvements in batteries and charging).

Enabling Technology Units (ETUs). The MobilEye types of technology apply to lots and lots of other situations: trucks, farm tractors, forklifts, etc. Much of the technology being developed for the driverless car is what Hall and Hinkelman (2013) refer to as Enabling Technology Units (ETUs) in their guidebook to patent commercialization. The base technologies have many broad-based applications beyond the obvious direct market application. It is the Internet of Things, when the “things” are mobile, or when the “things” around them are mobile, or both. This is an interesting future for mobile computing.


Hall, E. B., & Hinkelman, R. M. (2013). Perpetual Innovation™: A guide to strategic planning, patent commercialization and enduring competitive advantage, Version 2.0. Morrisville, NC: LuLu Press. ISBN: 978-1-304-11687-1.

Scenarios Now and the Genius (Hidden) Within the Crowd

It’s been about 10 years since the Great Recession of 2007-2008. (It formally started in December of 2007.) A 2009 McKinsey study showed that CEOs wished they had done more scenario planning – planning that would have made them more flexible and resilient through the Great Recession. Hall (2010) discusses the genius of crowds and group planning – especially scenario planning.

The Hall article spent a lot of time assessing group collaboration, especially the power available via the Internet. Wikipedia is one of the greatest – and most successful – collaboration tools of all time. It is a non-profit that engages millions of volunteers daily to add content and regulate the quality of the facts. In this day of faux news, Wikipedia is a stable island in a turbulent ocean of content. Anyone who has corrections to make to any page (called an article) is encouraged to make them. However, the corrections need to be fact-based and source-rich. Unlike a typical wiki, where anything goes, the quality of content is very tightly controlled. As new information and research comes out on a topic, Wikipedia articles usually reflect those changes quickly and accurately. Bogus information usually doesn’t make it in, and biased writing is usually flagged. Sources are requested when an unsubstantiated fact is presented.

Okay, that’s one of the best ways to use crowds: people with an active interest – and maybe even a high level of expertise – update the content. But what happens when the crowd is a group of laypeople? Jay Leno made an entire career from the “wisdom” of people on the street in his “Jaywalking” segments. The lack of general knowledge in many areas is staggering. Info about the latest celebrity scandal or gossip, on the other hand, might be really well circulated. So how can you gather information from a crowd of people when the crowd may be generally wrong?

It turns out that researchers at MIT and Princeton have figured out how to use statistics to tell when the crowd is right and when an informed minority is much more accurate (Prelec, Seung & McCoy, 2017). (See Daniel Akst’s overview article in the WSJ.) Let’s say you ask a lot of people a question on which the general crowd is misinformed. The answer, on average, will be wrong. There might be a select few in the crowd who really do know the answer, but their voices are drowned out, statistically speaking. These researchers took a very clever approach: they asked a follow-on question about what everyone else would answer. The people who really know often have a very accurate idea of how wrong the crowd will be. So the questions with big disparities can be identified, and you can give credit to the informed few while ignoring the loud noise from the crowd.
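The core of the Prelec, Seung and McCoy (2017) “surprisingly popular” idea can be sketched in a few lines: pick the answer whose actual vote share most exceeds its predicted vote share. The numbers below are made up for illustration, not data from the paper.

```python
# "Surprisingly popular" selection: the answer chosen more often than the
# crowd predicted it would be is the one the informed minority backs.
def surprisingly_popular(votes, predictions):
    """votes: answer -> fraction of respondents who chose it.
    predictions: answer -> average predicted fraction who would choose it."""
    return max(votes, key=lambda a: votes[a] - predictions[a])

# Example: "Is Philadelphia the capital of Pennsylvania?" Most say yes,
# but the informed few (who know it's Harrisburg) correctly predict that
# most people WILL say yes, inflating the predicted "yes" share.
votes = {"yes": 0.65, "no": 0.35}
predictions = {"yes": 0.80, "no": 0.20}  # crowd's guess about the crowd

print(surprisingly_popular(votes, predictions))  # prints "no" (the correct answer)
```

“No” wins because it is endorsed more than the crowd expected (0.35 actual vs. 0.20 predicted), while “yes” underperforms its prediction – the big disparity the article describes.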

Very cool. That’s how you can squeeze out knowledge and wisdom from a noisy crowd of less-than-informed people.

The question begs to be asked, however: Why not simply ask the respondents how certain they are? Or, maybe, ask the people of Pennsylvania what their state capital is, rather than the other 49 states, who will generally get it wrong? Maybe even put some money on it, adding an incentive for true positives combined with costly incorrect answers, such that only the crazy or the informed would “bet the farm” on answers where they are not absolutely positive.

But then, that too is another study.

Now, to return to scenario planning. Usually with scenario planning you would have people who are already well informed. However, broad problems span different silos of expertise. Maybe a degree of comfort or confidence could be captured in the process of scenario creation: areas where a specific participant feels more confident might get more weight than areas where their confidence is lower. Hmm… Sounds like something that could be done very well with Delphi, provided there were well-informed people to poll.
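One minimal way to sketch that confidence-weighting idea for a Delphi round – entirely hypothetical, not a method from the cited sources – is to have each panelist rate both a scenario’s likelihood and their own confidence, then weight each estimate by its stated confidence:

```python
# Hypothetical confidence-weighted aggregation for one Delphi question:
# each response is (estimate in [0,1], self-rated confidence in [0,1]).
def weighted_estimate(responses):
    """Return the confidence-weighted mean of the panel's estimates."""
    total_weight = sum(conf for _, conf in responses)
    if total_weight == 0:
        raise ValueError("no confident responses to aggregate")
    return sum(est * conf for est, conf in responses) / total_weight

panel = [(0.70, 0.9),   # expert inside this silo, very confident
         (0.40, 0.2),   # outside their specialty, low confidence
         (0.60, 0.8)]
print(round(weighted_estimate(panel), 3))  # prints 0.626
```

The low-confidence outsider barely moves the result, which is the point: opinions from outside a participant’s silo of expertise get discounted rather than discarded.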

Note that scenarios are different from probabilities. Often scenarios are not high-probability events; you are usually looking at possible scenarios that are viable. The “base case” scenario is what goes into the business plan, so that may be the 50% scenario; all the other scenarios together cover everything else. The base case is only really likely to occur if nothing major changes in the macro- and micro-economic world. Changes always happen, but the question is: does the change “signal” that the bus has left the freeway, and that new scenario(s) are now at play?

On average, a recession occurs about every seven years; we are about 10 years into the recovery from the Great Recession. Of course, many of the Trump factors could be massively disruptive. Not to name them all, but in the most positive case, 4% to 5% economic growth in the USA should be a scenario that every business is considering. (A strengthening US and world economy may, or may not, be directly caused by Trump.) The nice thing about having sound scenario planning is that, as new “triggers” arise, they may (should) lead directly into existing scenarios.

Having no scenario planning in your business plan… now that seems like a very bad plan.


Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (2nd ed., pp. 3-28). Las Vegas, NV: The Lentz Leadership Institute.

Hall, E. (2010). Innovation out of turbulence: Scenario and survival plans that utilizes groups and the wisdom of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. 5. Strategy in innovation (5th ed., pp. 1-30). Las Vegas, NV: The Lentz Leadership Institute.

Prelec, D., Seung, H. S., & McCoy, J. (2017, January 26). A solution to the single-question crowd wisdom problem. Nature, 541(7638), 532-535. doi:10.1038/nature21054

Using Delphi Method for planning… including Scenario Planning and Horizon Plans.

This site discusses and tracks the use of Delphi-type methods in doing all kinds of research: academic, theoretical and real-world. Businesses can use the Delphi method to identify key issues, develop scenario plans and/or do horizon planning.

Strategic Business Planning uses methods similar to those of the Delphi Method. A strategic planning workshop for plan development uses a modified SWOT situational analysis, for example. But the Delphi Method works best for horizon planning, future new-product planning, and scenario planning. We like to integrate disaster recovery planning (business continuity planning) into the scenario planning process.

The strategic planning company (Hall, Hinkelman and associates) has done research and published on scenario planning and Delphi Method research.

Find these articles/books at:

  • SBP’s Storefront at LuLu Press (Chapter 8 of the Guide 2.0, as well as the Economic Development Plan.)
  • Refractive Thinker® (Look for articles/chapters on Delphi research, including the Delphi Primer.)


We like to look for that future inflection point where it would be clear to everyone, including the dog: “Toto, I’ve a feeling we’re not in Kansas anymore.”
