Scenario Plans (& Delphi Research)

I looked into the future, and the time to act is now.

Category: Scenario Planning & Delphi Research

The Future of Computers and Quantum Computing, Part Deux

On April 4, 2019, the DC chapter of the IEEE Computer Society Chapter on Quantum Computing (co-sponsored by the Nanotechnology Council Chapter) met for a presentation by an IBM researcher, Dr. Elena Yndurain, on that company's recent efforts in quantum computing. I was fortunate enough to attend. I was hoping the presentation would be technical enough to help me better understand the basics of quantum computing, particularly a future timeline for when this new technology would be ready for the marketplace as defined in my own research (Jordan, 2010), which is to say when a working prototype would be ready for full-scale testing. I was disappointed.

During the set-up for the real purpose of the talk, the presenter stated that quantum computing could be thought of as progressing through three phases of increasing complexity: (a) quantum annealing; (b) quantum simulation; and (c) universal quantum computing. Ultimately, the goal is (c), but the current state of the technology is (a).

It was also stated that there are essentially three possible technologies for quantum computing: (a) superconducting loops; (b) trapped ions; and (c) topological braiding. Both (a) and (c) require cryogenic cooling. The IBM device uses technology (a), cooled down to 15 millikelvin (whew!). Technology (b) involves capturing ions in an optical trap using lasers; it operates at room temperature but suffers from a signal-to-noise problem that (a) does not. Technology (c) was not discussed.

The IBM device is a 50-qubit machine. Its basic functionality is predicated on Shor's algorithm (Shor's algorithm, 2019) and Grover's search algorithm (Grover's algorithm, 2019), mathematical algorithms developed during the 1990s. They operate on complex-valued amplitudes, so there is a real part and an imaginary part. When queried, the presenter stated that the gains achieved by this so-called quantum annealing device come from the simplicity of the computation, not the speed of the processor. The presenter went on to say that the basic algorithms had been coded in Python (Python (programming language), 2019).
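As a concrete illustration of what Grover's search does, here is a minimal, classical state-vector simulation in plain Python/NumPy. To be clear, this is a sketch of the textbook algorithm, not IBM's code; the qubit count and the marked index below are arbitrary choices of mine.

import numpy as np

n_qubits = 5                       # 2**5 = 32 "database" entries
N = 2 ** n_qubits
marked = 19                        # arbitrary index we pretend to search for

amps = np.full(N, 1 / np.sqrt(N))  # uniform superposition over all states
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    amps[marked] *= -1             # oracle: phase-flip the marked state
    amps = 2 * amps.mean() - amps  # diffusion: inversion about the mean

print("P(marked) after", iterations, "iterations:", round(amps[marked] ** 2, 3))

Classically, finding the marked entry takes on the order of N lookups; Grover's amplification gets there in roughly the square root of N steps, which is where the claimed gains come from.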

That the IBM device is based on a 50-qubit processor struck me as being a bit coincidental. Recall from my first post on this subject that there has been an effort (by some unidentified group) to develop a fault-tolerant 50-qubit device since 2000. As of the publication of that paper, this had not been achieved (Dyakonov, 2019). When I asked about this, the presenter simply stated that the IBM device was fault-tolerant but declined to offer any specific, statistically based response. It should be stated that, during the presentation, Dr. Yndurain remarked that the information included was cherry-picked [my words, not hers] to put things in the best light. Why?

During the presentation, what became clear is that IBM is building an ecosystem around the 50-qubit device. They have rolled this thing out as the “Q” computer. In order to gain access to the device, researchers must “subscribe” to the IBM service or simply “get in the queue.” One also has to go through a training/vetting process to be able to develop the particular program needed to solve a particular problem. Seriously?

It seems to me this leaves two fundamental questions on the table: (a) will quantum computing be the next great disruptive innovation that supplants silicon dioxide (Schneider, 2018; Schneider & Hassler, 2019; Simonite, 2016); and (b) what was the point of the presentation?

My answer to the first question is that I remain skeptical. When queried, the presenter said that the materials used were proprietary and would not be available for use by the audience. I will also say that there was a notable lack of specific, verifiable information in the presentation materials. This suggests the answer to the second question: the point of the presentation was a sales pitch. IBM seems to be building an ecosystem around this 50-qubit device that will solidify market share for what was admittedly the very earliest stage of quantum computing. IBM seems to be continuing in the tradition of Moore's law being a social imperative, not a physics-based phenomenon.

References

Dyakonov, M. (2019, March). The case against quantum computing. IEEE Spectrum, pp. 24-29.

Grover’s algorithm. (2019, April 5). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Grover%27s_algorithm

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi Method. Doctoral Dissertation. AZ: University of Phoenix.

Python (programming language). (2019, April 7). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Python_(programming_language)

Schneider, D. (2018, Dec 5). The U.S. National Academies reports on the prospects for quantum computing. Retrieved from IEEE Spectrum: https://spectrum.ieee.org/tech-talk/computing/hardware/the-us-national-academies-reports-on-the-prospects-for-quantum-computing

Schneider, D., & Hassler, S. (2019, Feb 20). When will quantum computing have real commercial value? Nobody really knows. Retrieved from IEEE Spectrum: https://spectrum.ieee.org/computing/hardware/when-will-quantum-computing-have-real-commercial-value

Shor’s algorithm. (2019, April 7). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Shor%27s_algorithm

Simonite, T. (2016, May 13). Moore's law is dead. Now what? Retrieved from MIT Technology Review: https://technologyreview.com

The Future of Computers and Quantum Computing

Do you know what Gordon Moore actually said? In 1965, Moore observed that if you graphed the increase in the number of transistors on a planar semiconductor device on semi-log paper, it would describe a straight line. This observation ultimately became known as Moore's law. The “l” is lower case in the academic literature because the law is not some grand organizing principle that explains a series of facts; it was simply an observation. Moore adjusted the pronouncement in 1975, setting the doubling period at every two years (Simonite, 2016). This so-called law has been the social imperative that has fueled innovation in the semiconductor manufacturing industry for well over 50 years. But it was a social imperative only (Jordan, 2010). It was clear from the beginning that the physics of the material would eventually get in the way of the imperative.
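To see why the observation mattered so much, here is the compounding arithmetic, a back-of-the-envelope sketch that simply assumes the two-year doubling held exactly:

years = 50
doublings = years / 2      # one doubling every two years, per the 1975 version
growth = 2 ** doublings
print(f"{years} years = {doublings:.0f} doublings = a {growth:,.0f}x increase in transistor count")
# -> 50 years = 25 doublings = a 33,554,432x increase in transistor count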

There is a physical limit to how far you can shrink the individual devices made from silicon dioxide, the underlying material of which all our electronics is made. That limit appears to be about 10 nanometers (Jordan, 2010; Simonite, 2016). There are also more practical reasons why this limit may be unachievable, such as heat dissipation (Jordan, 2010). That said, given that the cell phone industry seems to be driving the technology of late, significant strides have been made in reducing the power consumption of these devices. Lower power consumption implies less heat generation. It also seems to imply moving away from a purely von Neumann computational architecture toward a more parallel approach to code execution.

This brings us to the fundamental question: what technology is next? When will that technology emerge into the marketplace? My own research into these questions resulted in some rather interesting answers. One of the more surprising responses was the consensus about what was meant by emerging into the marketplace: the consensus of the Delphi panel I used in my research was when a full-scale prototype is ready for rigorous testing (Jordan, 2010). Another surprising answer was the consensus about what technology would replace silicon dioxide. My research suggests the replacement technology would be biologic in nature, RNA perhaps? The research also suggests this new technology would certainly emerge within the upcoming 30 years (Jordan, 2010). Given that the research was conducted nine years ago, the new technology should be ready for full-scale prototype testing about 20 years from now. I will address why this time frame is significant shortly.

It turns out that this question of using RNA as a computational technology is being actively investigated. It would be difficult to predict to what extent this technology may mature over the next 20 years. But, in its current state of development, the computational speed is measured on the scale of minutes (Berube, 2019, March 7). Ignoring the problem of how one might plug a vat of RNA into a typical Standard Integrated Enclosure (SIE) aboard a US submarine, speeds on that scale are not particularly useful.

The Holy Grail of the next generation of these technologies is undoubtedly quantum computing (Dyakonov, 2019). There seems to be a lot of energy behind trying to develop this new technology; reportedly, “laboratories are spending billions of dollars a year developing quantum computers” (Dyakonov, 2019, p. 26). But we are left with the same question: when? Dyakonov divides the projections into the optimistic and the “more cautious experts' prediction” (p. 27). The optimists are saying between five and 10 years. The more cautious prediction is between 20 and 30 years. This more cautious range fits with my research as well (Jordan, 2010).

The real problem with achieving a working quantum computer is the sheer magnitude of the technical challenges that must be overcome. In a conventional computer, it is the number of states of the underlying transistors that determines the computational ability of the machine; a machine with N transistors will have 2^N possible states. In the quantum computer, the device is typically an electron, which has a spin of up or down. The probability of a particular electron's spin being in a particular state varies continuously, with the probability of up and the probability of down summing to 1. The typical term used to describe a quantum device used in this way is the “quantum gate” (Dyakonov, 2019, p. 27), or qubit. How many qubits would it take to make a useful quantum computer? The answer is somewhere between 1,000 and 100,000 (Dyakonov, 2019). This implies that to make useful computations, a quantum machine would have to keep track of something on the order of 10^300 continuously varying parameters. To illustrate how big a number that is, I quote: “it is much, much greater than the number of sub-atomic particles in the observable universe” (Dyakonov, 2019, p. 27). The problem is that of errors. How would one go about observing 10^300 values and correcting for errors? There was an attempt in the very early years of this century to develop a fault-tolerant quantum machine that used 50 qubits. That attempt had been unsuccessful as of 2019.
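Dyakonov's scale argument can be restated as simple arithmetic: N qubits are described by 2^N complex amplitudes (continuous parameters), so even the low end of the “useful” range explodes. A quick sketch, using only the qubit counts quoted above:

from math import log10

for n_qubits in (50, 1_000, 100_000):
    exponent = n_qubits * log10(2)    # log10(2**n) = n * log10(2)
    print(f"{n_qubits:>7} qubits -> roughly 10^{exponent:,.0f} amplitudes to track")
# ->      50 qubits -> roughly 10^15 amplitudes to track
# ->   1,000 qubits -> roughly 10^301 amplitudes to track
# -> 100,000 qubits -> roughly 10^30,103 amplitudes to track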

The basic research being done is of considerable value and much is being learned. Will we ever see a full-scale prototype ready for rigorous testing? I am beginning to doubt it. I am of the opinion that a usable quantum computer is not unlike controlled fusion: the ultimate solution, but always about 10 years out. So next year, our quantum computer (and controlled fusion, for that matter) will not be nine years out but still another 10 years out.


References

Dyakonov, M. (2019, March). The case against quantum computing. IEEE Spectrum, pp. 24-29.

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi Method. Doctoral Dissertation. AZ: University of Phoenix.

Simonite, T. (2016, May 13). Moore's law is dead. Now what? Retrieved from MIT Technology Review: https://technologyreview.com



More prisoners in US than any other country: Criminal (In)Justice Scenarios

Here are Scenarios and sources of the injustice in the Criminal Justice system in the USA.

The US has the most people incarcerated of any country in the world… Even though we have only 4.3% of the world’s population, we have more inmates — 2.2 million — than China (1.5m) and India (0.3m) combined (36.4% of world population)! We have 23% of China’s population but 40% more people incarcerated. We have almost 1% of our population (0.737%) incarcerated! We have a 6 times higher incarceration rate than China, 12 times higher than Japan, and 24 times the rates in India and Nigeria. That’s right, an American is roughly 12 times as likely to be incarcerated as a Japanese citizen. We even have a 20% higher incarceration rate than Russia, with 0.615% of their population in (Siberian) prisons and jails.
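A rough per-capita check of those counts (the population figures below are my approximations, circa the mid-2010s, not numbers from the sources cited here):

inmates = {"USA": 2_200_000, "China": 1_500_000, "India": 300_000}
population = {"USA": 320e6, "China": 1_380e6, "India": 1_300e6}   # approximate

rates = {c: inmates[c] / population[c] * 100_000 for c in inmates}
for country, rate in rates.items():
    print(f"{country}: about {rate:,.0f} inmates per 100,000 people")
print(f"US rate is roughly {rates['USA'] / rates['China']:.0f}x China's")

That back-of-the-envelope ratio lands right around the 6x figure quoted above.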

I know what you’re thinking: Americans must be more criminally inclined than any other country in the universe. And, no, it is not those *bleeping* Mexicans. The evidence shows that Mexican immigrants (legal or otherwise) commit fewer crimes than the typical “American”; plus, crimes involving undocumented immigrants are far more likely to go unreported.

So now, I’m at a loss. Where did the criminal genes come from? You can’t really blame the American Indians.

Some of the ugly mechanisms and profits in the prison system are summarized nicely here in ATTN by Ashley Nicole Black, Who Profits from Prisons (Feb, 2015).

“There are currently [2.2 million] American in prisons. This number has grown by 500 percent in the past 30 years. While the United States has only [4.3] percent of the world’s population, it holds 25 percent of the world’s total prisoners. In 2012, one in every 108 adults was in prison or in jail, and one in 28 children in the U.S. had a parent behind bars.”

For years I heard stats that half of the people in prison in the USA were there for non-violent (no weapon) drug offenses. That’s insane. It seems like the wrong people are institutionalized here. With the legalization of marijuana in many states, these incarceration rates should be falling. As of July 2018, 46% of US federal inmates were in for drug offenses: https://www.bop.gov/about/statistics/statistics_inmate_offenses.jsp

Okay, so what does that have to do with scenarios and scenario planning? What would some of the scenarios be that might lead to something more sane in terms of our incarceration rates? One approach would be to focus on the inflection points that might result in a lower level of criminal activity. Just one would be a new approach to the prohibition of marijuana; as we learned from alcohol, prohibition doesn’t work. But there are several other ways to provide a mechanism for less criminal activity, fewer people incarcerated, and/or shorter sentences. We’ll talk about two of our favorites at a later time: education and community engagement/involvement. (The Broken Windows concept of fixing up the community and more local engagement is very intriguing. See the article by Eric Klinenberg here.)

The big thing that escalated US incarceration rates was a get-tough-on-crime movement that began during the Nixon “I’m-not-a-crook” era. Part of this was obviously to have some tools to go after the hippies and the protesters. Tough-on-crime policies with mandatory sentences, lots of drug laws, and 3-strike laws came into being. Not to be outdone as the toughest on crime, legislators moved from 3 strikes to 2 strikes to, essentially, 1 strike. As we filled up the prisons, we had to build more.

One current trend that should increase incarceration is the epidemic of opioid-ish drug overdoses. Most forces, however, seem to be pushing toward reductions in incarceration.

Various scenarios should lead to a significant reduction in incarceration rates. The resulting scenario of low incarceration should have several ramifications. If you are in the business of incarceration, then business should – ideally – get worse and worse. GEO Group and Corrections Corp of America (now CoreCivic) should expect their business to drop off precipitously. Plus, there seem to be several movements away from private (or publicly traded) companies back toward government-run prisons, because private operation has been shown to be less effective, even if cheaper on an inmate-year basis.

Here’s a discussion of the business of incarceration. Note that the “costs” of incarceration are far, far more than the $50,000+/- it costs per year per inmate. Plus, having more people as productive members of society has them working (income and GDP) and paying taxes, not being a dead weight on society.
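Even the narrow, direct-cost arithmetic is striking (a sketch using only the rough figures above):

inmates = 2_200_000
cost_per_inmate_year = 50_000           # dollars, the rough figure quoted above
direct_cost = inmates * cost_per_inmate_year
print(f"Direct custody cost alone: about ${direct_cost / 1e9:.0f} billion per year")
# -> Direct custody cost alone: about $110 billion per year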

Do you think that the relaxation of marijuana laws might be a “Sign Post” (in scenario terms) that indicates a rapid drop in the prison population? Also, super-full employment might be a solution all by itself: people, especially kids, who can get jobs and do something more productive may be less inclined to get into drugs and mischief. There’s no reason why there needs to be only one Sign Post, or even two. In fact, the criminal justice system is just a sub-system of the economy. Multiple reinforcing systems can be really powerful.

If we do take other approaches to the incarceration system, what would those approaches be? And who (what businesses/industries) would benefit most?

What do you think? Is it time to get out of the criminal (in)justice system?

Resources

Half of the world’s incarcerated are in the US, China and Russia: http://news.bbc.co.uk/2/shared/spl/hi/uk/06/prisons/html/nn2page1.stm

Incarceration Rates: https://www.prisonpolicy.org/global/2018.html

US Against the world: https://www.statista.com/statistics/300986/incarceration-rates-in-oecd-countries/

New Yorker Article in Sept 2016 by Eric Markowitz, Making Profits on the Captive Prison Market.

How for-profit prisons have become the biggest lobby no one is talking about, by Michael Cohen in 2015.

Follow the money, in 2017, with a great infographic as to where all the prison moneys go.

Salt and Battery: When Does Storage Make Fossil Fuels Obsolete?

Last week the world’s biggest Electric Vehicle (EV) battery company made a big opening splash with its IPO. CATL is a Chinese company that IPOed with a massive 44% pop on open. The company offered up only 10% of its shares in the IPO, valuing the company at more than $12B. China has limits on how much a company can IPO at (price based on PE ratio) and a 44% limit on the amount an IPO can rise on the first day of trading. Expect this company to jump continually for some time. CATL is now the largest EV battery company in the world, primarily with lithium-ion for autos.

Of course, you can simply use power as it is generated, when it is needed. With the rapid increase in efficiencies of wind (where the wind blows) and solar (where the sun shines), this is becoming ever-more critical. Once the infrastructure of transmission lines is in place, renewable power plants are far more cost effective than any other options. Both wind and solar are now less than $.02 per kWh, and combined wind-solar is coming in at less than $.03 per kWh. Such new power can come onboard in months, not the years or decades required for other types of power.

Still, the problem is smoothing out the power for nighttime, and for when the wind is not blowing. Thus the reliance on storage if we are to move to total renewables. If – well, when – the combined renewable energy and storage costs are lower than coal, oil, and natgas, there will be no need for fossil fuels, except maybe for those places where the sun doesn’t shine (much) and the wind doesn’t blow (much).

There are many different options for storage of energy.

Fixed storage can take the form of pumped hydro, using solar power to move water back upstream to a reservoir above the existing hydro power system. It can also use mirrors to focus heat into molten salt, for example.

The old lead-acid battery technology has been tried and proven for a century and is still alive and well in golf carts.

Many players are after the battery storage market. GE is fighting hard against Tesla (whose Gigafactories build the Powerwall for fixed storage and battery packs for its cars) and Siemens. Storage options that are as good as, or better than, lithium are coming fast to market for different applications. See a great overview of new battery technologies in Pocket-lint. Battery technologies that contain more carbon, nickel, or cobalt seem very intriguing. Hydrogen options using fuel cells have been right at the edge of a mass breakthrough into the market for decades.

When will certain storage options become a game-changer for existing “built economy” such as fossil fuels?

At some point, combined renewables and storage will be sufficiently powerful and affordable to render the old fossil fuel options obsolete. McKinsey discusses this massive drop in price, and the trend, in its battery report. In 2010 battery storage cost about $1,000 per kilowatt-hour of storage; the June 2017 report shows it at $230 per kWh in 2016 and dropping fast. It should be well below $200 per kWh now. (Batteries for the Tesla Model 3 are supposed to be at about $190 per kWh based on mass manufacturing; estimates based on SEC filings are for $157 per kWh by 2020.)
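When might battery packs cross the disruptive price point discussed next? A toy projection, taking only the $230/kWh 2016 starting point quoted above and assuming a 12% annual cost decline (the decline rate is my assumption, not a McKinsey figure):

price = 230.0            # $/kWh in 2016, per the McKinsey figure cited above
target = 125.0           # the "game changer" threshold discussed below
annual_decline = 0.12    # assumed 12% cost reduction per year

year = 2016
while price > target:
    price *= 1 - annual_decline
    year += 1
print(f"At a {annual_decline:.0%} yearly decline, ~${price:.0f}/kWh is reached around {year}")
# -> At a 12% yearly decline, ~$121/kWh is reached around 2021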

So, what is the break-even point where storage becomes the game changer, and renewables with batteries deflect the entire energy industry onto another course? Apparently, $125 per kWh is the disruptive price point. A scientist at Cadenza has developed battery technology at this price point using a “super cell” and is now working on an extended version that includes the peripherals with the battery at, or below, the magical $125 per kWh. She must demonstrate that it is both cheaper and safer, so the housing is critical to avoid fires and short-circuits. “In March of this year, Cadenza published its report (pdf) saying that its super-cell technology can indeed hit that point.”

The technology is already here, yet new improvements are leap-frogging each competing option. How long before fossil fuels are an obsolete option? For plain generation alone, fossils are dead and dying. Combined generation-plus-storage is where the war will be won, however.

We argue that you really want to be careful with your oil and gas investments, because you can find yourself, like the oil patch (countries, companies, and refiners), holding stranded assets.

Moore’s law is at work in the battery complex. How long before combined renewables with storage supplants fossil fuels? Five years? Ten? Twenty?

Scenarios of Stranded Assets in the Oil Patch

The researchers over at Strategic Business Planning Company have been contemplating scenarios that lead to the demise of oil. The first part of the scenario is beyond obvious. Oil (and coal) are non-renewable resources; they are not sustainable; burning fossil fuels will stop — eventually. It might cease ungracefully, and here are a few driving forces that suggest the cessation of oil could come sooner, not later. Stated differently, if you own land that is valued based on carbon deposits, or if you own oil stocks, those assets could start to become worth less (or even worthless).

We won’t spend time on the global warming scenario and the possible ramifications of government regulation and/or corporate climate change efforts. These could/would accelerate the change to renewables. There are other drivers away from fossil fuels, including national security, Moore’s law as applied to renewables, and efficiency.

1. National Security. Think about all the terrorist groups and rogue countries. All of them get part, or all, of their funding from oil (and, to a lesser extent, natgas and coal). Russia. Iran. Lebanon, where the Russians have been enjoying the trouble they perpetuate. The rogue factions in Nigeria. Venezuela. Even Saudi Arabia is not really our best friend (15 of the 19 hijackers on 9/11 were Saudi citizens). Imagine if the world could get off of fossil fuels. Imagine all the money that would be saved by not having to defend against one country’s aggression on another if the valuable oil became irrelevant. Imagine how much everyone would save on the military. This is more than possible with current technology; with Moore’s law of continuous improvement, it becomes even more so.

2. Moore’s Law. Moore’s law became the law of the land in the computer chip world, where technology doubles every 18 months and costs are cut in half. (See our blog on The Future of Computing is Taking on a Life of Its Own; after all these decades, Moore’s law is finally hitting a wall.) In the renewable world, the price of solar is dropping dramatically while efficiency continues to increase. For example, the 30% tariff increase on imported PV roughly matches the cost reductions of the last year. In the meanwhile, battery efficiency is improving dramatically, year over year. Entire solar farms have been bid (and built) for about $.02 per kilowatt-hour, and wind and/or solar with battery backup is about $.03 per kilowatt-hour. At those prices, it is far cheaper to install renewable power than coal or natgas, especially given the years it takes to develop fossil fuel plants.

Note that we haven’t even talked about peak coal and peak oil. Those concepts are alive and well; it is just that fracking technology has pushed them back maybe 10 years from a production (supply-side) perspective. At some point you hit the maximum possible production of a non-renewable resource, and production can only go down (and prices go up) from there. World production of oil is now up to 100m barrels per day. But oil wells deplete at about 4%-5% per year, so you need roughly 4%-5% more new wells every year. Fracked wells drop about 25%-30% in the first year! So you need many more wells each year just to stay even, as the rough arithmetic below shows. But let’s go on to efficiency, probably the major demand-side force.
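A sketch of that replacement arithmetic, using the production figure and decline rates above (the 10% fracked share is purely an assumed split for illustration):

world_output = 100.0              # million barrels/day, per the figure above
conventional_decline = 0.045      # ~4-5% per year for conventional wells
frack_first_year_decline = 0.275  # ~25-30% first-year decline for fracked wells
frack_share = 0.10                # assumed share of output from young fracked wells

conventional_only = world_output * conventional_decline
blended = world_output * ((1 - frack_share) * conventional_decline
                          + frack_share * frack_first_year_decline)
print(f"Conventional only: ~{conventional_only:.1f} mbbl/day of new capacity needed per year")
print(f"With a 10% fracked share: ~{blended:.1f} mbbl/day of new capacity needed per year")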

3. Efficiency. The incandescent light bulb produces very little light: more than 95% of its 100 watts becomes heat, and just a tiny bit becomes light. With only 10-15 watts, an LED can produce the same light that required 100 watts in days of old. The internal combustion engine is hugely inefficient, producing mostly (unused) heat and directly harnessing only 10-15% of the energy in gas or diesel; plus, it takes huge amounts of energy to mine, transport, refine, transport, and retail the fuel. Electric motors are far more efficient, and they produce no toxic emissions. A great book on energy, efficiency, and trends is Ayres and Ayres, Crossing the Energy Divide. The monster power plants (nuclear, coal, natgas) have serious efficiency issues: they produce huge amounts of heat for steam turbines, but most of the heat is lost or wasted (let’s say 50%), and the electricity must then be transmitted long distances through transmission lines (where up to 40% can be lost in transmission).
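The efficiency-chain point can be restated as arithmetic, using only the rough loss figures above (they are this post's illustrative numbers, not measurements):

plant_efficiency = 0.50         # fraction of fuel energy that becomes electricity
transmission_efficiency = 0.60  # worst case above: up to 40% lost in the lines
end_to_end = plant_efficiency * transmission_efficiency
print(f"Fuel-to-socket efficiency in that worst case: {end_to_end:.0%}")

led_watts, incandescent_watts = 12, 100
print(f"An LED delivers the same light with {led_watts / incandescent_watts:.0%} of the power")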

Producing power as needed, where needed, makes so much more sense in most cases. Right now, using today’s technology, pretty much everyone can produce most of their own power (PV or wind) at about the same cost as the power monopolies.  But Moore’s law is making the renewable technology better and better every year. Add some batteries and microgrid technology and you have robust electric systems.

The losers in these trends/scenarios could be the BIG oil companies and the electric monopolies. They will fight the move until they change, or until they lose. Just like peak oil, it is a matter of time… but that time is coming faster and faster…

Saudi Arabia is trying to keep prices high enough to complete its oil Initial Public Offering so it can diversify out of oil. Venezuela is offering a new cybercoin (its Petro ICO) with barrels of buried oil as collateral (see Initial Kleptocurrency Offering). But what if that oil becomes a stranded asset? Your Petro currency becomes as worthless as the Venezuelan bolívar.

You really want to carefully consider how much and how long you want to own fossil fuel assets… Fossil fuels may be dead in a decade or two… Moore or less.

Triangulation to augment your Qual study

Triangulation in research is a lot like the old technology of geometry and surveying, where you take the distance from three known points to compute your exact location on a map… give or take a few yards. LORAN technology, using radio signals and such, was used in WWII. With a LORAN in the Gulf, I remember being able to find where we were on a sailboat, approximately. The problem was that we were in an area of the Gulf of Mexico with only two LORAN readings. With three, you can triangulate; with two, you can approximate.

Triangulation in academic research is the kind of thing you can do to augment your Qual study. As discussed elsewhere, Delphi studies might need to be recharacterized as mixed-method if some of the research is sufficiently quantitative, i.e., if the second round has a lot of respondents and it makes sense to do stats, like correlations on several variables.

So, in any qual study, you might consider including triangulation. There are a few types of triangulation (depending on your source), but let’s focus on just two: data and lit/theoretical. Data triangulation applies when you can find published statistics in the area that allow for some corroboration of the findings from the study. In terms of data, maybe some stats that give an estimate of the independent and/or dependent variables (predictor and predicted variables in the QUAL world), or possibly even the intersection of the two. Does the available data align with the findings of the study?

Data internal to a study should be kept separate from external data triangulation. In Delphi studies, for example, there might be an alignment between the more general findings from round 1 and the rankings of round 2. This offers internal consistency.

One of the coolest, and potentially strongest, aspects of triangulation is literature (or theory) triangulation. Does the existing literature align with some of the key themes found in your QUAL study? Think of this as a meta-study lite. For a meta-study, there needs to be a lot of prior research, and a deep dive into that research can yield a table of results that support, don’t support, or contradict various themes.

Here is a very interesting approach for triangulation within a Delphi study (Hopf, Francis, Helms, Haughney, & Bond, 2016). Find the article here at BMJ Open. For past studies that did not address a specific topic, they used the bizarre label of “Silence”, as in not addressed in the specific study. A better label would probably be “not addressed” (n.a.). (The implication of silence is that the authors intentionally avoided that specific issue in their study.)

So, consider including one of the 4 or 5 types of triangulation in your qual study to strengthen the support for your findings (or to highlight divergent findings). For the regular researcher (say, a dissertation), consider simply doing a meta-analysis and avoiding all that messy questionnaire stuff, if the field is full of existing research.

If you use Delphi, you will be able to project into the future. You can explore how some of the themes identified in the research grow or wane in an uncertain future, and what conditions (triggers) might initiate major future disruption, i.e., scenario analysis.

References

Hopf, Y. M., Francis, J., Helms, P. J., Haughney, J., & Bond, C. (2016). Core requirements for successful data linkage: an example of a triangulation method. BMJ Open, 6(10), e011879. doi:10.1136/bmjopen-2016-011879 Retrieved from: http://bmjopen.bmj.com/content/6/10/e011879



Consensus: Let’s agree to look for agreement, not consensus

Most of the hunters (academic researchers) searching for consensus in their Delphi research are new to the sport. They believe that they must bag really big game or come home empty-handed. But we don’t agree. In fact, once you have had a chance to experience Delphi hunting once or twice, your perception of the game changes.

Consensus is a BIG dilemma within Delphi research. However, it is generally an unnecessary consumer of time and energy. The original Delphi Technique used by the RAND Corporation aimed for consensus in many cases. That is, the U.S. government could either enter a nuclear arms race or not; there really was no middle ground. Consequently, it was counterproductive to build a technique that could not reach consensus. It became binary: reach consensus, and a plan could be recommended to the president; no consensus, and this too was useful, but less helpful, in informing the president. (The knowledge that the experts could not come up with a clear path forward, even when using a structured assessment process, is also very good to know.)

Consensus. The consensus process – getting teams of experts to think through complex problems and come up with the best solutions – is critical to effective teamwork and to the Delphi process. In most cases, however, it is not necessary – or even desirable – to come up with the one and only best solution. So long as there is no confusion about the facts and the issues, forcing a consensus when there is none is counter-productive (Fink, Kosecoff, Chassin & Brook, 1984; Hall, 2009, pp. 20-21).

Table 1 shows the general characteristics of various types of nominal group study techniques (Hall & Jordan, 2013, p. 106). Note that the so-called traditional Delphi Technique and the UCLA-RAND appropriateness approaches aim for consensus. The so-called Modified Delphi might not search for consensus and might not utilize experts. Researchers use the UCLA-RAND approach extensively to look for the best medical treatment protocol when only limited data are available, relying heavily on the expertise of the doctors involved to suggest – sometimes based on their best and informed guess – what protocol might work best. The doctors can only recommend one protocol. Consensus is needed here.

(Table reprinted with permission from Hall and Jordan, 2013, p. 106.)

But consensus is rarely needed, although some degree of agreement is usually found in business research, and even in most academic research. For example, the topic may be the most important factors among best business practices. Of the total list of 10 to 30 factors, few are MOST important. Often, the second round of Delphi aims to prioritize the qualitative factors identified in round 1. There are usually natural separation points between the most important factors (e.g., 4.5 out of 5), those of medium importance (3 out of 5), and the low-importance factors.

Those researchers who are fixated on consensus might spend time, maybe a lot of time, trying to find that often elusive thing called consensus. There are usually varying levels of agreement. Five doctors might agree on one single best protocol, but 10 probably won’t, unanimously. Interestingly, as the number of participants increases, the ability to speak statistically about the results increases; however, the likelihood of pure, 100% consensus diminishes. For example, a very small study of five doctors reaches unanimous consensus; but when it is repeated with 30 doctors, there is only 87% agreement. Obviously, one would prefer the quantitative and statistically significant results from the second study. (Usually you are forecasting with Delphi; 100% agreement implies a degree of certainty about an uncertain future, and this can easily result in the misapplication of a very useful planning/research tool.)

This brings us to qualitative Delphi vs. a more quantitative, mixed-method Delphi. Usually Delphi is considered QUAL for several reasons. It works with a small number of informed, or expert, panelists. It usually gathers qualitative information in round 1. However, the qualitative responses are prioritized and/or ranked and/or correlated in round 2, round 3, etc. If a larger sample of participants results in 30 or more respondents in round 2, then the study probably should be upgraded from a purely qualitative study to mixed-method. That is, if the level of quantitative information gathered in round 2 is sufficient, statistical analysis can be meaningfully applied. Then you would look for statistical results (central tendency, dispersion, and maybe even correlation). You can compute a confidence interval for each of your factors: those that are very important (say, 8 or higher out of 10, +/- 1.5) and those that aren’t. In this way, you could find the factors that are both important and statistically more important than other factors: a great time to declare a “consensus” victory.
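Here is a minimal sketch of those round-2 statistics, using hypothetical panel ratings on a 1-10 scale (the factor names and numbers are invented for illustration):

import statistics as st

round2 = {
    "Factor A": [9, 8, 10, 9, 7, 9, 8, 10, 9, 8],
    "Factor B": [6, 5, 7, 6, 6, 5, 7, 6, 5, 6],
    "Factor C": [3, 4, 2, 3, 4, 3, 2, 3, 4, 3],
}

for factor, ratings in round2.items():
    mean = st.mean(ratings)
    sem = st.stdev(ratings) / len(ratings) ** 0.5   # standard error of the mean
    ci = 1.96 * sem                                 # rough 95% interval (normal approx.)
    verdict = "important" if mean - ci >= 8 else "not clearly important"
    print(f"{factor}: {mean:.1f} +/- {ci:.1f} -> {verdict}")

With 30 or more respondents, the same calculation lets you say that a top factor is not just highly rated but statistically distinguishable from the middling ones, which is usually all the “consensus” you need.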

TIP: Consider using more detailed scales. A 5-point Likert-type scale will not provide the same statistical detail as a 7-point, a 10-point, or maybe even a ratio 0-100% scale, if it makes sense.

Subsequently, in the big game hunt for consensus, most hunters continue to look for the long-extinct woolly mammoth. Maybe they should “modify” their Delphi game for an easier search for success instead . . .

What do you think?

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2: Research Methodology, (pp. 3-27). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from: http://www.RefractiveThinker.com/

Hall, E. B., & Jordan, E. A. (2013). Strategic and scenario planning using Delphi: Long-term and rapid planning utilizing the genius of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. II. Research methodology (3rd ed.). (pp. 103-123) Las Vegas, NV: The Refractive Thinker® Press.

Scenarios Now and the Genius (Hidden) within the Crowd

It’s been about 10 years since the Great Recession of 2007-2008. (It formally started in December of 2007.) A 2009 McKinsey study showed that CEOs wished they had done more scenario planning that would have made them more flexible and resilient through the Great Recession. Hall (2010) discusses the genius of crowds and group planning – especially scenario planning.

The Hall article spent a lot of time assessing group collaboration, especially collaboration that utilizes the power available via the Internet. Wikipedia is one of the greatest – and most successful – collaboration tools of all time. It is a non-profit that engages millions of volunteers daily to add content and regulate the quality of the facts. In this day of faux news, Wikipedia is a stable island in the turbulent ocean of content. Anyone who has corrections to make to any page (called an article) is encouraged to do so. However, the corrections need to be fact-based and source-rich. Unlike a typical wiki, where anything goes, the quality of content is very tightly controlled. As new information and research comes out on a topic, Wikipedia articles usually reflect those changes quickly and accurately. Bogus information usually doesn’t make it in, and biased writing is usually flagged. Sources are requested when an unsubstantiated fact is presented.

Okay, that’s one of the best ways to use crowds. People with an active interest – and maybe even a high level of expertise – update the content. But what happens when the crowd is a group of laypeople? Jay Leno made an entire career from the “wisdom” of people on the street when he was out Jay Walking. The lack of general knowledge in many areas is staggering. Info about the latest scandal or celebrity gossip, on the other hand, might be really well circulated. So how can you gather information from a crowd of people when the crowd may be generally wrong?

It turns out that researchers at MIT and Princeton have figured out how to use statistics to determine when the crowd is right and when the informed minority is much more accurate (Prelec, Seung, & McCoy, 2017). (See a Daniel Akst overview WSJ article here.) Let’s say you are asking a lot of people a question on which the general crowd is misinformed. The answer, on average, will be wrong. There might be a select few in the crowd who really do know the answer, but their voices are drowned out, statistically speaking. These researchers took a very clever approach: they ask a follow-on question about what everyone else will answer. The people who really know will often have a very accurate idea of how wrong the crowd will be. So the questions with big disparities can be identified, and you can give credit to the informed few while ignoring the loud noise from the crowd.
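A sketch of that “surprisingly popular” rule from Prelec, Seung, and McCoy (2017): pick the answer whose actual popularity most exceeds its predicted popularity. The vote and prediction shares below are invented for illustration, using the Pennsylvania state-capital question as the example.

# Question: "What is the capital of Pennsylvania?" (illustrative numbers only)
actual_votes = {"Philadelphia": 0.65, "Harrisburg": 0.35}     # share choosing each answer
predicted_share = {"Philadelphia": 0.80, "Harrisburg": 0.20}  # average predicted share

surprise = {a: actual_votes[a] - predicted_share[a] for a in actual_votes}
winner = max(surprise, key=surprise.get)
print(f"Surprisingly popular answer: {winner} (surprise = {surprise[winner]:+.2f})")
# Harrisburg wins: it is less popular than Philadelphia, but more popular
# than the crowd predicted it would be -- and it is, in fact, correct.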

Very cool. That’s how you can squeeze out knowledge and wisdom from a noisy crowd of less-than-informed people.

The question begs to be asked, however: Why not simply ask the respondents how certain they are? Or, maybe, ask the people of Pennsylvania what their state capital is, not the residents of the other 49 states, who will generally get it wrong. Maybe even put some money on it, so that correct answers pay off and incorrect answers are costly, such that only the crazy or the informed will “bet the farm” on answers where they are not absolutely positive?

But then, that too is another study.

Now, to return to scenario planning. Usually with scenario planning, you would have people that are already well informed. However, broad problems have different silos of expertise. Maybe a degree of comfort or confidence would be possible in the process of scenario creation. Areas where a specific participant feels more confident might get more weight than other areas where their confidence is lower. Hmm… Sounds like something that could be done very well with Delphi, provided there were well informed people to poll.

Note scenarios are different from probabilities… Often scenarios are not high probabilities… You are usually looking at possible scenarios that are viable… The “base case” scenario is what goes into the business plan so that may be the 50% scenario; but all the other scenarios are everything else. The base case is only really likely to occur if nothing major changes in the macro and the micro economic world. Changes always happen, but the question is, does the change “signal” that the bus has left the freeway, and now new scenario(s) are at play.

On average, a recession occurs about every 7 years into a recovery. We are about 10 years into the recovery from the Great Recession. Of course, many of the Trump factors could be massively disruptive. Not to name them all, but the most positive case, 4% to 5% economic growth in the USA, should be a scenario that every business is considering. (A strengthening US and world economy may, or may not, be directly caused by Trump.) The nice thing about sound scenario planning is that as new “triggers” arise, they may (should) lead directly into existing scenarios.

Having no scenario planning in your business plan… now that seems like a very bad plan.

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (2nd ed., pp. 3-28). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Hall, E. (2010). Innovation out of turbulence: Scenario and survival plans that utilizes groups and the wisdom of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. 5. Strategy in innovation (5th ed., pp. 1-30). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Prelec, D., Seung, H. S., & McCoy, J. (2017, January 26). A solution to the single-question crowd wisdom problem. Nature, 541(7638), 532-535. doi:10.1038/nature21054 Retrieved from: http://www.nature.com/nature/journal/v541/n7638/full/nature21054.html

Outa Time: the tick-tock of Intel and modern computing

Ed Jordan’s dissertation research looked at the future of computing. He was inspired by the thought that Gordon’s law (Moore’s law) of computing — 18 months to double speed (and halve price) — was about to break down because of the limitations of silicon chips as they go below the 14 nanometer level. Since Intel lives and dies by the silicon chip, his research was really a look into the future: when will the old chip die, and what will be the next technology?

Jordan and Hall discuss the application of this disruptive technology, in the context of Integrated Product Teams, in their DoD procurement planning chapter in The Refractive Thinker.

His research showed that the death of the silicon chip computer would come sooner, not later, and that several options appeared likely, including quantum computing. Scientists have just made a huge breakthrough toward quantum computing: see the WSJ article about it here, as published in the journal Nature.

In the meantime, Intel’s decades-long cadence of a process shrink one year (“tick”) and a new microarchitecture the next (“tock”) has broken down. The so-called tick-tock of Intel is now outa time. The clock cycle seems to be more like 2 years (4 years, really).

So, will Intel die with the new technologies? Obviously, Intel can simply invent the disruptive technologies internally, or buy them up wherever viable inventions well up.

References

Debnath, S., Linke, N. M., Figgatt, C., Landsman, K. A., Wright, K., & Monroe, C. (2016). Demonstration of a small programmable quantum computer with atomic qubits. Nature, 536(7614), 63–66. doi:10.1038/nature18648

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi Method. (Doctoral Dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3442759)

Jordan, E. A., & Hall, E. B. (2016). Group decision making and Integrated Product Teams: An alternative approach using Delphi.  In C. A. Lentz (Ed.), The refractive thinker: Vol. 10. Effective business strategies for the defense sector. (pp. 1-20) Las Vegas, NV: The Refractive Thinker® Press. ISBN #: 978-0-9840054-5-1. Retrieved from: http://refractivethinker.com/chapters/rt-vol-x-ch-1-defense-sector-procurement-planning-a-delphi-augmented-approach-to-group-decision-making/

The Conundrum of middle management, HR experts and Delphi research.

Here is the overview of the RefractiveThinker™ chapter by Lentz (2009) that discussed some of her findings related to using HR experts in a single-round, quantitative Delphi study. See the prior blog discussion related to using a 1-round quant Delphi method here.

The overview of the 2009 chapter by Lentz, The modified ask-the-experts Delphi method: The conundrum of human resource experts on management participation, is this:

“[The] Lentz Dissertation study … was … a quantitative correlational explanatory method, using a modified Ask-the-Experts Delphi technique to determine if the traditionally held view of the strategic management process where strategic decision making had once been entrusted solely to the organization’s top management was still valid. Historically, only those in senior leadership positions within the executive office were felt to understand and employ strategic literacy in order to possess the skill, knowledge, and expertise to most effectively formulate corporate strategy and make strategic decisions. The purpose of the present study was to extend the foundational work of Wooldridge and Floyd from their 1990 study, using the modified Delphi Technique to look at the significance of additional employee involvement in the strategic decision-making process as it correlates to organizational performance.”

Based on the 1990 work by Wooldridge and Floyd, this dissertation was able to skip over round 1 of a typical Delphi study. She hoped that the HR experts would corroborate the findings of the “Floyd Boyz,” as she called them. Assuming the prior research was corroborated, she would then feel comfortable extending the research further and obtaining a better understanding of the involvement of middle management in the strategic planning world.

But, she didn’t get that first round of confirmation in the statistical analysis she was expecting!? Maybe things have changed since 1990? That seems likely. Maybe the HR experts weren’t so expert after all? Hmmm…  Maybe Delphi doesn’t always do what it hopes to do? Hmmm….

Sounds like a conundrum?

In the meanwhile, it seems that middle management is the late, great Rodney Dangerfield of strategic planning and decision making: it gets no respect.

References

Lentz, C. A. (2007).  Strategic decision making in organizational performance: A quantitative study of employee inclusiveness. D.M. dissertation, University of Phoenix, Arizona. Dissertations & Theses @ University of Phoenix database. (Publication No. AAT 3277192).

Lentz, C. (2009). The modified ask-the-experts Delphi method: The conundrum of human resource experts on management participation. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2: Research Methodology, (pp. 51-75). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from: http://refractivethinker.com/rt-vol-ii/

