MountainScenarios

The Future of Computers and Quantum Computing

Do you know what Gordon Moore actually said? In 1965 Gordon Moore observed that if you graphed the increase in the number of transistors on a planar semiconductor device using semi-log paper, it would describe a straight line. This observation ultimately became known as Moore's law. The "l" is lower case in the academic literature because the law is not some grand organizing principle that explains a series of facts; rather, it was simply an observation. Moore adjusted the pronouncement in 1975, setting the vertical scale at a doubling every two years (Simonite, 2016). This so-called law has been the social imperative that has fueled innovation in the semiconductor manufacturing industry for well over 50 years. But it was a social imperative only (Jordan, 2010). It was clear from the beginning that the physics of the material would eventually get in the way of the imperative.
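
To see what that straight line on semi-log paper implies, here is a minimal sketch in Python of the doubling-every-two-years formula. The starting point (about 2,300 transistors on the 1971 Intel 4004) is my own assumed baseline, not a figure from the post.

# Sketch of Moore's law as an exponential: N(t) = N0 * 2**((t - t0) / T)
# Assumptions (not from the post): N0 = 2,300 transistors (Intel 4004, 1971), T = 2 years.
def transistors(year, n0=2300, t0=1971, doubling_years=2.0):
    """Projected transistor count under a doubling every `doubling_years` years."""
    return n0 * 2 ** ((year - t0) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")
# On semi-log paper (log of the count vs. year) this plots as a straight line,
# which is exactly the observation Moore made.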

There is a physical limit to how far you can shrink the individual devices built on silicon, the underlying material from which almost all of our electronics is made. That limit appears to be about 10 nanometers (Jordan, 2010; Simonite, 2016). There are also more practical reasons why this limit may be unachievable, such as heat dissipation (Jordan, 2010). That said, given that the cell phone industry seems to be driving the technology of late, significant strides have been made in reducing the power consumption of these devices. Lower power consumption implies less heat generation. It also seems to imply moving away from a purely von Neumann computational architecture toward a more parallel approach to code execution.

This brings us to the fundamental question: what technology is next, and when will it emerge into the marketplace? My own research into these questions produced some rather interesting answers. One of the more surprising results was the consensus about what "emerging into the marketplace" actually means. The consensus of the Delphi panel I used in my research was that it means having a full-scale prototype ready for rigorous testing (Jordan, 2010). Another surprising answer addressed the consensus about what technology would replace silicon. My research suggests the replacement technology would be biologic in nature; RNA, perhaps? The research also suggests this new technology would certainly emerge within the upcoming 30 years (Jordan, 2010). Given that the research was conducted nine years ago, the new technology should be ready for full-scale prototype testing in about 20 years. I will address why this time frame is significant shortly.

It turns out that this question of using RNA as a computational technology is being actively investigated. It would be difficult to predict to what extent the technology may mature over the next 20 years, but in its current state of development, computational speed is measured on a scale of minutes (Berube, 2019, March 7). Ignoring the problem of how one might plug a vat of RNA into a typical Standard Integrated Enclosure (SIE) aboard a US submarine, speeds on that scale are not particularly useful.

The Holy Grail of the next generation of these technologies is undoubtedly quantum computing (Dyakonov, 2019). There is a lot of energy behind trying to develop this new technology, with Dyakonov (2019) reporting that "laboratories are spending billions of dollars a year developing quantum computers" (p. 26). But we are left with the same question of when. Dyakonov divides projections into the optimistic and the "more cautious experts' prediction" (p. 27). The optimists are saying between five and 10 years; the so-called more cautious prediction is between 20 and 30 years. This more cautious realm fits with my research as well (Jordan, 2010).

The real problem with achieving a working quantum computer is the sheer magnitude of the technical challenges that must be overcome. In a conventional computer, it is the number of states of the underlying transistors that determines the computational ability of the machine: a machine with N transistors will have 2^N possible states. In a quantum computer, the basic device is typically an electron, which has a spin of up or down. The probability of a particular spin being in a particular state varies continuously, with the probability of up and the probability of down summing to 1. The typical terms used to describe a quantum device used in this way are "quantum gates" (Dyakonov, 2019, p. 27) or qubits. How many qubits would it take to make a useful quantum computer? The answer is somewhere between 1,000 and 100,000 (Dyakonov, 2019). This implies that, to make useful computations, a quantum machine would have to keep track of something on the order of 10^300 continuous parameters describing its state. To illustrate how big a number that is, I quote: "it is much, much greater than the number of sub-atomic particles in the observable universe" (Dyakonov, 2019, p. 27). The problem is that of errors: how would one go about observing 10^300 parameters and correcting for errors? There was an attempt in the very early years of this century to develop a fault-tolerant quantum machine that used 50 qubits. That attempt had not succeeded as of 2019.
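
To see where a number like 10^300 comes from, here is a back-of-the-envelope check (a sketch using only the figures above): a classical register of N two-state devices has 2^N distinct states, and describing a general quantum state over N qubits takes on the order of 2^N continuous amplitudes.

from math import log10

n_qubits = 1000   # low end of the 1,000 to 100,000 range cited above
# A classical register of N two-state devices has 2^N possible states; describing a
# general quantum state over N qubits takes on the order of 2^N continuous amplitudes.
order_of_magnitude = n_qubits * log10(2)
print(f"2^{n_qubits} is roughly 10^{order_of_magnitude:.0f}")   # about 10^301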

The basic research being done is of considerable value, and much is being learned. Will we ever see a full-scale prototype ready for rigorous testing? I am beginning to doubt it. I am of the opinion that a usable quantum computer is not unlike controlled fusion: the ultimate solution, but always about 10 years out. So next year, our quantum computer (and controlled fusion, for that matter) will not be nine years out but still another 10.

 

References

Dyakonov, M. (2019, March). The case against quantum computing. IEEE Spectrum, pp. 24-29.

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi Method (Doctoral dissertation). Phoenix, AZ: University of Phoenix.

Simonite, T. (2016, May 13). Moore's law is dead. Now what? Retrieved from MIT Technology Review: https://technologyreview.com

 

 

More prisoners in the US than in any other country: Criminal (In)Justice Scenarios

Here are scenarios and sources of injustice in the criminal justice system in the USA.

The US has the most people incarcerated of any country in the world… Even though we have only 4.3% of the world's population, we have more inmates — 2.2 million — than China (1.5m) and India (0.3m) combined (36.4% of the world's population)! We have 23% of China's population but nearly 50% more people incarcerated. We have almost 1% of our population (0.737%) incarcerated! Our incarceration rate is 6 times higher than China's, 12 times higher than Japan's, and 24 times the rates in India and Nigeria. That's right, an American is roughly 12 times as likely to be incarcerated as a Japanese citizen. We even have a 20% higher incarceration rate than Russia, which has 0.615% of its population in (Siberian) prisons and jails.
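
As a rough cross-check of those ratios, here is a small sketch in Python. The inmate counts are from the paragraph above; the population figures are approximate 2018 values that I am assuming for illustration.

# Rough check of the incarceration-rate ratios above. Inmate counts (millions) are from
# the post; the population figures (millions) are approximate 2018 values I am assuming.
countries = {
    "USA":   {"inmates": 2.2, "population": 325},
    "China": {"inmates": 1.5, "population": 1390},
    "India": {"inmates": 0.3, "population": 1340},
}

rates = {name: d["inmates"] / d["population"] * 100 for name, d in countries.items()}
for name, rate in rates.items():
    print(f"{name}: about {rate:.3f}% of the population incarcerated")

print(f"US rate vs. China: roughly {rates['USA'] / rates['China']:.1f}x")
# With these rounded inputs the US comes out near 0.68% (the post's 0.737% reflects
# slightly different inputs) and the US/China ratio lands in the 6x-7x range cited.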

I know what you're thinking: Americans must be more criminally inclined than any other country in the universe. And, no, it is not those *bleeping* Mexicans. The evidence shows that Mexicans (legal or otherwise) commit fewer crimes than the typical "American," plus crimes involving illegal Mexicans are far more likely to go unreported.

So now, I’m at a loss. Where did the criminal genes come from? You can’t really blame the American Indians.

Some of the ugly mechanisms and profits in the prison system are summarized nicely here in ATTN by Ashley Nicole Black, Who Profits from Prisons (Feb. 2015).

"There are currently [2.2 million] Americans in prison. This number has grown by 500 percent in the past 30 years. While the United States has only [4.3] percent of the world's population, it holds 25 percent of the world's total prisoners. In 2012, one in every 108 adults was in prison or in jail, and one in 28 children in the U.S. had a parent behind bars."

For years I heard stats that half of the people in prison in the USA were there for non-violent (no weapon) drug offenses. That's insane. It seems like the wrong people are institutionalized here. With the legalization of marijuana in many states, these incarceration rates should be dropping (improving). As of July 2018, 46% of federal (Bureau of Prisons) inmates were in for drug offenses: https://www.bop.gov/about/statistics/statistics_inmate_offenses.jsp

Okay, so what does that have to do with scenarios and scenario planning? What would be some of the scenarios that might lead to something more sane in terms of our incarceration rates? One approach would be to focus on those deflection points that might result in a lower level of criminal activity. Just one would be a new approach to the prohibition of marijuana. As we learned from alcohol, prohibition doesn't work. But there are several other ways to provide a mechanism for less criminal activity, fewer people incarcerated, and/or shorter periods of incarceration. We'll talk about two of our favorites at a later time: education and community engagement/involvement. (The Broken Window concept of fixing up the community and more local engagement is very intriguing. See the article by Eric Klinenberg here.)

The big thing that escalated US incarceration rates was the get-tough-on-crime movement that began during the Nixon "I'm-not-a-crook" era. Part of this was obviously to have some tools to go after the hippies and the protesters. Tough-on-crime policies with mandatory sentences, lots of drug laws, and 3-strike laws came into being. Not to be outdone as the toughest on crime, 3 strikes moved to 2 strikes to, essentially, 1 strike. As we filled up the prisons, we had to build more.

One current trend that could increase incarceration is the epidemic of opioid-related drug overdoses. Most forces, however, seem to be pushing toward reductions in incarceration.

Various scenarios should lead to a significant reduction in incarceration rates. The resulting scenario of low incarceration should have several ramifications. If you are in the business of incarceration, then business should – ideally – get worse and worse. GEO Group and Corrections Corp of America (now CoreCivic) should expect their business to drop off precipitously. Plus, there seem to be several movements away from private (or publicly traded) companies back toward government-run prisons, because private prisons have been shown to be less effective — even if cheaper on an inmate-year basis.

Here's a discussion of the business of incarceration. Note that the "costs" of incarceration are far, far more than the $50,000 (plus or minus) it costs per year per inmate. Plus, having more people as productive members of society has them working (income and GDP) and paying taxes, rather than being a dead weight on society.

Do you think that the relaxation of marijuana laws might be a "Sign Post" (in scenario terms) that indicates a rapid drop in the prison population? Also, super-full employment might be a solution all by itself: people, especially kids, who can get jobs and do something more productive may be less inclined to get into drugs and mischief. There's no reason why there needs to be only one sign post, or even two. In fact, the criminal justice system is just a sub-system of the economy. Multiple reinforcing systems can be really powerful.

If we do take other approaches to the incarceration system, what would those approaches be? And who (what businesses/industries) would benefit most?

What do you think? Is it time to get out of the criminal (in)justice system?

Resources

Half of the world’s incarcerated are in the US, China and Russia: http://news.bbc.co.uk/2/shared/spl/hi/uk/06/prisons/html/nn2page1.stm

Incarceration Rates: https://www.prisonpolicy.org/global/2018.html

US Against the world: https://www.statista.com/statistics/300986/incarceration-rates-in-oecd-countries/

New Yorker Article in Sept 2016 by Eric Markowitz, Making Profits on the Captive Prison Market.

How for-profit prisons have become the biggest lobby no one is talking about, by Michael Cohen in 2015.

Follow the money, in 2017, with a great infographic as to where all the prison moneys go.

Salt and Battery: When Does Storage Make Fossil Fuel Obsolete?

Last week the world's biggest Electric Vehicle (EV) battery company made a big splash with its IPO. CATL is a Chinese company that IPOed with a massive 44% pop at the open. The company offered up only 10% of its shares in the IPO, valuing the company at more than $12B. China limits how much a company can IPO at (price based on P/E ratio) and caps at 44% the amount an IPO can rise on its first day of trading. Expect this company to keep jumping for some time. CATL is now the largest EV battery company in the world, primarily lithium-ion for autos.

Of course, the ideal is to use power as needed, when needed. With the rapid increase in the efficiencies of wind (where the wind blows) and solar (where the sun shines), storage is becoming ever more critical. Once the infrastructure of transmission lines is in place, renewable power plants are far more cost effective than any other option. Both wind and solar are now coming in at less than $0.02 per kWh, and combined wind-solar at less than $0.03 per kWh. Such new power can come online in months, not the years or decades required for other types of power.

Still, the problem is smoothing out the power for nighttime and for when the wind is not blowing. Thus the reliance on storage if we are to move to total renewables. If – well, when – the combined cost of renewable energy and storage is lower than coal, oil, and natgas, there will be no need for fossil fuels, except maybe in those places where the sun doesn't shine (much) and the wind doesn't blow (much).

There are many different options for storage of energy.

Fixed storage can take the form of pumped hydro: using solar power to move water back upstream to a reservoir above an existing hydro power system. It can also use mirrors to focus heat into molten salt, for example.

The old lead-acid battery technology has been tried and proven for a century and is still alive and well in golf carts.

Many players are after the battery storage market. GE is fighting hard against Tesla (Powerwall batteries for fixed storage and battery packs for its cars, both built in its Gigafactories) and Siemens. Storage options that are as good as, or better than, lithium are coming fast to market for different applications. See a great overview of new battery technologies in Pocket Lint. Battery technologies that contain more carbon, nickel, or cobalt seem very intriguing. Hydrogen options using fuel cells have been right at the edge of a mass breakthrough into the market for decades.

When will certain storage options become a game-changer for existing “built economy” such as fossil fuels?

At some point, combined renewables and storage will be sufficiently powerful and affordable to render the old fossil fuel options obsolete. McKinsey discusses this massive drop in price, and the trend, in its battery report. In 2010 battery storage cost about $1,000 per kilowatt-hour of storage; the June 2017 report shows it at $230 per kWh in 2016 and dropping fast. It should be well below $200 per kWh now. (Batteries for the Tesla Model 3 are supposed to be at about $190 per kWh based on mass manufacturing; estimates based on SEC filings are for $157 per kWh by 2020.)
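
For a sense of how fast that decline is, here is a quick sketch of the annual rate of decline implied by the two McKinsey data points above, and what happens if (a big assumption) that pace simply continues.

# Implied annual rate of decline in battery cost, from the two data points above:
# about $1,000/kWh in 2010 and about $230/kWh in 2016.
start_cost, end_cost = 1000.0, 230.0   # $/kWh
years = 2016 - 2010

annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
print(f"Implied decline: about {annual_decline:.0%} per year")   # roughly 22% per year

# Projecting that same pace forward (a big assumption) toward the $125/kWh
# disruption point discussed next:
cost, year = end_cost, 2016
while cost > 125:
    cost *= (1 - annual_decline)
    year += 1
print(f"At that pace, ~$125/kWh arrives around {year} (about ${cost:.0f}/kWh)")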

So, what is the break-even point where storage becomes the game changer, and renewables with batteries deflect the entire energy industry onto another course? Apparently, $125 per kWh is the disruptive price point. A battery scientist's company, Cadenza, has developed battery technology at this price point using its "super cell" design and is now working on an extended version that includes the peripherals with the battery at, or below, the magical $125 per kWh. The company must demonstrate both cheaper and safer, so the housing is critical to avoid fires and short-circuits. "In March of this year, Cadenza published its report (pdf) saying that its super-cell technology can indeed hit that point."

The technology is already here, yet new improvements are leap-frogging each competing option. How long before fossil fuels are an obsolete option? For plain generation, fossils are dead and dying. Combined generation plus storage is where the war will be won, however.

We argue that you really want to be careful with your oil and gas investments, because you can find yourself, like the oil patch (countries, companies, and refiners), with stranded assets.

Moore’s law is at work in the battery complex. How long before combined renewables with storage supplants fossil fuels? Five years? Ten? Twenty?

Consensus too, outcomes and consensus

Consensus continues to be a big issue in designing a Delphi study. It is more than a little helpful to figure out how the results will be presented and how consensus will be determined. Even if consensus is not really necessary, any and all Delphi studies will be looking for the level of agreement as a critical aspect of the research. Look at our prior blog article, Consensus: Let's agree to look for agreement, not consensus. Hall (2009) talks about suggested approaches to consensus in the Delphi Primer, including the RAND/UCLA approach used in medical protocol research. Hall said: "A joint effort by RAND and the University of California is illustrated in The RAND/UCLA appropriateness method user's manual. (Fitch, Bernstein, Aguilar, Burnand, LaCalle, Lazaro, Loo, McDonnell, Vader & Kahan, 2001, RAND publication MR-1269) which provides guidelines for conducting research to identify the consensus from medical practitioners on treatment protocol that would be most appropriate for a specific diagnoses."

In the medical world, agreement can be rather important. Burnam (2005) has a simple one-page discussion of the RAND/UCLA method used in medical research. The key points from Burnam and the RAND/UCLA method are:

  • Experts are readily identifiable and are selected for their outstanding work in the field. They may publish research on the disease in question and/or be medical practitioners in the field (like medical doctors).
  • The available research is organized and presented to the panel.
  • The RAND/UCLA method suggests the approach/method to reach consensus.
  • The goal is to recommend an “appropriate” protocol.

"Appropriate" has a clear meaning. Burnam says, "appropriate, means that the expected benefits of the health intervention outweigh the harms and inappropriate means that expected harms outweigh benefits. Only when a high degree of consensus among experts is found for appropriate ratings are these practices used to define measures of quality of care or health care performance."
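
Here is a minimal sketch of how that appropriateness judgment is often operationalized. The 1-9 scale and the median cut points follow the commonly described RAND/UCLA convention (per Fitch et al.); the simple disagreement rule is my own illustrative assumption, not something spelled out in this post.

from statistics import median

def classify_appropriateness(ratings):
    # Classify a treatment indication from panel ratings on a 1-9 scale.
    # Cut points (median 7-9 appropriate, 4-6 uncertain, 1-3 inappropriate) follow the
    # commonly described RAND/UCLA convention; the disagreement rule below (a third or
    # more of the panel at each extreme) is an illustrative assumption.
    med = median(ratings)
    low = sum(r <= 3 for r in ratings)
    high = sum(r >= 7 for r in ratings)
    if low >= len(ratings) // 3 and high >= len(ratings) // 3:
        return "uncertain (disagreement)"
    if med >= 7:
        return "appropriate"
    if med >= 4:
        return "uncertain"
    return "inappropriate"

print(classify_appropriateness([7, 8, 8, 9, 7, 8, 9, 7, 8]))   # appropriate
print(classify_appropriateness([2, 3, 8, 9, 5, 2, 8, 9, 3]))   # uncertain (disagreement)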

Burnam compares and contrasts the medical protocol approach with an approach used by Addington et al. (2005) that includes many other factors (stakeholders). Seven different stakeholder groups were represented; therefore, the performance measures the panel selected as important represented a broader spectrum. The Addington et al. study included other performance measures, including various dimensions of patient functioning and quality of life, satisfaction with care, and costs.

Burnam generally liked the addition of other factors, not just medical outcomes, saying that she applauds Addington et al. "for their efforts and progress in this regard. Too often clinical services and programs are evaluated only on the basis of what matters most to physicians (symptom reduction) or payers (costs) rather than what matters most to patients and families (functioning and quality of life)."

There are two key takeaways from this comparison for researchers considering Delphi Method research: decide in advance how the results will be presented, and decide how consensus will be determined. If full consensus is really necessary – as in the case of a medical protocol – then fully understand that at the beginning of the research. Frequently, it is more important to know the level of importance of various factors in conjunction with the level of agreement. In business, management, etc., the practitioner can review the totality of the research and apply the findings as needed, where appropriate.

References

Addington, D., McKenzie, E., Addington, J., Patten, S., Smith, H., & Adair, C. (2005). Performance Measures for Early Psychosis Treatment Services. Psychiatric Services, 56(12), 1570–1582. doi:10.1176/appi.ps.56.12.1570

Burnam, A. (2005). Commentary: Selecting Performance Measures by Consensus: An Appropriate Extension of the Delphi Method? Psychiatric Services, 56(12), 1583–1583. doi:10.1176/appi.ps.56.12.1583

Fitch, K., Bernstein, S. J., Aguilar, M. D., Burnand, B., LaCalle, J. R., Lazaro, P., Loo, M., McDonnell, J., Vader, J. P., & Kahan, J. P. (2001). The RAND/UCLA appropriateness method user's manual. Santa Monica, CA: RAND Corporation. Document MR-1269. Retrieved July 3, 2009, from: http://www.rand.org/publications/

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (2nd ed., pp. 3-28). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Scenarios of Stranded Assets in the Oil Patch

The researchers over at Strategic Business Planning Company have been contemplating scenarios that lead to the demise of oil. The first part of the scenario is beyond obvious. Oil (and coal) are non-renewable resources; they are not sustainable; burning fossil fuels will stop — eventually. It might cease ungracefully, and here are a few driving forces that suggest the cessation of oil could come sooner, not later. Stated differently, if you own land that is valued based on carbon deposits, or if you own oil stocks, those assets could start to become worth less (or even worthless).

We won't spend time on the global warming scenario and the possible ramifications of government regulation and/or corporate climate-change efforts. These could/would accelerate the change to renewables. There are other drivers away from fossil fuels, including national security, Moore's Law as applied to renewables, and efficiency.

1. National Security. Think about all the terrorist groups and rogue countries. All of them get part, or all, of their funding from oil (and to a lesser extent, NatGas and coal). Russia. Iran. Lebanon, where the Russians have been enjoying the trouble they perpetuate. The rogue factions in Nigeria. Venezuela. Even Saudi is not really our best friend (15 of the 19 hijackers on 9/11 were Saudi citizens). Imagine if the world could get off of fossil fuels. Imagine all the money that would be saved by not having to defend against one country's aggression on another once the valuable oil became irrelevant. Imagine how much everyone would save on the military. This is more than possible with current technology; and with Moore's law of continuous improvement, it becomes even more so.

2. Moore's Law. Moore's law became the law of the land in the computer chip world, where technology doubles every 18 months and costs drop by half. (See our blog on The Future of Computing is Taking on a Life of Its Own. After all these decades Moore's law is finally hitting a wall.) In the renewable world, the price of solar is dropping dramatically while efficiency continues to increase. For example, the 30% increase (tariff) on imported PV roughly matches the cost reductions of the last year. Meanwhile, battery efficiency is improving dramatically, year over year. Entire solar farms have been bid (and built) at about $0.02 per kWh, and wind and/or solar with battery backup at about $0.03 per kWh. At those prices, it is far cheaper to install renewable power than coal or NatGas, especially given the years it takes to develop fossil fuel plants.

Note that we haven't even talked about peak coal and peak oil. Those concepts are alive and well; it's just that fracking technology has pushed them back maybe 10 years from a production (supply-side) perspective. At some point you hit the maximum possible production of a non-renewable resource, and production can only go down (and prices up) from there. World production of oil is now up to 100m barrels per day. But conventional oil wells deplete at about 4%-5% per year, so you need roughly 4%-5% in new production every year just to stay even. Fracked wells drop about 25%-30% in the first year! So you need many more new wells each year to stay even. But let's go on to efficiency, probably the major demand-side force.
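
A quick back-of-the-envelope on that treadmill, as a sketch: the production and decline figures are the ones above, while the split between conventional and fracked output is my own assumption for illustration.

# Depletion treadmill, using the figures above. World production and the decline rates
# are from the post; the split between conventional and fracked output is my assumption.
world_production = 100.0              # million barrels per day (mbpd)
conventional, fracked = 90.0, 10.0    # assumed split of that 100 mbpd

conventional_decline = 0.045          # ~4%-5% per year
fracked_first_year_decline = 0.275    # ~25%-30% in the first year

replacement_needed = (conventional * conventional_decline
                      + fracked * fracked_first_year_decline)
print(f"New production needed each year just to stay flat: ~{replacement_needed:.1f} mbpd")
# Roughly 4.0 mbpd from conventional decline plus 2.8 mbpd from fracked decline,
# i.e., close to 7 mbpd of brand-new production every year before any demand growth.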

3. Efficiency. The incandescent light bulb produces very little light: more than 95% of its 100 watts goes to heat, and only a tiny bit becomes light. With only 10-15 watts, an LED can produce the same light that required 100 watts in days of old. The internal combustion engine is hugely inefficient, producing mostly (unused) heat and directly harnessing only 10-15% of the energy in gas or diesel… plus it takes huge amounts of energy to extract, transport, refine, transport again, and retail the fuel. Electric motors are far more efficient, and they produce no toxic emissions. A great book that talks about energy, efficiency, and trends is Ayres & Ayres, Crossing the Energy Divide. The monster power plants (nuclear, coal, NatGas) have serious efficiency issues. They produce huge amounts of heat for steam turbines, but much of that heat is lost/wasted (let's say 50%). Electricity must then be transmitted long distances through transmission lines (where up to 40% can be lost in transmission).
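
To make the point concrete, here is a small sketch chaining the loose loss figures mentioned above; these are the post's rough numbers, not measured values.

# Rough end-to-end efficiency of central generation, chaining the loose figures above:
# ~50% of the heat lost at the plant, and "up to 40%" lost in long-distance transmission.
plant_efficiency = 0.50           # share of fuel energy that becomes electricity
transmission_efficiency = 0.60    # worst case implied by "up to 40% lost"

delivered = plant_efficiency * transmission_efficiency
print(f"Energy delivered to the outlet: about {delivered:.0%} of the fuel's energy")

# Lighting comparison from the same paragraph: an LED doing a 100 W incandescent's job
# on roughly 10-15 watts.
led_watts = 12.5                  # midpoint of the 10-15 W range
print(f"LED energy saving vs. a 100 W incandescent: about {1 - led_watts / 100:.0%}")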

Producing power as needed, where needed, makes so much more sense in most cases. Right now, using today’s technology, pretty much everyone can produce most of their own power (PV or wind) at about the same cost as the power monopolies.  But Moore’s law is making the renewable technology better and better every year. Add some batteries and microgrid technology and you have robust electric systems.

The losers in these trends/scenarios can be the BIG oil companies and the electric monopolies. They will fight the move until they change, or they lose. Just like peak oil, it is a matter of time… but that time is coming faster and faster…

Saudi is trying to keep prices high enough to complete its oil Initial Public Offering so it can diversify out of oil. Venezuela is offering a new cyber coin (its Petro ICO) with barrels of buried oil as collateral (see Initial Kleptocurrency Offering). But what if that oil becomes a stranded asset? Your Petro currency becomes as worthless as the Venezuelan Bolivar.

You really want to carefully consider how much and how long you want to own fossil fuel assets… Fossil fuels may be dead in a decade or two… Moore or less.

Triangulation to augment your Qual study

Triangulation in research is a lot like the old technology of geometry and surveying, where you take the distance from three known points to compute an exact location on a map… give or take a few yards. LORAN technology, using radio signals and such, was used in WWII. With a LORAN in the Gulf, I remember being able to find where we were on a sailboat, approximately. The problem was that we were in an area of the Gulf of Mexico with only two LORAN readings. With three you can triangulate; with two you can only approximate.

Triangulation in academic research is the kind of thing you can do to augment your Qual study. As discussed elsewhere, Delphi studies might need to be recharacterized as mixed-method if some of the research is sufficiently quantitative, i.e., if the second round has a lot of respondents and it makes sense to do stats, like correlations on several variables.

So, in any qual study, you might consider including triangulation. There are a few types of triangulation (depending on your source), but let's focus on just two: data and literature/theoretical. Data triangulation means finding published statistics in the area that allow for some corroboration of the findings from the study: maybe some stats that give an estimate of the independent and/or dependent variables (the predictor and predicted variables, in QUAL-world terms), possibly even the intersection of the two. Does the available data align with the findings of the study?

Data internal to a study should be kept separate from external data triangulation. In Delphi studies, for example, there might be an alignment between the more general findings from round 1 and the rankings of round 2. This offers up internal consistency.

One of the coolest, and potentially strongest, aspects of triangulation is literature (or theory) triangulation. Does the existing literature align with some of the key themes found in your QUAL study? Think of this as a meta-study lite. For a meta-study, there needs to be a lot of research, and a deep dive into the existing research can allow for a table of results that support, don't support, or disprove various themes.

Here is a very interesting approach to triangulation within a Delphi study (Hopf, Francis, Helms, Haughney, & Bond, 2016). Find the article here at BMJ Open. For past studies that did not address a specific topic, they used the bizarre label of "Silence," as in not addressed in that specific study. A better label would probably be "not addressed" (n.a.). (The implication of silence is that the authors intentionally avoided that specific issue in their study.)

So, consider including one of the 4 or 5 types of triangulation in your qual study to strengthen the support for your findings (or to highlight divergent findings). For the regular researcher (say, a dissertation), consider simply doing a meta-analysis and avoiding all that messy questionnaire stuff, if the field is full of existing research.

If you use Delphi, you will be able to project into the future. You can explore how some of the themes identified in the research grow or wane in an uncertain future, and what conditions (triggers) might initiate major future disruption, i.e., scenario analysis.

References

Hopf, Y. M., Francis, J., Helms, P. J., Haughney, J., & Bond, C. (2016). Core requirements for successful data linkage: an example of a triangulation method. BMJ Open, 6(10), e011879. doi:10.1136/bmjopen-2016-011879 Retrieved from: http://bmjopen.bmj.com/content/6/10/e011879

 

 

Qubit

The Future of Computing Is Taking on a Life of Its Own

Previously, we talked about the Tick-Tock of computing at Intel, and how Gordon's law (Moore's law) of computing – 18 months to double speed (and halve price) – is starting to hit a brick wall (Outa Time, the tic-toc of Intel and modern computing). Breaking through the 14-nanometer barrier is a physical limitation inherent in silicon chips that will be hard to surpass. Ed Jordan's dissertation addressed this limit, and his Delphi study showed what the next technology might likely be, and how soon it might be viable. His study found that several technologies were looming on the horizon (likely less than 50 years out)… and that organic computing (i.e., proteins) was the most promising, and should certainly happen sometime in the next 30 years.

Apparently quantum computing technology is here and now – kinda – especially at Google. See the Nicas (2017) WSJ article about quantum computing in the Future of Computing series. As the article relates about the Google expert Neven, he's pretty certain that no one understands quantum physics. At the atomic level, a qubit can be both on and off at the same time. The conversation goes into parallel universes and such… both here and there, simultaneously. The quantum computer is run in extreme isolation, at absolute zero temperature (give or take a fraction of a degree). Storage density using qubits is unimaginable. The computer works completely differently, however, based on eliminating the non-feasible to arrive at good answers, but not necessarily the best answer. Heuristics, kinda. The error rate is humongous, apparently, requiring maybe 100 error-correction qubits for every single working qubit.

Ed Jordan was reminiscing about quantum computing yesterday… “Basically, all computing in all its permutations need to be rethunk. Quantum computing is sort of the Holy Grail. One could argue it is sort of like control fusion: always just 10 years away. Ten years ago, it was 10 years away. Ten years from now it may still be ten years away. There is a truck load of money being thrown at it. But there isn’t anything mature enough yet to do anything that looks like real computing. The problem is how do you read out the results? Like Schrödinger’s cat, that qubit could be alive or dead, and by looking at it you cause different results to happen – as opposed to something that exists independent of your observation.”

Quantum computing is now moving past the technically impossible into the proven and functional, and maybe soon the viable. The players in this market are Google (Alphabet), IBM, and apparently the NSA (if whistleblower Snowden is to be believed).

Intel may not be able to capitalize on the next generation of computing. Some computations, such as breaking encryption, could probably be done in a couple of seconds on a quantum computer, even though they might take multiple current silicon computers a lifetime. There are several potential uses of the quantum computer that make businesses and security targets very nervous.

Jordan and Hall (2016) talk about using Delphi to anticipate deflection points that are possible on the horizon, including those scenarios that would be possible via quantum computing, or bio-computing for that matter. The use of experts or informed people could make such deflection points more evident, and the development of contingency plans more effective.

One of the most interesting things about the Nicas article is to look at the breakthroughs in computing technology and compare them to Jordan's 2010 dissertation. He found that two or three types of technology should likely be feasible within 25 to 40 years and viable in application within about 30 to 50 years; in his case, that would be as early as about 2040. Note that the experts discussed by Nicas pegged full application of a quantum computer at about 2026; that is when digital security will take on a whole new level of risk. It also makes you wonder how block-chain (bitcoin) will fare in the new age of supersonic computing.

This seems like a great time to start working on security safeguards that are nothing like the current technology. Can you imagine the return of no-tech or lo-tech? Kinda reminds you of the revival of the old "brick" phones for analog service (in the middle of the Everglades).

References

Debnath, S., Linke, N. M., Figgatt, C., Landsman, K. A., Wright, K., & Monroe, C. (2016). Demonstration of a small programmable quantum computer with atomic qubits. Nature, 536(7614), 63–66. doi:10.1038/nature18648

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi Method (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3442759)

Jordan, E. A., & Hall, E. B. (2016). Group decision making and Integrated Product Teams: An alternative approach using Delphi.  In C. A. Lentz (Ed.), The refractive thinker: Vol. 10. Effective business strategies for the defense sector. (pp. 1-20) Las Vegas, NV: The Refractive Thinker® Press. ISBN #: 978-0-9840054-5-1. Retrieved from: http://refractivethinker.com/chapters/rt-vol-x-ch-1-defense-sector-procurement-planning-a-delphi-augmented-approach-to-group-decision-making/

Nicas, J. (2017, November/December). Welcome to the quantum age. The Future of Computing, The Wall Street Journal. Retrieved from: https://www.wsj.com/articles/how-googles-quantum-computer-could-change-the-world-1508158847

Scenarios that Jump Out At You

There are several scenarios that jump out at you.

Hall and Knab (2012) outlined 11 or so non-sustainable trends/practices that appear to have compounding and accelerating forces. Those items get worse in a wicked bad way when they go unattended. Therefore, they are wonderful areas from which to generate scenarios. Here are a few: the US debt and deficit, the US trade deficit, the interest-rate bomb, the lifestyle bomb, the compounding healthcare-cost escalation bomb, the fossil fuel energy bust (peak) or bomb (massive government intervention), and the single-problem vs. integrated-problem dilemma.

There are a few more that jump out in recent months. The news, and its reliability, keeps getting worse. Fake news has become a steady fact, and misinformation is well ahead of good, reliable journalism. The SustainZine blog wonders if this is not the time for a WikiTribune approach to journalism. There are many ways the broken news system can go: from really bad to even worse, or toward using the leverage of computers, networking, and crowds to purify it (if only a little). There's probably no scenario where the regular media world of news, near-news, and fake news stays the same as it has evolved in 2016 and 2017.

Another scenario-rich environment is global warming, renewables, and fossil fuels. While companies have been steadily getting on board with the idea that they need to start aiming for sustainable business models, the politics has gotten into a kink. While China and India have made a massive about-face on the Paris conference and actions toward thwarting global warming, the US under Trump is about to go the other way. With the tug-of-war between the deniers and the greenies, it seems likely that something big is about to give. One side will lose and get pulled in, the other side will win, or the rope will snap. If the greenies are right, global warming will get very bad, very quickly… that's ugly, but interesting. A lot of oil and gas and coal will be rendered useless because it can't be (shouldn't be) burned. If the deniers are right, the oil, gas, and coal companies have many more decades to enjoy unfettered combustion. And ha ha to those foolish fear-mongers in Paris.

Inflation. The US, with $20T in debt on a $19T economy (based on GDP), currently pays about 9% of all government revenues in interest. (Revenue seems like the wrong word to use for government inflows.) That is at near-zero interest rates. When inflation goes up, the federal government will end up paying much, perhaps most, of its revenues toward servicing the debt. At 10% interest, the interest bill alone would consume well over half of all federal revenues, leaving little for Medicare, SSI, or the military. Oh, and it has been about 8 years since we had a good recession, which happens on average every 7 years.
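
The arithmetic behind that worry, as a sketch: the $20T debt figure is from the paragraph above, while the federal revenue of roughly $3.3T per year is a round-number assumption on my part.

# Interest-burden arithmetic. The ~$20T debt is the post's figure; federal revenue of
# about $3.3T per year is a round-number assumption (roughly the late-2010s level).
debt = 20.0       # trillions of dollars
revenue = 3.3     # trillions of dollars per year (assumption)

for rate in (0.015, 0.05, 0.10):
    interest = debt * rate
    print(f"At {rate:.1%} average interest: ${interest:.1f}T in interest, "
          f"about {interest / revenue:.0%} of federal revenue")
# At ~1.5% the interest share is near the ~9% cited above; at 10% it swallows
# well over half of all revenue before Medicare, Social Security, or defense.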

Gold is an interesting scenario. Lots of people thought it would shoot off into space, for many reasons. The US Dollar is strong because it is the best shanty in the slum neighborhood, so gold should look good, relatively. But maybe a cryptocurrency like Bitcoin, massive government interventions around the world, and other factors have taken the luster off of what some consider the most secure investment in the world?

What other scenarios do you see looming?

What’s the best way for a business to prepare for some of these scenarios that loom large?

References

Hall, E., & Knab, E.F. (2012, July). Social irresponsibility provides opportunity for the win-win-win of Sustainable Leadership. In C. A. Lentz (Ed.), The Refractive Thinker: Vol. 7. Social responsibility (pp. 197-220). Las Vegas, NV: The Lentz Leadership Institute.
(Available from www.RefractiveThinker.com, ISBN: 978-0-9840054-2-0)

Consensus: Let’s agree to look for agreement, not consensus

Most of the hunters (academic researchers) searching for consensus in their Delphi research are new to the sport. They believe that they must bag really big game or come home empty-handed. But we don't agree. In fact, once you have had a chance to experience Delphi hunting once or twice, your perception of the game changes.

Consensus is a BIG dilemma within Delphi research. However, it is generally an unnecessary consumer of time and energy. The original Delphi Technique used by the RAND Corporation aimed for consensus in many cases. That is, the U.S. government could either enter a nuclear arms race or not; there really was no middle ground. Consequently, it was counterproductive to build a technique that could not reach consensus. It became binary: reach consensus, and a plan could be recommended to the president; no consensus, and this too was useful, but less helpful, in informing the president. (The knowledge that the experts could not come up with a clear path forward, even when using a structured assessment process, is also very good to have.)

Consensus. The consensus process – getting teams of experts to think through complex problems and come up with the best solutions – is critical to effective teamwork and to the Delphi process. In most cases, however, it is not necessary – or even desirable – to come up with the one and only best solution. So long as there is no confusion about the facts and the issues, forcing a consensus when there is none is counter-productive (Fink, Kosecoff, Chassin & Brook, 1984; Hall, 2009, pp. 20-21).

Table 1 shows the general characteristics of various types of nominal group study techniques (Hall & Jordan, 2013, p. 106). Note that the so-called traditional Delphi Technique and the UCLA-RAND appropriateness approaches aim for consensus. The so-called Modified Delphi might not search for consensus and might not utilize experts. Researchers use the UCLA-RAND approach extensively to look for the best medical treatment protocol when only limited data is available, relying heavily on the expertise of the doctors involved to suggest – sometimes based on their best and informed guess – which protocol might work best. The doctors can only recommend one protocol. Consensus is needed here.

(Table reprinted with permission Hall and Jordan (2013), p. 106).

But consensus is rarely needed, although it is usually found to some degree, in business research and even in most academic research. For example, the most important factors may be best business practices. Of the total list of 10 to 30 factors, only a few are MOST important. Often, the second round of Delphi aims to prioritize the qualitative factors identified in round 1. There are usually natural separation points between the most important factors (e.g., 4.5 out of 5), those of medium importance (3 out of 5), and the low-importance factors.

Those researchers who are fixated on consensus might spend time, maybe a lot of time, trying to find that often elusive component called consensus. There are usually varying levels of agreement. Five doctors might agree on one single best protocol, but 10 probably won't, unanimously. Interestingly, as the number of participants increases, the ability to talk about the results with statistical significance increases; however, the likelihood of pure, 100% consensus diminishes. For example, a very small study of five doctors reaches unanimous consensus; but when it is repeated with 30 doctors, there is only 87% agreement. Obviously, one would prefer the quantitative and statistically significant results from the second study. (Usually you are forecasting with Delphi; 100% agreement implies a degree of certainty about an uncertain future, and essentially this can easily result in a misapplication of a very useful planning/research tool.)

This brings us to qualitative Delphi vs. a more quantitative, mixed-method, Delphi. Usually Delphi is considered QUAL for several reasons. It works with a small number of informed, or expert, panelists. It usually gathers qualitative information in round 1. However, the qualitative responses are prioritized and/or ranked and/or correlated in round 2, round 3, etc. If a larger sample of participants results in 30 or more respondents in round 2, then the study probably should be upgraded from a purely qualitative study to mixed-method. That is, if the level of quantitative information gathered in round 2 is sufficient, statistical analysis can be meaningfully applied. Then you would look for statistical results (central tendency, dispersion, and maybe even correlation). You will find a confidence interval for all of your factors, those that are very important (say 8 or higher out of 10, +/- 1.5) and those that aren’t important. In this way, you could find those factors that are both important and statistically more important than other factors: a great time to declare a “consensus” victory.
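
As an illustration of what that round-2 analysis might look like, here is a minimal sketch with simulated ratings for three factors on a 10-point scale; the factor names and all numbers are invented for illustration.

import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)
# Hypothetical round-2 ratings (1-10 importance scale) from 30 panelists; invented data.
ratings = {
    "Factor A": [random.randint(8, 10) for _ in range(30)],   # very important
    "Factor B": [random.randint(5, 8) for _ in range(30)],    # medium importance
    "Factor C": [random.randint(1, 5) for _ in range(30)],    # low importance
}

for factor, scores in ratings.items():
    m, s, n = mean(scores), stdev(scores), len(scores)
    ci = 1.96 * s / sqrt(n)   # rough 95% confidence interval on the mean
    print(f"{factor}: mean {m:.1f} +/- {ci:.1f} (n={n})")
# Where the intervals do not overlap (Factor A clearly above Factor C), you can
# reasonably declare a "consensus" victory on relative importance.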

TIP: Consider using more detailed scales. A 5-point Likert-type scale will not provide the same statistical detail as a 7-point, 10-point, or maybe even a ratio (0 to 100%) scale, if it makes sense.

So, in the big-game hunt for consensus, most hunters continue to look for the long-extinct woolly mammoth. Maybe they should "modify" their Delphi game for an easier path to success instead . . .

What do you think?

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2: Research Methodology, (pp. 3-27). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from: http://www.RefractiveThinker.com/

Hall, E. B., & Jordan, E. A. (2013). Strategic and scenario planning using Delphi: Long-term and rapid planning utilizing the genius of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. II. Research methodology (3rd ed.). (pp. 103-123) Las Vegas, NV: The Refractive Thinker® Press.

Intel and Mobile Computing: An Eye on BIG Computing on the Move

We are rapidly moving toward one of the most disruptive innovations in modern computing. Truly mobile computing. The driver-less car. These cars are going to have a lot of computing power on board. They will need to be self-contained, after all, when going through a tunnel or parking lot. But they will also be amassing massive amounts of data: about 4 terabytes per day for the average self-driving car. Wow. And current mobile data plans start to charge you or throttle you after about 10GB of data usage per month.

Read about this in a great WSJ article by Greenwald on March 13. It focuses on the companies in play and the new bid by Intel to buy Mobileye for $15.3B, the look-around and self-driving technology going into GM, VW, and Honda cars. The 34% premium shows how important this tech is to the slumbering tech giant.

What’s all the fuss about driver-less cars? How does going Driver-Less impact the future: what are potential interruptions, problems and/or discontinuities? How could this technology alter the strategic plans for many market leaders?

It seems likely that the majority of Americans will reject using/supporting driver-less vehicles… for a while. It removes individual control and emasculates the sense of manly power while removing decision making. One cannot demonstrate a charged-up ego to a potential partner when a computer and sensors are driving the speed limit behind a school bus. A driver can suddenly opt for a shortcut or a scenic route that he knows by heart. Not so the driver-less vehicle. However, Tesla drivers have already been reprimanded for letting the car do too much of the driving, under too many unusual circumstances.

Just a few things to think further about: Long-Haul Trucking and Enabling Technologies.

Long-haul trucking. There is a major shortage of truck drivers. Labor rules don't let drivers do long hauls without breaks or rest, so long-haul driving often uses two drivers for the same truck going coast to coast. If the truck needs to stop and drop along the way, however, then a person on board might still be necessary. However, drops and pickups usually have someone at the warehouse who can assist. How will the truck fuel itself up at the Flying J truck stops? If we can fuel up fighter jets in midair, we can figure out how to fuel up a driver-less truck. One obvious solution – or not so obvious, if you're not in the habit of longer-term and sustainable thinking – is to move to electric trucks and a charging pad: simply drive the electric truck over a rapid-charging pad. Rapid-charge technology is already generally available using current technologies (especially with minor improvements in batteries and charging).

Enabling Technology Units (ETUs). The Mobileye types of technology apply to lots and lots of other situations, such as trucks, farm tractors, forklifts, etc. Much of the technology being developed for the driver-less car is what Hall & Hinkelman (2013) refer to as Enabling Technology Units (ETUs) in their guidebook to patent commercialization. The base technologies have many broad-based applications beyond the obvious direct market application. It is the Internet of Things, where the "things" are mobile, or the "things" around them are mobile, or both. This is an interesting future for mobile computing.

References

Hall, E. B. & Hinkelman, R. M. (2013). Perpetual Innovation™: A guide to strategic planning, patent commercialization and enduring competitive advantage, Version 2.0. Morrisville, NC: LuLu Press. ISBN: 978-1-304-11687-1  Retrieved from: http://www.lulu.com/spotlight/SBPlan
