MountainScenarios

Category: Delphi Method

Beyond Moore’s Law, Beyond Silicon Chips

Beyond Moore’s Law (by Dr. Ed Jordan)

After almost 60 years, Moore’s law, related to the doubling of computing power every year-and-a-half-ish, still holds. At the current exponential pace, though, there is a brick wall looming on the horizon: the physical limitations of silicon chips. The most straightforward example of how that might impact a company is to look at Intel Corp. But first, more on Moore’s law and the more general idea of learning curves.
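The compounding behind that claim is easy to check with a few lines of illustrative arithmetic (the 1.5-year doubling period is the rough figure above):

```python
# Compound growth under Moore's law: doubling every 18 months
# means an improvement factor of 2 ** (years / 1.5).

def moores_law_factor(years, doubling_period_years=1.5):
    """Cumulative improvement factor after `years` of doublings."""
    return 2 ** (years / doubling_period_years)

print(f"10 years: {moores_law_factor(10):,.0f}x")
print(f"60 years: {moores_law_factor(60):.1e}x")   # ~40 doublings, ~1.1e12
```

Sixty years of doublings is a factor of roughly a trillion, which is why even a distant physical wall matters so much.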

Democratization of Power

SustainZine (SustainZine.com) blogged about a rather cool idea on the decentralization of power (here). The idea, published in Nature Communications, is to have buildings everywhere use their renewable power sources to generate a biofuel of some type. The authors had the Heating, Ventilation and Air Conditioning (HVAC) unit extract CO2 from the atmosphere to generate the fuel. Some of the technologies they pointed to were newer technologies that are now (hopefully) making their way into the mainstream. (Read the nice summary article in Scientific American by Richard Conniff.)

Basically, everyone everywhere can now produce their own power at rates that are a fraction of lifelong utility rates. Storage is now the big bottleneck to completely avoiding the grid. Distributed power should be a big plus to the overall power grid; however, the existing power monopolies are still resisting and blocking. So complete self-containment is not only a necessity for remote (isolated) power needs, but also a way to break away from the power monopolies.

In the US, there is the 30% Renewable Investment Tax Credit, which makes an already good investment even better for homeowners and businesses. Plus, businesses can take accelerated depreciation, making the investment crazy profitable after accounting for the tax shield (tax rate times the basis of the investment). Many states sweeten the deal even more. But the 30% tax credit starts to step down after 2019, so the incentive to move to renewables starts to drop off precipitously at the end of 2019.
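To see why the combination is so attractive, here is a minimal sketch of the math. The system cost, tax rate, and the half-credit basis reduction are illustrative assumptions, not tax advice:

```python
# Hypothetical illustration of the incentives above: a 30% investment
# tax credit plus depreciation of the remaining basis, which shields
# income at the firm's tax rate (tax rate times investment basis).

def solar_net_cost(gross_cost, itc_rate=0.30, tax_rate=0.21):
    itc = gross_cost * itc_rate
    # Assumed rule for illustration: depreciable basis is reduced
    # by half of the credit taken.
    basis = gross_cost - itc / 2
    tax_shield = basis * tax_rate
    return gross_cost - itc - tax_shield

print(f"Net cost of a $100,000 system: ${solar_net_cost(100_000):,.0f}")
```

Under these assumed rates, roughly half of the sticker price comes back through the credit and the depreciation tax shield.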

You would think that the power companies would join in the solutions, and not spend so much time (and massive amounts of money) on obstructing progress. All those tall buildings are prime candidates for wind. Think of all the rooftops, roads and parking lots worldwide that are prime candidates for solar. Distributed power. As needed, where needed. No need for new nuclear, coal or nat-gas power plants. Little need for taking up green fields with solar farms.

Of course, the oil, coal and gas companies need the perpetual dependence on the existing infrastructure. When we all stop the traditional fossil fuel train — and all indications from the IPCC show that we must stop that train sooner, not later — then all the oil and gas in the world will need to stay in the ground. Call me an optimist, or a pessimist, but I would not buy oil or gas for almost any price. I definitely wouldn’t buy into the Saudi-owned oil company spinoff.

It is probably a mistake to think that technology to take CO2 out of the atmosphere after the fact can repair past sins. Avoiding putting pollution into the air, water and land — the negawatt and the negagallon, in this case — is by far the best approach.

In Sustainzine, BizMan concluded with this thought about the here-and-now scenario, not in the future at all:

“Hidden in this whole discussion is that scenario that is here and now, not futuristic. Renewable energy is cheaper and massively cleaner than conventional energy, and it can be located anywhere. Storage, in some form, is really the bottleneck; and storage in the form of synthetic fuels is a really, really cool (partial) solution.”

References

Dittmeyer, R., Klumpp, M., Kant, P., & Ozin, G. (2019, April 30). Crowd oil not crude oil. Nature Communications. DOI: 10.1038/s41467-019-09685-x

Consensus too, outcomes and consensus

Consensus continues to be a big issue in designing a Delphi study. It is more than a little helpful to figure out, up front, how the results will be presented and how consensus will be determined. Even if consensus is not really necessary, any and all Delphi studies will be looking for the level of agreement as a critical aspect of the research. Look at our prior blog article, Consensus: Let’s agree to look for agreement, not consensus. Hall (2009) discusses suggested approaches to consensus in the Delphi Primer, including the RAND/UCLA approach used in medical protocol research. Hall said: “A joint effort by RAND and the University of California is illustrated in The RAND/UCLA appropriateness method user’s manual. (Fitch, Bernstein, Aguilar, Burnand, LaCalle, Lazaro, Loo, McDonnell, Vader & Kahan, 2001, RAND publication MR-1269) which provides guidelines for conducting research to identify the consensus from medical practitioners on treatment protocol that would be most appropriate for a specific diagnoses.”

In the medical world, agreement can be rather important. Burnam (2005) has a simple one page discussion about the RAND/UCLA method used in medical research. The key points by Burnam and the RAND/UCLA are:

  • Experts are readily obvious and selected for their outstanding work in the field. They may publish research on the disease in question and/or be a medical practitioner in the field (like a medical doctor).
  • The available research is organized and presented to the panel.
  • The RAND/UCLA method suggests the approach/method to reach consensus.
  • The goal is to recommend an “appropriate” protocol.

Appropriate has a specific meaning here. Burnam says, “appropriate, means that the expected benefits of the health intervention outweigh the harms and inappropriate means that expected harms outweigh benefits. Only when a high degree of consensus among experts is found for appropriate ratings are these practices used to define measures of quality of care or health care performance.”

Burnam compares and contrasts the medical protocol with an approach used by Addington et al. (2005) that includes many other factors (stakeholders). Seven different stakeholder groups were represented; therefore, the performance measures the panel selected as important represented a broader spectrum. The Addington et al. study included other performance measures, including various dimensions of patient functioning and quality of life, satisfaction with care, and costs.

Burnam generally liked the addition of other factors, not just medical outcomes, saying that she applauds Addington et al. “for their efforts and progress in this regard. Too often clinical services and programs are evaluated only on the basis of what matters most to physicians (symptom reduction) or payers (costs) rather than what matters most to patients and families (functioning and quality of life).”

There are two key takeaways from this comparison for researchers considering Delphi Method research. Decide in advance how the results will be presented, and how consensus will be determined. If full consensus is really necessary – as in the case of a medical protocol – then fully understand that at the beginning of the research. Frequently, it is more important to know the level of importance for various factors in conjunction with the level of agreement. In business, management, etc., the practitioner can review the totality of the research in order to apply the findings as needed, where appropriate.

References

Addington, D., McKenzie, E., Addington, J., Patten, S., Smith, H., & Adair, C. (2005). Performance Measures for Early Psychosis Treatment Services. Psychiatric Services, 56(12), 1570–1582. doi:10.1176/appi.ps.56.12.1570

Burnam, A. (2005). Commentary: Selecting Performance Measures by Consensus: An Appropriate Extension of the Delphi Method? Psychiatric Services, 56(12), 1583–1583. doi:10.1176/appi.ps.56.12.1583

Fitch, K., Bernstein, S. J., Aguilar, M. D., Burnand, B., LaCalle, J. R., Lazaro, P., Loo, M., McDonnell, J., Vader, J. P., & Kahan, J. P. (2001). The RAND/UCLA appropriateness method user’s manual. Santa Monica, CA: RAND Corporation. Document MR-1269. Retrieved July 3, 2009, from: http://www.rand.org/publications/

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (2nd ed., pp. 3-28). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Scenarios of Stranded Assets in the Oil Patch

The researchers over at Strategic Business Planning Company have been contemplating scenarios that lead to the demise of oil. The first part of the scenario is beyond obvious. Oil (and coal) are non-renewable resources; they are not sustainable; burning fossil fuels will stop — eventually. It might cease ungracefully, and here are a few driving forces that suggest the cessation of oil could come sooner, not later. Stated differently, if you owned land that is valued based on carbon deposits, or if you owned oil stocks, those assets could start to become worth less (or even worthless).

We won’t spend time on the global warming scenario and the possible ramifications of government regulation and/or corporate climate change efforts. These could/would accelerate the change to renewables. There are other drivers away from fossil fuels, including: national security, Moore’s law applied to renewables, and efficiency.

1. National Security. Think about all the terrorist groups and rogue countries. All of them get part, or all, of their funding from oil (and to a lesser extent, NatGas and coal). Russia. Iran. Lebanon, where the Russians have been enjoying the trouble they perpetuate. The rogue factions in Nigeria. Venezuela. Even Saudi is not really our best friend (15 of the 19 hijackers on 9/11 were Saudi citizens). Imagine if the world could get off of fossil fuels. Imagine all the money that would be saved by not having to defend one country’s aggression on another if the valuable oil became irrelevant. Imagine how much everyone would save on the military. This is more than possible with current technology; but with Moore’s law of continuous improvement, it becomes even more so.

2. Moore’s Law. Moore’s law became the law of the land in the computer chip world, where technology doubles every 18 months and costs are cut by half. (See our blog on The Future of Computing is Taking on a Life of Its Own. After all these decades, Moore’s law is finally hitting a wall.) In the renewable world, the price of solar is dropping dramatically while efficiency continues to increase. For example, the 30% tariff on imported PV roughly matched the cost reductions of the prior year. Meanwhile, battery efficiency is improving dramatically, year over year. Entire solar farms have been bid (and built) for about $0.02 per kilowatt-hour, and wind and/or solar with battery backup for about $0.03 per kilowatt-hour. At those prices, it is far cheaper to install renewable power than coal or NatGas, especially given the years it takes to create/develop fossil fuel plants.

Note that we haven’t even talked about peak coal and peak oil. Those concepts are alive and well; fracking technology has just pushed them back maybe 10 years from a production (supply-side) perspective. At some point you hit the maximum possible production of a non-renewable resource, and production can only go down (and prices up) from there. World production of oil is now up to 100 million barrels per day. But conventional oil wells deplete at about 4%-5% per year, so you need about 4% more new wells every year. Fracked wells drop about 25%-30% in the first year! So you need many more new wells each year just to stay even. But let’s go on to efficiency, probably the major demand-side force.
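The supply-side treadmill described above is just arithmetic; a quick sketch with the approximate figures from the text:

```python
# Replacement capacity needed each year just to hold production flat,
# using the rough decline rates quoted above (approximate figures).

world_production = 100e6   # barrels per day

for label, decline in [("conventional wells (~4.5%/yr)", 0.045),
                       ("fracked wells (~27.5% in year one)", 0.275)]:
    replacement = world_production * decline
    print(f"{label}: ~{replacement / 1e6:.1f}M bbl/day of new capacity per year")
```

Even at conventional decline rates, the industry must replace several million barrels per day of capacity every year before it can grow at all.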

3. Efficiency. The incandescent light bulb produces very little light: of its 100 watts of energy, more than 95% becomes heat and just a tiny bit becomes light. With only 10-15 watts, an LED can produce the same light that required 100 watts in days of old. The internal combustion engine is hugely inefficient, producing mostly (unused) heat and directly harnessing only 10-15% of the energy from gas or diesel… plus it takes huge amounts of energy to mine, transport, refine, transport, and retail the fuel. Electric engines are far more efficient, and they produce no toxic emissions. A great book that talks about energy, efficiency and trends is by Ayres & Ayres, Crossing the Energy Divide. The monster power plants (nuclear, coal, NatGas) have serious efficiency issues. They produce huge amounts of heat for steam turbines, but most of the heat is lost/wasted (let’s say 50%). Electricity must then be transmitted long distances through transmission lines, where still more can be lost along the way.
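The efficiency gaps described above are stark enough to show with round numbers (all of these figures are assumed for illustration):

```python
# Rough efficiency comparisons using the round figures in the text.

def relative_savings(new_watts, old_watts=100):
    """How many times less energy the new device uses for the same light."""
    return old_watts / new_watts

print(f"LED at 12 W vs. 100 W incandescent: ~{relative_savings(12):.1f}x less energy")

ice_efficiency = 0.125   # ~10-15% of fuel energy does useful work
ev_efficiency = 0.75     # electric drivetrains commonly cited near 70-80%
print(f"Electric vs. internal combustion: ~{ev_efficiency / ice_efficiency:.0f}x more efficient")
```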

Producing power as needed, where needed, makes so much more sense in most cases. Right now, using today’s technology, pretty much everyone can produce most of their own power (PV or wind) at about the same cost as the power monopolies. But Moore’s law is making renewable technology better and better every year. Add some batteries and microgrid technology and you have a robust electric system.

The losers in these trends/scenarios could be the BIG oil companies and the electric monopolies. They will fight the move until they change, or they lose. Just like peak oil, it is a matter of time… but that time is coming faster and faster…

Saudi is trying to keep prices high enough to complete their oil Initial Public Offering so they can diversify out of oil. Venezuela is offering a new cyber coin IPO (their Petro ICO) with barrels of buried oil as collateral (See Initial Kleptocurrency Offering). But what if that oil becomes a stranded asset? Your Petro currency becomes as worthless as the Venezuelan Bolivar.

You really want to carefully consider how much and how long you want to own fossil fuel assets… Fossil fuels may be dead in a decade or two… Moore or less.

Triangulation to augment your Qual study

Triangulation in research is a lot like the old technology of geometry and surveying, where you take the distance from three known points to compute an exact location on a map… give or take a few yards. LORAN technology, using radio signals and such, was used in WWII. With a LORAN in the Gulf, I remember being able to find where we were on a sailboat, approximately. The problem was that we were in an area of the Gulf of Mexico with only two LORAN readings. With three you can triangulate; with two you can only approximate.
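The surveying idea maps directly to a little algebra: subtract the three circle equations pairwise and a 2x2 linear system pops out. A minimal sketch (the coordinates and distances are made up):

```python
# 2-D trilateration: given distances to three known points, recover
# the unknown position by subtracting the circle equations pairwise,
# which leaves a 2x2 linear system.

def trilaterate(p1, d1, p2, d2, p3, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A point at (3, 4) measured from three landmarks:
x, y = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
print(f"({x:.2f}, {y:.2f})")   # -> (3.00, 4.00)
```

With only two landmarks, the determinant of the analogous system vanishes and you are left with two candidate points, which is exactly the "two readings, only approximate" problem on the sailboat.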

Triangulation in academic research is the kind of thing you can do to augment your qual study. As discussed elsewhere, Delphi studies might need to be recharacterized as mixed-method if some of the research is sufficiently quantitative, i.e., if the second round has a lot of respondents and it makes sense to do stats, like correlation on several variables.

So, in any qual study, you might consider including triangulation. There are a few types of triangulation (depending on your source), but let’s focus on just two: data and lit/theoretical. Data triangulation would be if you could find published statistics in the area that allow for some corroboration of the findings from the study. In terms of data, maybe some stats give an estimate of the independent and/or dependent variables (predictor and predicted variables in the QUAL world), possibly even the intersection of the two. Does the available data align with the findings of the study?

Internal data to a study should be kept separate from external data triangulation. In Delphi studies, for example, there might be an alignment of the more general findings from round 1 and rankings of round 2. This offers up internal consistency.
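One way to put a number on that internal consistency (an illustration, not a prescribed method) is a rank correlation between round-1 theme frequency and round-2 ratings. The sample data below are hypothetical, with no tied values:

```python
# Spearman rank correlation between round-1 mention counts and
# round-2 mean ratings for the same themes (no ties assumed).

def spearman(xs, ys):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

mentions_r1 = [12, 9, 7, 4, 2]          # hypothetical theme mention counts
scores_r2 = [4.6, 3.9, 4.1, 3.0, 2.2]   # hypothetical round-2 mean ratings
print(f"rho = {spearman(mentions_r1, scores_r2):.2f}")   # -> rho = 0.90
```

A high rho suggests the panel's round-2 priorities echo what surfaced spontaneously in round 1; a low rho is itself a finding worth discussing.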

One of the coolest, and potentially strongest, aspects of triangulation is literature (or theory) triangulation. Does the existing literature align with some of the key themes found in your QUAL study? Think of this as a meta-study lite. For a meta-study, there needs to be a lot of research, and a deep dive into the existing research can allow for a table of results that support, don’t support, or disprove various themes.

Here is a very interesting approach for triangulation within a Delphi study (Hopf, Francis, Helms, Haughney, & Bond, 2016). Find the article here at BMJopen. For past studies that did not address a specific topic, they used the bizarre label of “Silence,” as in not addressed in the specific study. A better label would probably be “not addressed” (n.a.). (The implication of silence is that the authors intentionally avoided that specific issue in their study.)

So, consider including one of the four or five types of triangulation in your qual study to strengthen the support for your findings (or to highlight divergent findings). For the regular researcher (say, dissertation), consider simply doing a meta-analysis, and avoid all that messy questionnaire stuff, if the field is full of existing research.

If you use Delphi, you will be able to project into the future. You can explore how some of the themes identified in the research grow or wane in an uncertain future, and what conditions (triggers) might initiate major future disruption, i.e., scenario analysis.

References

Hopf, Y. M., Francis, J., Helms, P. J., Haughney, J., & Bond, C. (2016). Core requirements for successful data linkage: an example of a triangulation method. BMJ Open, 6(10), e011879. doi:10.1136/bmjopen-2016-011879 Retrieved from: http://bmjopen.bmj.com/content/6/10/e011879


Qubit

The Future of Computing Is Taking on a Life of Its Own

Previously, we talked about the Tic-Toc of computing at Intel, and how Gordon’s law (Moore’s law) of computing – 18 months to double speed (and halve price) – is starting to hit a brick wall (Outa Time, the tic-toc of Intel and modern computing). Breaking through the 14-nanometer barrier is a physical limitation inherent in silicon chips that will be hard to surpass. Ed Jordan’s dissertation addressed this limit, and his Delphi study showed what the next technology might likely be, and how soon it might be viable. His study found that several technologies were looming on the horizon (likely less than 50 years out)… and that organic (i.e., protein-based) computing was the most promising, and should certainly happen sometime in the next 30 years.

Apparently quantum computing technology is here and now – kinda – especially at Google. See the Nicas (2017) WSJ article about quantum computing in the future of computing. As the article relates about the expert Neven, he’s pretty certain that no one understands quantum physics. At the atomic level, a qubit can be both on and off, at the same time. The conversation goes into parallel universes and such… both here and there, simultaneously. The quantum computer is run at absolute zero temperature (give or take a fraction of a degree). Storage density using qubits is unimaginable. The computer works completely differently, however: it works by eliminating the non-feasible to arrive at good answers, but not necessarily the best answer. Heuristics, kinda. The error rate is humongous, apparently, requiring maybe 100 error-correction qubits for every single working qubit.

Ed Jordan was reminiscing about quantum computing yesterday… “Basically, all computing in all its permutations needs to be rethunk. Quantum computing is sort of the Holy Grail. One could argue it is sort of like controlled fusion: always just 10 years away. Ten years ago, it was 10 years away. Ten years from now it may still be ten years away. There is a truckload of money being thrown at it. But there isn’t anything mature enough yet to do anything that looks like real computing. The problem is: how do you read out the results? Like Schrödinger’s cat, that qubit could be alive or dead, and by looking at it you cause different results to happen – as opposed to something that exists independent of your observation.”

Quantum computing is now moving past the technically impossible into the proven and functional, and maybe soon the viable. The players in this market are Google (Alphabet), IBM and apparently the NSA (if whistleblower Snowden is to be believed).

Intel may not be able to capitalize on the next generation of computing. Some computations, such as breaking encryption, can probably be done in a couple of seconds on a quantum computer, even though they might take multiple current silicon computers a lifetime. There are several potential uses of the quantum computer that make businesses and security targets very nervous.

Jordan and Hall (2016) talk about using Delphi to anticipate inflection points that are possible on the horizon, including those scenarios that would be possible via quantum computing, or bio-computing for that matter. The use of experts or informed people could make such inflection points more evident, and the ability to develop contingency plans more effective.

One of the most interesting things in the Nicas article is a look at the breakthroughs in computing technology, compared to Jordan’s 2010 dissertation. He found that two or three types of technology should likely be feasible within 25 to 40 years and viable in application within about 30 to 50 years; in his case, that would be as early as about 2040. Note that the experts discussed by Nicas pegged full application of a quantum computer at about 2026; that is when digital security will take on a whole new level of risk. It also makes you wonder how blockchain (bitcoin) will fare in the new age of supersonic computing.

This seems like a great time to start working on security safeguards that are nothing like the current technology. Can you imagine the return of no-tech or lo-tech? Kinda reminds you of the revival of the old “brick” phones for analog service (in the middle of the Everglades).

References

Debnath, S., Linke, N. M., Figgatt, C., Landsman, K. A., Wright, K., & Monroe, C. (2016). Demonstration of a small programmable quantum computer with atomic qubits. Nature, 536(7614), 63–66. doi:10.1038/nature18648

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi Method. (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3442759)

Jordan, E. A., & Hall, E. B. (2016). Group decision making and Integrated Product Teams: An alternative approach using Delphi.  In C. A. Lentz (Ed.), The refractive thinker: Vol. 10. Effective business strategies for the defense sector. (pp. 1-20) Las Vegas, NV: The Refractive Thinker® Press. ISBN #: 978-0-9840054-5-1. Retrieved from: http://refractivethinker.com/chapters/rt-vol-x-ch-1-defense-sector-procurement-planning-a-delphi-augmented-approach-to-group-decision-making/

Nicas, J. (2017, November/December). Welcome to the quantum age: The future of computing. Wall Street Journal. Retrieved from: https://www.wsj.com/articles/how-googles-quantum-computer-could-change-the-world-1508158847

Consensus: Let’s agree to look for agreement, not consensus

Most of the hunters (academic researchers) searching for consensus in their Delphi research are new to the sport. They believe that they must bag really big game or come home empty-handed. But we don’t agree. In fact, once you have had a chance to experience Delphi hunting once or twice, your perception of the game changes.

Consensus is a BIG dilemma within Delphi research. However, it is generally an unnecessary consumer of time and energy. The original Delphi Technique used by the RAND Corporation aimed for consensus in many cases. That is, the U.S. government could either enter a nuclear arms race or not; there really was no middle ground. Consequently, it was counterproductive to build a technique that could not reach consensus. It became binary: reach consensus, and a plan could be recommended to the president; no consensus, and this too was useful, but less helpful, in informing the president. (The knowledge that the experts could not come up with a clear path forward, even when exerting a structured assessment process, is also very good to know.)

Consensus. The consensus process – getting teams of experts to think through complex problems and come up with the best solutions – is critical to effective teamwork and to the Delphi process. In most cases, however, it is not necessary – or even desirable – to come up with the one and only best solution. So long as there is no confusion about the facts and the issues, forcing a consensus when there is none is counter-productive (Fink, Kosecoff, Chassin & Brook, 1984; Hall, 2009, pp. 20-21).

Table 1 shows the general characteristics of various types of nominal group study techniques (Hall & Jordan, 2013, p. 106). Note that the so-called traditional Delphi Technique and the RAND/UCLA appropriateness approach aim for consensus. The so-called Modified Delphi might not search for consensus and might not utilize experts. Researchers use the RAND/UCLA approach extensively to look for the best medical treatment protocol when only limited data are available, relying heavily on the expertise of the doctors involved to suggest – sometimes based on their best and informed guess – what protocol might work best. The doctors can only recommend one protocol. Consensus is needed here.

(Table reprinted with permission Hall and Jordan (2013), p. 106).

But consensus is rarely needed in business research, and even in most academic research, although it is usually found to some degree. For example, the most important factors may be best business practices. Of the total list of 10 to 30 factors, few are MOST important. Often, the second round of Delphi aims to prioritize the qualitative factors identified in round 1. There are usually natural separation points between the most important factors (e.g., 4.5 out of 5), those of medium importance (3 out of 5), and the low-importance factors.

Those researchers who are fixated on consensus might spend time, maybe a lot of time, trying to find that often elusive component called consensus. There are usually varying levels of agreement. Five doctors might agree on one single best protocol, but 10 probably won’t, unanimously. Interestingly, as the number of participants increases, the ability to talk statistically significantly about the results increases; however, the likelihood of pure, 100% consensus diminishes. For example, a very small study of five doctors reaches unanimous consensus; but when it is repeated with 30 doctors, there is only 87% agreement. Obviously, one would prefer the quantitative and statistically significant results from the second study. (Usually you are forecasting with Delphi; 100% agreement implies a degree of certainty in an uncertain future, which can easily result in the misapplication of a very useful planning/research tool.)
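The shrinking odds of unanimity are easy to illustrate. Assuming, simplistically, that each expert independently favors the leading option with probability p, unanimity occurs with probability p raised to the panel size:

```python
# Why unanimous consensus fades as panels grow (independence assumed,
# which is a simplification of real expert panels).

p = 0.95   # assumed chance any one expert favors the leading protocol
for n in [5, 10, 30]:
    print(f"{n} experts: P(unanimous) = {p ** n:.2f}")
```

Even with 95% individual agreement, a 30-person panel is unanimous barely a fifth of the time, which is why larger panels trade unanimity for statistical power.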

This brings us to qualitative Delphi vs. a more quantitative, mixed-method, Delphi. Usually Delphi is considered QUAL for several reasons. It works with a small number of informed, or expert, panelists. It usually gathers qualitative information in round 1. However, the qualitative responses are prioritized and/or ranked and/or correlated in round 2, round 3, etc. If a larger sample of participants results in 30 or more respondents in round 2, then the study probably should be upgraded from a purely qualitative study to mixed-method. That is, if the level of quantitative information gathered in round 2 is sufficient, statistical analysis can be meaningfully applied. Then you would look for statistical results (central tendency, dispersion, and maybe even correlation). You will find a confidence interval for all of your factors, those that are very important (say 8 or higher out of 10, +/- 1.5) and those that aren’t important. In this way, you could find those factors that are both important and statistically more important than other factors: a great time to declare a “consensus” victory.
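As a sketch of those round-2 statistics (the ratings below are hypothetical), the Python standard library is enough for the mean, dispersion, and a rough confidence interval:

```python
# Mean, dispersion, and a ~95% normal-approximation confidence
# interval for one factor's round-2 ratings (hypothetical data).

import statistics

ratings = [9, 8, 10, 7, 9, 8, 9, 10, 8, 7, 9, 8]   # one factor, 0-10 scale
n = len(ratings)
mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)
half_width = 1.96 * sd / n ** 0.5   # normal approximation; a t-value is safer for small n

print(f"mean {mean:.2f}, 95% CI +/- {half_width:.2f}")
```

Repeating this per factor gives exactly the separation described above: factors whose intervals sit high and do not overlap the middling ones are the statistically distinguishable "winners."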

TIP: Consider using more detailed scales. A 5-point Likert-type scale will not provide the same statistical detail as a 7-point, a 10-point, or maybe even a ratio (100%) scale, if that makes sense for the study.

So, in the big-game hunt for consensus, most hunters continue to look for the long-extinct woolly mammoth. Maybe they should “modify” their Delphi game for an easier search for success instead . . .

What do you think?

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2: Research Methodology, (pp. 3-27). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from: http://www.RefractiveThinker.com/

Hall, E. B., & Jordan, E. A. (2013). Strategic and scenario planning using Delphi: Long-term and rapid planning utilizing the genius of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. II. Research methodology (3rd ed.). (pp. 103-123) Las Vegas, NV: The Refractive Thinker® Press.

Scenarios Now and the Genius (hidden) within the Crowd

It’s been about 10 years since the Great Recession of 2007-2008. (It formally started in December of 2007.) A 2009 McKinsey study showed that CEOs wished they had done more scenario planning, which would have made them more flexible and resilient through the Great Recession. In a 2011 article, Hall discusses the genius of crowds and group planning – especially scenario planning.

The Hall article spent a lot of time assessing group collaboration, especially utilizing the power available via the Internet. Wikipedia is one of the greatest – and most successful – collaboration tools of all time. It is a non-profit that involves millions of volunteers daily to add content and regulate the quality of the facts. In this day of faux news, Wikipedia is a stable island in the turbulent ocean of content. Anyone who has corrections to make to any page (called an article) is encouraged to do so. However, the corrections need to be fact-based and source-rich. Unlike a typical wiki, where anything goes, the quality of content is very tightly controlled. As new information and research comes out on a topic, Wikipedia articles usually reflect those changes quickly and accurately. Bogus information usually doesn’t make it in, and biased writing is usually flagged. Sources are requested when an unsubstantiated fact is presented.

Okay, that’s one of the best ways to use crowds. People with an active interest – and maybe even a high level of expertise – update the content. But what happens when the crowd is a group of laypeople? Jay Leno made an entire career from the “wisdom” of people on the street when he was out Jay Walking. The lack of general knowledge in many areas is staggering. Info about the latest scandal or gossip by celebs, on the other hand, might be really well circulated. So how can you gather information from a crowd of people when the crowd may be generally wrong?

It turns out that researchers at MIT and Princeton have figured out how to use statistics to tell when the crowd is right and when an informed minority is much more accurate (Prelec, Seung & McCoy, 2017). (See a Daniel Akst overview WSJ article here.) Let’s say you are asking a lot of people a question on which the general crowd is misinformed. The answer, on average, will be wrong. There might be a select few in the crowd who really do know the answer, but their voices are drowned out, statistically speaking. These researchers took a very clever approach: they asked a follow-on question about what everyone else will answer. The people who really know will often have a very accurate idea of how wrong the crowd will be. So the questions with big disparities can be identified, and you can give credit to the informed few while ignoring the loud noise from the crowd.
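A toy sketch of that "surprisingly popular" rule (the vote shares below are made up, and Prelec et al.'s actual method is more elaborate): pick the answer whose real vote share beats the crowd's prediction of its vote share:

```python
# The "surprisingly popular" heuristic: the answer that gets more
# votes than the crowd predicted it would is likely the correct one.

def surprisingly_popular(actual_share, predicted_share):
    """Return the answer most under-predicted by the crowd."""
    return max(actual_share, key=lambda a: actual_share[a] - predicted_share[a])

# "Is Philadelphia the capital of Pennsylvania?" Most say yes (wrongly);
# those who know it's Harrisburg also predict most others will say yes.
actual = {"yes": 0.65, "no": 0.35}      # hypothetical actual vote shares
predicted = {"yes": 0.80, "no": 0.20}   # crowd's average prediction of the vote
print(surprisingly_popular(actual, predicted))   # -> no (the correct answer)
```

"No" wins because it is more popular than the crowd expected, even though it lost the raw vote; that gap is the statistical fingerprint of the informed few.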

Very cool. That’s how you can squeeze out knowledge and wisdom from a noisy crowd of less-than-informed people.

The question begs to be asked, however: Why not simply ask the respondents how certain they are? Or, maybe, ask the people of Pennsylvania what their state capital is, not the other 49 states, who will generally get it wrong. Maybe even put some money on it to add a little incentive, rewarding true positives and penalizing incorrect answers such that only the crazy or the informed will “bet the farm” on answers where they are not absolutely positive.

But then, that too is another study.

Now, to return to scenario planning. Usually with scenario planning, you would have people who are already well informed. However, broad problems have different silos of expertise. Maybe a degree-of-confidence measure would be possible in the process of scenario creation: areas where a specific participant feels more confident might get more weight than areas where their confidence is lower. Hmm… Sounds like something that could be done very well with Delphi, provided there were well-informed people to poll.
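That confidence-weighting idea could be sketched as a simple aggregation rule. To be clear, this is a hypothetical illustration, not part of the classic Delphi method; the panel data and the function name are invented:

```python
def confidence_weighted_estimate(responses):
    """Aggregate panelists' estimates, weighting each by the
    panelist's self-reported confidence (0.0 to 1.0) on this topic.

    A speculative weighting rule: experts who rate themselves more
    confident in a given silo of expertise count more there.
    """
    total_weight = sum(conf for _, conf in responses)
    return sum(est * conf for est, conf in responses) / total_weight

# Made-up round-one Delphi estimates (say, "% revenue impact under
# scenario X"), each paired with that expert's self-rated confidence.
panel = [(30.0, 0.9), (10.0, 0.2), (25.0, 0.7)]
print(round(confidence_weighted_estimate(panel), 2))  # prints 25.83
```

Note how the low-confidence outlier (10.0 at 0.2 confidence) barely moves the result, whereas a plain average of the three estimates would be 21.67.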

Note that scenarios are different from probabilities; often scenarios are not high-probability events. You are usually looking at possible scenarios that are viable. The “base case” scenario is what goes into the business plan, so that may be the 50% scenario; all the other scenarios cover everything else. The base case is only really likely to occur if nothing major changes in the macro- and microeconomic world. Changes always happen, but the question is: does the change “signal” that the bus has left the freeway, and that new scenarios are now at play?

On average, a recession has occurred about every 7 years into a recovery; we are about 10 years into the recovery from the Great Recession. Of course, many of the Trump factors could be massively disrupting. Not to name them all, but on the most positive case, 4% to 5% economic growth in the USA should be a scenario that every business is considering. (A strengthening US and world economy may, or may not, be directly caused by Trump.) The nice thing about sound scenario planning is that, as new “triggers” arise, they may (and should) lead directly into existing scenarios.

Having no scenario planning in your business plan… now that seems like a very bad plan.

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (2nd ed., pp. 3-28). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Hall, E. (2010). Innovation out of turbulence: Scenario and survival plans that utilizes groups and the wisdom of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. 5. Strategy in innovation (5th ed., pp. 1-30). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Prelec, D., Seung, H. S., & McCoy, J. (2017). A solution to the single-question crowd wisdom problem. Nature, 541(7638), 532-535. doi:10.1038/nature21054. Retrieved from http://www.nature.com/nature/journal/v541/n7638/full/nature21054.html

Making sure the IRS Preparers are Prepared… Backcasting & Learning Theory

Dr Dave Schrader recently (December 2016) completed a very cool dissertation pertaining to the IRS and its (in)ability to assess tax preparers’ competency, and its (in)ability to test the preparers’ preparedness. {Sure, that’s easy for you to say!}

Over the last few years, the IRS has been charged with determining tax preparers’ competency. (Not the CPAs, mind you, but the millions of — shall we say — undocumented tax preparers.) The problem was that the IRS had not really determined what the preparers should know before trying to test that they knew it.

Just as the IRS was starting to launch a “testing” of competencies, the civil courts pulled the rug out. Another year or so has passed, and a voluntary compliance program is now in place… Still no uniform requirements as to what those preparers should know in order to be prepared for the tests. Most importantly, it is no longer just about tests, even if they start up again: with the change in federal law governing competency, tax preparers must be competent every single time they sign their name to a tax return, no matter how complicated the return.

What could go wrong with this?! 🙂

So, Dave’s challenge was to do a dissertation in this murky quagmire. He identified the requirements: what preparers should know (generally), how they should learn it, and how competency should be assessed. He then tied this all into learning theory, framing the skill identification, development, and assessment model as a construct for an effective total learning system.

If the dissertation sounds busy, it is. There are lots of tables and charts to guide the reader through the mundane and the details.

Anyone teaching accounting should be interested in this dissertation. The management within the IRS should be calling Dr Dave in to assist with their Preparer Preparedness Program!

From a Human Resources (HR) or management perspective, this is a very cool study. It starts with the skills needed and works backwards to how and where those skills should be developed: education, on-the-job training, or job experience. This is most of the way to “HR backcasting” for developing the skills needed for future jobs. Although backcasting is most often used in economic development, the method, by necessity, must consider the skills of the workers for those future jobs.

Can’t wait for the articles that will come out of this dissertation by this accountant (Accredited Accountant, Tax Preparer, and Advisor), teacher and newly minted Doctor.

Reference

Schrader, D. M. (2016). Modified Delphi investigation of core competencies for tax preparers (D.B.A. dissertation). University of Phoenix, Arizona. Dissertations & Theses @ University of Phoenix database.

Cloud Computing in the HR World

Here is a great Delphi dissertation from Dr. Tracy Celaya in 2015, entitled Cloud-Based Computing and Human Resource Management Performance: A Delphi Study.

The dissertation looked at cloud-based computing in the IT functions of HR. Specifically, it addressed the adoption of cloud technologies. Very cool research related to the adoption of IT in HR and the management of HR’s whole move into the next generation of technologies.

Want to know the best HRM practices? Want to know why HR is so slow to adopt cloud technologies? This dissertation is for you. Look for articles on this topic coming out soon from the new Dr. Celaya. 🙂

Here is the abstract:

The purpose of this qualitative study with a modified Delphi research design was to understand the reasons human resource (HR) leaders are slow to implement Cloud-based technologies and potentially identify how Cloud-Based Computing influences human resource management (HRM) and HR effectiveness, and potentially the overall performance of the organization.  Business executives and HR leaders acknowledge the effect of technology on business processes and strategies, and the leader’s influence on technology implementation and adoption.  Cloud-Based Computing is fast becoming the standard for conducting HR processes and HR leaders must be prepared to implement the change effectively.  Study findings revealed characteristics demonstrated by HR leaders successfully implementing cloud technology, best practices for successful implementation, factors championing and challenging Cloud-Based Computing adoption, and perceived effects on HRM and organizational performance as a result of using Cloud-Based Computing.  The outcomes of this study may provide the foundation of a model for implementing Cloud-Based Computing, a leadership model including characteristics of technology early adopters in HR, and identify factors impeding adoption and may assist HR leaders in creating effective change management strategies for adopting and implementing Cloud-Based Computing.  Findings and recommendation from this study will enable HR professionals and leaders to make informed decision on the adoption of Cloud-Based Computing and improve the effectiveness, efficiency, and strategic capability of HR.

