Scenario Plans (& Delphi Research)

I looked into the future, and the time to act is now.

Category: Delphi Method

Qubit

The Future of Computing Is Taking on a Life of Its Own

Previously, we talked about the Tick-Tock of computing at Intel, and how Gordon’s law (Moore’s law) of computing – 18 months to double speed (and halve price) – is starting to hit a brick wall (Outa Time, the Tick-Tock of Intel and modern computing). Breaking through the 14-nanometer barrier is a physical limitation inherent in silicon chips that will be hard to surpass. Ed Jordan’s dissertation addressed this limit, and his Delphi study showed what the next technology would likely be and how soon it might be viable. His study found that several technologies were looming on the horizon (likely less than 50 years out), and that organic computing (i.e., protein-based) was the most promising and should arrive sometime in the next 30 years.
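To make the stakes concrete, here is a minimal sketch of the Moore’s-law arithmetic assumed above – a clean 18-month doubling, which is a simplification for illustration, not a physical law:

```python
def moores_law_multiplier(years: float, doubling_period_years: float = 1.5) -> float:
    """Speed multiplier after `years` of steady 18-month doublings."""
    return 2 ** (years / doubling_period_years)

for years in (3, 15, 30):
    m = moores_law_multiplier(years)
    print(f"{years:>2} years -> ~{m:,.0f}x the speed, ~1/{m:,.0f} the price")
```

Thirty years of steady doublings is 20 doublings – roughly a million-fold speedup – which is why a hard physical floor at 14 nanometers matters so much.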

Apparently quantum computing technology is here and now – kinda – especially at Google. See the Nicas (2017) WSJ article about quantum computing in the Future of Computing report. As the article relates about Google’s expert, Neven, he is pretty certain that no one understands quantum physics. At the atomic level, a qubit can be both on and off at the same time. The conversation goes into parallel universes and such… both here and there, simultaneously. The quantum computer runs at absolute zero temperature (give or take a fraction of a degree). Storage density using qubits is unimaginable. The computer works completely differently, however: it eliminates the non-feasible to arrive at good answers, though not necessarily the best answer. Heuristics, kinda. The error rate is humongous, apparently, requiring maybe 100 physical qubits for error correction around a single working qubit.

Ed Jordan was reminiscing about quantum computing yesterday… “Basically, all computing in all its permutations needs to be rethunk. Quantum computing is sort of the Holy Grail. One could argue it is sort of like controlled fusion: always just 10 years away. Ten years ago, it was 10 years away. Ten years from now it may still be ten years away. There is a truckload of money being thrown at it. But there isn’t anything mature enough yet to do anything that looks like real computing. The problem is how do you read out the results? Like Schrödinger’s cat, that qubit could be alive or dead, and by looking at it you cause different results to happen – as opposed to something that exists independent of your observation.”

Quantum computing is now moving past the technically impossible into the proven and functional, and maybe soon the commercially viable. The players in this market are Google (Alphabet), IBM, and apparently the NSA (if whistleblower Snowden is to be believed).

Intel may not be able to capitalize on the next generation of computing. Some computations, such as breaking encryption, could probably be done in a couple of seconds on a quantum computer, even though the same job might take many of today’s silicon computers a lifetime. Several potential uses of the quantum computer make businesses and security targets very nervous.

Jordan and Hall (2016) talk about using Delphi to anticipate inflection points on the horizon, including those scenarios that would become possible via quantum computing, or bio-computing for that matter. Using experts or informed people could make such inflection points more evident, and the development of contingency plans more effective.

One of the most interesting things to do with the Nicas article is to compare the breakthroughs in computing technology it describes to Jordan’s 2010 dissertation. He found that two or three types of technology would likely be feasible within 25 to 40 years and viable in application within about 30 to 50 years – in his case, as early as about 2040. Note that the experts discussed by Nicas pegged full application of a quantum computer at about 2026; that is when digital security will take on a whole new level of risk. It also makes you wonder how blockchain (bitcoin) will fare in the new age of supersonic computing.

This seems like a great time to start working on security safeguards that are nothing like the current technology. Can you imagine the return of no-tech or lo-tech? Kinda reminds you of the revival of the old “brick” phones for analog service (in the middle of the Everglades).

References

Debnath, S., Linke, N. M., Figgatt, C., Landsman, K. A., Wright, K., & Monroe, C. (2016). Demonstration of a small programmable quantum computer with atomic qubits. Nature, 536(7614), 63–66. doi:10.1038/nature18648

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi method (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3442759)

Jordan, E. A., & Hall, E. B. (2016). Group decision making and Integrated Product Teams: An alternative approach using Delphi. In C. A. Lentz (Ed.), The refractive thinker: Vol. 10. Effective business strategies for the defense sector (pp. 1-20). Las Vegas, NV: The Refractive Thinker® Press. ISBN: 978-0-9840054-5-1. Retrieved from http://refractivethinker.com/chapters/rt-vol-x-ch-1-defense-sector-procurement-planning-a-delphi-augmented-approach-to-group-decision-making/

Nicas, J. (2017, November/December). Welcome to the quantum age. The Future of Computing [special report], The Wall Street Journal. Retrieved from https://www.wsj.com/articles/how-googles-quantum-computer-could-change-the-world-1508158847

Consensus: Let’s agree to look for agreement, not consensus

Most of the hunters (academic researchers) searching for consensus in their Delphi research are new to the sport. They believe that they must bag really big game or come home empty-handed. But we don’t agree. In fact, once you have experienced the Delphi hunt once or twice, your perception of the game changes.

Consensus is a BIG dilemma within Delphi research. However, it is generally an unnecessary consumer of time and energy. The original Delphi Technique used by the RAND Corporation aimed for consensus for good reason. The U.S. government could either enter a nuclear arms race or not; there really was no middle ground. Consequently, it would have been counterproductive to build a technique that could not reach consensus. The outcome was binary: reach consensus, and a plan could be recommended to the president; fail to reach consensus, and that too informed the president, though less helpfully. (The knowledge that the experts could not find a clear path forward, even under a structured assessment process, is also valuable.)

Consensus. The consensus process – getting teams of experts to think through complex problems and come up with the best solutions – is critical to effective teamwork and to the Delphi process. In most cases, however, it is not necessary – or even desirable – to come up with the one and only best solution. So long as there is no confusion about the facts and the issues, forcing a consensus when there is none is counterproductive (Fink, Kosecoff, Chassin, & Brook, 1984; Hall, 2009, pp. 20-21).

Table 1 shows the general characteristics of various types of nominal group study techniques (Hall & Jordan, 2013, p. 106). Note that the so-called traditional Delphi Technique and the UCLA-RAND appropriateness approach aim for consensus. The so-called modified Delphi might not search for consensus and might not use experts. Researchers use the UCLA-RAND approach extensively to look for the best medical treatment protocol when only limited data are available, relying heavily on the expertise of the doctors involved to suggest – sometimes based on their best and informed guess – which protocol might work best. The doctors can recommend only one protocol. Consensus is needed here.

(Table 1 reprinted with permission from Hall and Jordan, 2013, p. 106.)

But consensus is rarely needed in business research, and even in most academic research, although some degree of it is usually found. For example, the goal may be a list of best business practices. Of a total list of 10 to 30 factors, only a few are MOST important. Often, the second round of Delphi aims to prioritize the qualitative factors identified in round 1. There are usually natural separation points between the most important factors (e.g., 4.5 out of 5), those of medium importance (3 out of 5), and the low-importance factors.
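As an illustration, here is a minimal sketch of that round-2 binning. The factor names, ratings, and cutoffs below are all hypothetical – in a real study the separation points come from the data, not from fixed thresholds:

```python
from statistics import mean

# Hypothetical round-2 ratings: factor -> panelists' scores on a 1-5 scale.
round2_ratings = {
    "Customer retention": [5, 4, 5, 4, 5],
    "Supplier diversity": [3, 3, 4, 2, 3],
    "Office floor plan":  [2, 1, 2, 2, 1],
}

for factor, ratings in sorted(round2_ratings.items(),
                              key=lambda kv: mean(kv[1]), reverse=True):
    avg = mean(ratings)
    tier = "HIGH" if avg >= 4.0 else "MEDIUM" if avg >= 2.5 else "LOW"
    print(f"{factor:<20} mean={avg:.1f}  {tier}")
```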

Those researchers who are fixated on consensus might spend time, maybe a lot of time, chasing that often elusive thing called consensus. In practice there are varying levels of agreement. Five doctors might agree on one single best protocol, but 10 probably won’t, unanimously. Interestingly, as the number of participants increases, the ability to make statistically meaningful statements about the results increases; the likelihood of pure, 100% consensus, however, diminishes. For example, a very small study of five doctors reaches unanimous consensus; repeated with 30 doctors, there is only 87% agreement. Obviously, one would prefer the quantitative and statistically significant results from the second study. (Usually you are forecasting with Delphi; 100% agreement implies a degree of certainty about an uncertain future, which can easily turn a very useful planning/research tool into a misapplied one.)

This brings us to qualitative Delphi vs. a more quantitative, mixed-method Delphi. Delphi is usually considered QUAL for several reasons: it works with a small number of informed, or expert, panelists, and it usually gathers qualitative information in round 1. However, the qualitative responses are prioritized and/or ranked and/or correlated in round 2, round 3, and so on. If a larger sample results in 30 or more respondents in round 2, then the study probably should be upgraded from purely qualitative to mixed-method. That is, if the quantitative information gathered in round 2 is sufficient, statistical analysis can be meaningfully applied. You would then look for statistical results (central tendency, dispersion, and maybe even correlation). You can compute a confidence interval for every factor, both those that are very important (say 8 or higher out of 10, +/- 1.5) and those that aren’t. In this way, you can find the factors that are both important and statistically more important than the other factors: a great time to declare a “consensus” victory.
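Here is a hedged sketch of what that upgrade looks like in practice: per-factor means, dispersion, and a rough 95% confidence interval via a normal approximation. The ratings below are simulated, and a real study would justify its choice of interval method:

```python
from math import sqrt
from statistics import mean, stdev

def ci95(ratings):
    """Rough 95% CI for the mean rating (normal approximation)."""
    m, s, n = mean(ratings), stdev(ratings), len(ratings)
    half = 1.96 * s / sqrt(n)
    return m - half, m + half

# Simulated round-2 ratings (30 respondents each, 1-10 scale).
factor_a = [9, 8, 10, 9, 8, 9, 10, 8, 9, 9] * 3
factor_b = [5, 6, 4, 5, 6, 5, 4, 6, 5, 5] * 3

lo_a, hi_a = ci95(factor_a)
lo_b, hi_b = ci95(factor_b)
print(f"Factor A: mean {mean(factor_a):.1f}, 95% CI ({lo_a:.1f}, {hi_a:.1f})")
print(f"Factor B: mean {mean(factor_b):.1f}, 95% CI ({lo_b:.1f}, {hi_b:.1f})")

# Non-overlapping intervals: Factor A is both important and statistically
# distinguishable from Factor B -- the "consensus victory" described above.
print("Statistically separated:", lo_a > hi_b)
```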

TIP: Consider using more detailed scales. A 5-point Likert-type scale will not provide the same statistical detail as a 7-point scale, a 10-point scale, or even a ratio 0-100% scale, where one makes sense.

And so, in the big-game hunt for consensus, most hunters continue to look for the long-extinct woolly mammoth. Maybe they should “modify” their Delphi game for an easier search for success instead . . .

What do you think?

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (pp. 3-27). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from http://www.RefractiveThinker.com/

Hall, E. B., & Jordan, E. A. (2013). Strategic and scenario planning using Delphi: Long-term and rapid planning utilizing the genius of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (3rd ed., pp. 103-123). Las Vegas, NV: The Refractive Thinker® Press.

Scenarios Now and the Genius (hidden) within the Crowd

It’s been about 10 years since the Great Recession of 2007-2008. (It formally started in December of 2007.) A 2009 McKinsey study showed that CEOs wished they had done more scenario planning – planning that would have made them more flexible and resilient through the Great Recession. Hall (2010) discusses the genius of crowds and group planning – especially scenario planning.

The Hall article spends a lot of time assessing group collaboration, especially collaboration that harnesses the power of the Internet. Wikipedia is one of the greatest – and most successful – collaboration tools of all time. It is a non-profit that relies on millions of volunteers daily to add content and police the quality of the facts. In this day of faux news, Wikipedia is a stable island in a turbulent ocean of content. Anyone who has corrections to make to any page (called an article) is encouraged to do so. However, the corrections need to be fact-based and source-rich. Unlike a typical wiki, where anything goes, the quality of content is very tightly controlled. As new information and research comes out on a topic, Wikipedia articles usually reflect those changes quickly and accurately. Bogus information usually doesn’t make it in, and biased writing is usually flagged. Sources are requested whenever an unsubstantiated fact is presented.

Okay, that’s one of the best ways to use crowds: people with an active interest – and maybe even a high level of expertise – update the content. But what happens when the crowd is a group of laypeople? Jay Leno made an entire career from the “wisdom” of people on the street in his Jaywalking segments. The lack of general knowledge in many areas is staggering. Info about the latest celebrity scandal or gossip, on the other hand, might be really well circulated. So how can you gather information from a crowd of people when the crowd may be generally wrong?

It turns out that researchers at MIT and Princeton have figured out how to use statistics to tell when the crowd is right and when an informed minority is much more accurate (Prelec, Seung & McCoy, 2017). (See Daniel Akst’s overview article in the WSJ.) Say you ask a lot of people a question on which the general crowd is misinformed. The answer, on average, will be wrong. There might be a select few in the crowd who really do know the answer, but their voices are drowned out, statistically speaking. These researchers took a very clever approach: they also asked a follow-on question about what everyone else would answer. The people who really know often have a very accurate idea of how wrong the crowd will be. Answers with big disparities between actual and predicted popularity can then be identified, giving credit to the informed few while ignoring the loud noise from the crowd.
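For the curious, the selection rule from the Prelec, Seung, and McCoy paper – the “surprisingly popular” answer – reduces to a few lines. The vote shares below are invented for illustration, using the paper’s Pennsylvania-capital example:

```python
# Actual answer shares vs. the shares the crowd predicted each answer
# would get (both sets of numbers are invented for illustration).
votes       = {"Philadelphia": 0.65, "Harrisburg": 0.35}
predictions = {"Philadelphia": 0.80, "Harrisburg": 0.20}

def surprisingly_popular(votes, predictions):
    """Pick the answer whose actual share most exceeds its predicted share."""
    return max(votes, key=lambda a: votes[a] - predictions[a])

# The informed minority knows the crowd will over-pick Philadelphia, so the
# correct Harrisburg ends up more popular than predicted.
print(surprisingly_popular(votes, predictions))  # -> Harrisburg
```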

Very cool. That’s how you can squeeze out knowledge and wisdom from a noisy crowd of less-than-informed people.

The question begs to be asked, however: Why not simply ask the respondents how certain they are? Or ask only the people of Pennsylvania what their state capital is, rather than the other 49 states, who will generally get it wrong. Maybe even put some money on it – a little incentive that rewards true positives and makes incorrect answers costly – so that only the crazy or the informed will “bet the farm” on answers they are not absolutely positive about.

But then, that too is another study.

Now, to return to scenario planning. Usually with scenario planning you have people who are already well informed. However, broad problems span different silos of expertise. Perhaps a degree-of-confidence rating could be built into the process of scenario creation: areas where a specific participant feels more confident would get more weight than areas where their confidence is lower. Hmm… Sounds like something that could be done very well with Delphi, provided there were well-informed people to poll.

Note that scenarios are different from probabilities. Often scenarios are not high-probability events; you are usually looking at possible scenarios that are viable. The “base case” scenario is what goes into the business plan, so that may be the 50% scenario; all the other scenarios together cover everything else. The base case is only really likely to occur if nothing major changes in the macro- and micro-economic world. Changes always happen; the question is whether a change “signals” that the bus has left the freeway, and new scenario(s) are now at play.

On average, a recession arrives about seven years into a recovery, and we are about 10 years into the recovery from the Great Recession. Of course, many of the Trump factors could be massively disrupting. Not to name them all, but on the most positive side, 4% to 5% economic growth in the USA should be a scenario that every business considers. (A strengthening US and world economy may, or may not, be directly caused by Trump.) The nice thing about sound scenario planning is that as new “triggers” arise, they should lead directly into existing scenarios.
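Here is a hypothetical sketch of that trigger-to-scenario mapping – every name and number below is invented, and a real plan would watch richer signals than GDP growth alone:

```python
# Pre-built scenarios: the base case goes into the business plan; triggers
# (here, observed GDP growth) signal a switch to an alternative scenario.
scenarios = {
    "base case": {"gdp_growth": (1.5, 3.0), "plan": "current business plan"},
    "boom":      {"gdp_growth": (4.0, 6.0), "plan": "scale capacity, hire ahead"},
    "recession": {"gdp_growth": (-3.0, 0.0), "plan": "preserve cash, defer capex"},
}

def active_scenario(observed_growth: float) -> str:
    for name, s in scenarios.items():
        lo, hi = s["gdp_growth"]
        if lo <= observed_growth <= hi:
            return f"{name}: {s['plan']}"
    return "no pre-built scenario fits -- time to draft a new one"

print(active_scenario(4.5))  # the 4-5% growth trigger fires the boom scenario
print(active_scenario(0.8))  # a gap between scenarios: the bus left the freeway
```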

Having no scenario planning in your business plan… now that seems like a very bad plan.

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (2nd ed., pp. 3-28). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Hall, E. (2010). Innovation out of turbulence: Scenario and survival plans that utilizes groups and the wisdom of crowds. In C. A. Lentz (Ed.), The refractive thinker: Vol. 5. Strategy in innovation (5th ed., pp. 1-30). Las Vegas, NV: The Lentz Leadership Institute. (www.RefractiveThinker.com)

Prelec, D., Seung, H. S., & McCoy, J. (2017). A solution to the single-question crowd wisdom problem. Nature, 541(7638), 532-535. doi:10.1038/nature21054. Retrieved from http://www.nature.com/nature/journal/v541/n7638/full/nature21054.html

Making sure the IRS Preparers are Prepared… Backcasting & Learning Theory

Dr. Dave Schrader recently (December 2016) completed a very cool dissertation pertaining to the IRS and its (in)ability to assess tax preparers’ competency, and its (in)ability to test the preparers’ preparedness. {Sure, that’s easy for you to say!}

Over the last few years, the IRS has been charged with determining tax preparers’ competency. (Not the CPAs, mind you, but the millions of – shall we say – undocumented tax preparers.) The problem was that the IRS had not really determined what the preparers should know before trying to test that they knew it.

Just as the IRS was starting to launch a “testing” of competencies, the civil courts forced Congress to pull the rug out. Another year or so has passed, and a voluntary compliance program is now in place… but still no uniform requirements as to what those preparers should know in order to be prepared for the tests. Most importantly, it is no longer just about tests, even if they start up again. With the change in federal law governing competency, tax preparers must be competent every single time they sign their name to a tax return. No matter how complicated the return!

What could go wrong with this?! 🙂

So, Dave’s challenge was to do a dissertation into this murky quagmire. He worked out the requirements: what preparers should know (generally), how they should learn it, and how competency should be assessed. Then he tied it all into learning theory, framing the skill identification, development, and assessment model as a construct for an effective total learning system.

If the dissertation sounds busy, that’s because it is. Lots of tables and charts guide the reader through the details, mundane and otherwise.

Anyone teaching accounting should be interested in this dissertation. The management within the IRS should be calling Dr. Dave in to assist with their Preparer Preparedness Program!

From a Human Resources (HR) or management perspective, this is a very cool study. It starts from the skills needed and works backwards to how and where to develop those skills: education, on-the-job training, or job experience. This is most of the way to “HR backcasting” for developing the skills needed for future jobs. Although backcasting is most often applied to economic development, the method, by necessity, must consider the skills of the workers who will hold those future jobs.

Can’t wait for the articles that will come out of this dissertation by this accountant (Accredited Accountant, Tax Preparer, and Advisor), teacher, and newly minted doctor.

Reference

Schrader, D. M. (2016). Modified Delphi investigation of core competencies for tax preparers (Doctoral dissertation). University of Phoenix, Arizona. Dissertations & Theses @ University of Phoenix database.

Cloud Computing in the HR World

Here is a great Delphi dissertation from Dr. Tracy Celaya in 2015, entitled Cloud-Based Computing and Human Resource Management Performance: A Delphi Study.

The dissertation looked at the adoption of cloud-based computing in the IT functions of HR – specifically, the adoption of cloud technologies and the management of HR’s move into the next generation of technologies. Very cool research.

Want to know the best HRM practices? Want to know why HR is so slow to adopt cloud technologies? This dissertation is for you. Look for articles on this topic coming out soon from the new Dr. Celaya. 🙂

Here is the abstract:

The purpose of this qualitative study with a modified Delphi research design was to understand the reasons human resource (HR) leaders are slow to implement Cloud-based technologies and potentially identify how Cloud-Based Computing influences human resource management (HRM) and HR effectiveness, and potentially the overall performance of the organization.  Business executives and HR leaders acknowledge the effect of technology on business processes and strategies, and the leader’s influence on technology implementation and adoption.  Cloud-Based Computing is fast becoming the standard for conducting HR processes and HR leaders must be prepared to implement the change effectively.  Study findings revealed characteristics demonstrated by HR leaders successfully implementing cloud technology, best practices for successful implementation, factors championing and challenging Cloud-Based Computing adoption, and perceived effects on HRM and organizational performance as a result of using Cloud-Based Computing.  The outcomes of this study may provide the foundation of a model for implementing Cloud-Based Computing, a leadership model including characteristics of technology early adopters in HR, and identify factors impeding adoption and may assist HR leaders in creating effective change management strategies for adopting and implementing Cloud-Based Computing.  Findings and recommendation from this study will enable HR professionals and leaders to make informed decision on the adoption of Cloud-Based Computing and improve the effectiveness, efficiency, and strategic capability of HR.

Outa Time, the Tick-Tock of Intel and modern computing.

Ed Jordan’s dissertation research looked at the future of computing. He was inspired by the thought that Gordon’s law (Moore’s law) of computing – 18 months to double speed (and halve price) – was about to break down because of the limitations of silicon chips as they go below the 14-nanometer level. Since Intel lives and dies by the silicon chip, his research was really a look into the future: when will the old chip die, and what will be the next technology?

Jordan and Hall (2016) discuss the application of this disruptive technology in their DoD procurement planning article in the Refractive Thinker related to the use of Integrated Product Teams.

His research showed that the death of the silicon-chip computer would come sooner rather than later, and that several options appeared likely, including quantum computing. Scientists have just made a huge breakthrough toward quantum computing: see the WSJ article about it here, covering work published in the journal Nature (Debnath et al., 2016).

In the meantime, Intel’s decades-old cadence – a process shrink one year (the “tick”) and a new microarchitecture on that process the next (the “tock”) – has broken down. The so-called Tick-Tock of Intel is now outa time; the cycle seems to be more like 2 years (4 years, really).

So, will Intel die with the new technologies? Not necessarily: Intel can invent the disruptive technologies internally, or buy them up wherever the viable inventions well up.

References

Debnath, S., Linke, N. M., Figgatt, C., Landsman, K. A., Wright, K., & Monroe, C. (2016). Demonstration of a small programmable quantum computer with atomic qubits. Nature, 536(7614), 63–66. doi:10.1038/nature18648

Jordan, E. A. (2010). The semiconductor industry and emerging technologies: A study using a modified Delphi method (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3442759)

Jordan, E. A., & Hall, E. B. (2016). Group decision making and Integrated Product Teams: An alternative approach using Delphi. In C. A. Lentz (Ed.), The refractive thinker: Vol. 10. Effective business strategies for the defense sector (pp. 1-20). Las Vegas, NV: The Refractive Thinker® Press. ISBN: 978-0-9840054-5-1. Retrieved from http://refractivethinker.com/chapters/rt-vol-x-ch-1-defense-sector-procurement-planning-a-delphi-augmented-approach-to-group-decision-making/

The Conundrum of middle management, HR experts and Delphi research.

Here is the overview of the RefractiveThinker® article by Lentz (2009) that discussed some of her findings related to using HR experts in a single-round, quantitative Delphi study. See the prior blog post related to using a one-round quantitative Delphi method here.

The overview of the 2009 chapter by Lentz, The modified ask-the-experts Delphi method: The conundrum of human resource experts on management participation, is this:

“[The] Lentz Dissertation study … was … a quantitative correlational explanatory method, using a modified Ask-the-Experts Delphi technique to determine if the traditionally held view of the strategic management process where strategic decision making had once been entrusted solely to the organization’s top management was still valid. Historically, only those in senior leadership positions within the executive office were felt to understand and employ strategic literacy in order to possess the skill, knowledge, and expertise to most effectively formulate corporate strategy and make strategic decisions. The purpose of the present study was to extend the foundational work of Wooldridge and Floyd from their 1990 study, using the modified Delphi Technique to look at the significance of additional employee involvement in the strategic decision-making process as it correlates to organizational performance.”

Based on the 1990 work of Wooldridge and Floyd, this dissertation was able to skip round 1 of a typical Delphi study. Lentz hoped that the HR experts would corroborate the findings of the “Floyd Boyz,” as she called them. If the prior research were corroborated, she could comfortably extend it further and obtain a better understanding of the involvement of middle management in the strategic planning world.

But she didn’t get that first round of confirmation from the statistical analysis she was expecting! Maybe things have changed since 1990? That seems likely. Maybe the HR experts weren’t so expert after all? Hmmm… Maybe Delphi doesn’t always do what we hope it will do? Hmmm….

Sounds like a conundrum?

In the meanwhile, it seems that middle management is the late, great Rodney Dangerfield of strategic planning and decision making: it gets no respect.

References

Lentz, C. A. (2007).  Strategic decision making in organizational performance: A quantitative study of employee inclusiveness. D.M. dissertation, University of Phoenix, Arizona. Dissertations & Theses @ University of Phoenix database. (Publication No. AAT 3277192).

Lentz, C. (2009). The modified ask-the-experts Delphi method: The conundrum of human resource experts on management participation. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2: Research Methodology, (pp. 51-75). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from: http://refractivethinker.com/rt-vol-ii/

A single round (1 round) Delphi study. How can that be?

The 2007 dissertation by Cheryl Lentz (Dr. Cheryl) was interesting in many ways. One is that it was a single-round study. She was following on research that had been conducted previously, so she was able to use the prior research for the information and the factors she would otherwise have needed to gather in round 1 of a full Delphi study. I know, I know: that’s not a Delphi study if there is only one round. But she chose to call it a modified Delphi study, in large part because of the use of experts.

She recruited HR experts for the study. The results turned out to be a bit of a conundrum, as she discussed in her 2009 article in the Refractive Thinker. But that’s a discussion for another post.

There are several ways to change a Delphi study from the classical approach used by RAND back in the Cold War era, and therefore to categorize it as a “modified” Delphi study. One way is to use informed people, so that you avoid having to recruit “experts” – and having to justify the criteria used to decide what constitutes an expert. Another approach is not to aim for consensus. Most studies using Delphi don’t need full consensus; often they are aiming for best practices or the most important factors. The UCLA-RAND approach, by contrast, is used to get medical experts to decide on a single best protocol for treatment (in the absence of conclusive laboratory research). In that case, they do need to aim for consensus: without enough evidence and experience, no medical protocol should be recommended at all.

So, with all that said, do you think that Dr. Lentz’s dissertation, with only one round (essentially a second round), should have been categorized as a Delphi study?

A whole different question is: can a Delphi study be mixed-method or quantitative? (Most people think of Delphi as qualitative.)

Hall (2009) presents a table that summarizes the characteristics of the various types of nominal group studies. See the next blog post, here, related to the conundrum of the findings in this research study.

References

Hall, E. (2009). The Delphi primer: Doing real-world or academic research using a mixed-method approach. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2. Research methodology (pp. 3-27). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from http://www.RefractiveThinker.com/

Lentz, C. A. (2007).  Strategic decision making in organizational performance: A quantitative study of employee inclusiveness. D.M. dissertation, University of Phoenix, Arizona. Dissertations & Theses @ University of Phoenix database. (Publication No. AAT 3277192).

Lentz, C. (2009). The modified ask-the-experts Delphi method: The conundrum of human resource experts on management participation. In C. A. Lentz (Ed.), The refractive thinker: Vol. 2: Research Methodology, (pp. 51-75). Las Vegas, NV: The Refractive Thinker® Press. Retrieved from: http://refractivethinker.com/rt-vol-ii/

Delphi in the DoD Procurement Team (IPT) process

Jordan and Hall published an article in the Defense Sector edition (2016, Vol. X) of the Refractive Thinker related to augmenting the DoD procurement process with Delphi team planning (Jordan & Hall, 2016). Here is the summary.

The Delphi Method, or Delphi Technique, is an established method for bringing together teams of informed panelists, or experts, to analyze complex and interrelated problems. Organizations use group decision-making techniques to make sound plans – plans that gain support for the decisions made and build consensus. The U.S. Department of Defense (DoD) requires the use of Integrated Product Teams (IPTs) to ensure all disciplines are well represented in acquisition decisions. The IPT planning process has several limitations, however, including the biases and inefficiencies associated with face-to-face meetings. The IPT process could be augmented with Delphi analysis to develop more robust and more flexible procurement plans. Using the Delphi Method to augment IPTs could minimize the costs and limitations of more traditional group planning while also significantly improving the quality of procurement decisions. Delphi teams could be used with experts (or even with crowds) to provide sound analysis in many situations where the IPT process is ill-equipped to produce unbiased and long-term results. Delphi teams would also have the ability to look at bigger-picture issues, and thereby avoid the narrow-scope, tunnel-vision analysis within which most IPTs operate.

Reference

Jordan, E. A., & Hall, E. B. (2016). Group decision making and Integrated Product Teams: An alternative approach using Delphi. In C. A. Lentz (Ed.), The refractive thinker: Vol. 10. Effective business strategies for the defense sector (pp. 1-20). Las Vegas, NV: The Refractive Thinker® Press. ISBN: 978-0-9840054-5-1. Retrieved from http://refractivethinker.com/chapters/rt-vol-x-ch-1-defense-sector-procurement-planning-a-delphi-augmented-approach-to-group-decision-making/

The Volume 10 book: http://refractivethinker.com/books/the-refractive-thinker-vol-x-effective-business-strategies-for-the-defense-industry-sector/

Using the Delphi Method for planning… including scenario planning and horizon plans.

This site discusses and tracks the use of Delphi-type methods in doing all kinds of research: academic, theoretical and real-world. Businesses can use the Delphi method to identify key issues, develop scenario plans and/or do horizon planning.

Strategic business planning uses methods similar to those of the Delphi Method. A strategic planning workshop for plan development uses a modified SWOT situational analysis, for example. But the Delphi Method works best for horizon planning, future new-product planning, and scenario planning. We like to integrate disaster recovery planning (business continuity planning) into the scenario planning process.

Our strategic planning company (Hall, Hinkelman, and associates) has researched and published on scenario planning and the Delphi Method.

Find these articles/books at:

  • SBPs Storefront at LuLu Press: LuLu.com/spotlight/SBPlan (Chapter 8 of the Guide 2.0 as well as the Economic Development Plan.)
  • Refractive Thinker(r) RefractiveThinker.com (Look for articles/chapters on Delphi research including the Delphi Primer.)


We like to look for that future inflection point where it would be clear to everyone, including the dog: “Toto, I’ve a feeling we’re not in Kansas anymore.”
