AI, The Princess Bride & the Classic Blunders of Foresight

A version of this article was originally published in Research-Technology Management

In the classic movie The Princess Bride, the evil genius exclaims to the hero that he has fallen victim to one of the classic blunders. “The most famous of which,” he continues, “is never get involved in a land war in Asia.” Given ChatGPT’s rise over the last year, it is worth exploring the classic blunders of foresight through the lens of applying AI. While AI is incredibly powerful at speeding up the acquisition and use of knowledge, it still takes a professional to understand where and how it should be applied.

Anchoring on the Past to Forecast One Future

The largest blunder we make in foresight is anchoring on the known past to make singular forecasts of the future. A deep understanding of historical patterns is essential to an overall foresight competency. But too often these established ways of knowing the past harden into rigid worldviews that discount or actively warp new information about the external environment. We have no practice of regularly updating our assumptions about the future, so it is critical that we continually question those assumptions and seek out emerging trends that may fall outside our worldviews. Otherwise, when asked to forecast, our brains anchor where we have the most knowledge, which is often on things that have been around longest. This skews human forecasts toward over-dependence on what is already known. We are then surprised by “black swans” that, in hindsight, were easily foreseeable from more recent, disregarded events that did not fit our model of the world.

One of the first temptations for users of AI large language models (LLMs) is to forecast the future. Trained on immense datasets, ChatGPT and other LLMs draw on more data than a single human, or a team of humans, could read in a lifetime. Using all that data to make individual predictions of the future seems a natural fit for their capabilities. However, the data these algorithms are trained on are not current: at some point the designers have to stop loading information in and start training the algorithm to make sense of it. LLMs are always looking into the past, and things that have been around longer have more data points associated with them. This makes LLMs fall victim to the same common logic error of anchoring, over-emphasizing the past at the expense of what is happening in the present. A short example: according to GPT-4, OpenAI’s most advanced system, the pandemic is still happening. Ask it to forecast future commercial real estate markets or retail environments and the answer will be warped by the huge outlier of the pandemic. While both commercial real estate and retail remain down post-pandemic, occupancy and sales are higher than when the dataset closed off in September 2021.

Just as with human forecasting techniques, this error is correctable. Loading research on current, emerging events and trends into the forecasting query ensures the LLM incorporates new information into its answer. While this cannot change the weighting and logic the LLM uses to construct the answer, it can update the assumptions about the future that go into a forecast. Asking it to consider alternate, less probable outcomes also helps widen the scope of forecasts or scenarios to include less heavily weighted, more recent trends and events.
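To make this concrete, here is a minimal sketch of what loading current research into a forecasting query can look like, assuming the OpenAI Python client; the model name and trend notes are illustrative placeholders, not details from any real engagement.

```python
# A minimal sketch of injecting current research into a forecasting query,
# assuming the OpenAI Python client (openai>=1.0). The model name and
# trend notes below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical analyst-gathered notes on developments after the training cutoff
recent_trends = (
    "- Commercial real estate occupancy has partially recovered post-pandemic.\n"
    "- Retail sales now exceed their levels at the model's training cutoff.\n"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a foresight analyst. Treat the user's trend notes "
                "as more current than your training data."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Recent research:\n{recent_trends}\n"
                "Forecast demand for US commercial real estate over the next "
                "five years. Alongside the baseline, include at least two "
                "alternate, lower-probability scenarios."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

The two prompt moves mirror the corrections above: fresh trend research updates the assumptions, and the explicit request for lower-probability scenarios counteracts the model’s anchoring on its most heavily represented past.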

Outsourcing the Process

Futurist Dr. Jim Dator’s second law states that “any useful idea about the future should at first seem ridiculous.” Its antecedent is the myth of Cassandra, who was cursed to foretell the future but never to be believed. The point is that, since humans cannot predict the future, it will be different from our current assumptions. New information about the future is hard to accept, and even harder to act upon, because it challenges those assumptions. Having leaders participate in the foresight process that leads to these challenging new ideas can prevent the immediate rejection or trivialization of important information about the future.

The application of LLMs in business is falling victim to the same problem. Leaders are not able to take part in building the database or training the algorithm. The LLM returns an answer with no reference to how it arrived at that answer or to the veracity of the content. If the LLM returns challenging information about the future, it is highly unlikely that leaders will act on it. Incorporating LLMs into the facilitation plan or decision-making process is an important step toward ensuring that leaders at least request more research, rather than rejecting results outright when they go against their assumptions.

Foresight as a Profession

While companies are increasing their internal foresight capabilities, many of these functions are staffed or led by people inside the company with little or no foresight training. This creates an over-reliance on trends rather than alternative scenarios, a pressure to report short-term implications over longer-term market shifts, and a filtering out of the more improbable or less actionable information. Most importantly, such teams lack tools to facilitate foresight discussions and the experience to know which tool should be used at what time to elicit the best results. Avoiding the Cassandra Curse requires a human touch to speak truth to power in safe ways. Delivering a report without understanding the context in which it will be used can derail any hope of it being received appropriately. Applying a methodology for decision-making about the future without knowing how it will affect the discussion can be disastrous. I am part of a long list of futurists who have learned these lessons the hard way through years of experience and schooling.

The early days of AI saw the same problem. Because of the cost and time of building databases and training LLMs, companies outsourced the work to third-party providers. Often these providers were technology companies with little knowledge of their clients’ businesses or markets. More recently, the new field of “prompt engineering” has sprung up to take even the asking of questions out of managers’ hands.

A recent example combining AI and foresight is a useful illustration. I use a workshop process that I have developed over 20 years, and every few years I bring a new tool or lens into the process to evolve it or target it to a specific client need. After alternative scenarios are constructed, teams imagine a customer’s day in each future. This creates empathy with the future environment for an ideation exercise using design thinking and jobs-to-be-done (JTBD). Ideas are then instantiated in design fictions: drawings, LEGO creations, or physical prototypes of a future product or service. This workshop can be counted on to yield many new JTBD and innovation ideas, and to communicate them effectively, through design fiction, to people who were not in the session.

With the client team, we decided to experiment with introducing AI into the workshop in three places. The first was replacing the four or five relatively brief personas with much richer personas created by ChatGPT from 12 demographic parameters each participant could pick from. These rich text personas were then augmented with photorealistic portraits generated by Midjourney, a text-to-image model. Finally, the top innovations from the ideation session were also illustrated by Midjourney rather than constructed by participants.
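As an illustration of that first intervention, the sketch below assembles a participant’s demographic selections into a single persona-generation prompt. The article does not enumerate the 12 parameters, so the field names and values shown here are assumptions.

```python
# A rough sketch of turning participant-selected demographic parameters into
# a persona-generation prompt, assuming the OpenAI Python client. The
# parameter names and values are assumptions; the original workshop's 12
# options are not listed in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

selections = {  # one participant's picks, illustrative only
    "age": "34",
    "occupation": "logistics coordinator",
    "household": "single parent, two children",
    "location": "mid-size midwestern city",
}

prompt = (
    "Write a rich, one-page customer persona grounded in these demographic "
    "parameters:\n"
    + "\n".join(f"- {key}: {value}" for key, value in selections.items())
    + "\n\nPlace the persona in this future scenario: <workshop scenario here>"
)

persona = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(persona.choices[0].message.content)
```

The persona text that comes back can then serve as the basis for a Midjourney portrait prompt, in line with the photorealistic augmentation described above.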

While each of these interventions produced a much better deliverable for its step of the workshop in far less time, the power of the results swamped the rest of the process and made the final innovations less effective. Participants anchored on the personas, because they were so detailed and realistic, and less on the differences of the future scenario they were exploring. As a result, fewer JTBD were generated. The ideas illustrated by Midjourney were beautiful, but participants were unable to connect with them the way they had with the human-made prototypes of past sessions. In short, while the products of the AI interventions were better, they made the overall workshop worse. I had to host a follow-up session to generate more JTBD and connections to innovations. Luckily, the work was for an innovation team that wanted to experiment and knew failure could be part of the process. Moving forward, I have a more nuanced professional understanding of when and how to deploy these new LLM tools in foresight and innovation work.

Keep the Professionals in the Loop

I experimented with these tools in a controlled environment with people who understood the risks of failure. As a professional, I have drawn lessons about the appropriate uses of LLMs in workshop settings and will apply them when designing the next one. This should be a cautionary tale, however, for the application of LLMs in business. Right now, people are applying LLMs to processes all over the company far less thoughtfully. Many may be employing external specialists who know the LLM technology but not the company’s business, or trying to replace human expertise with automation outright. Despite all the automation LLMs provide, organizations still need people who can bring professional knowledge to these experiments, to make sure these incredibly powerful tools are not making the end results of their processes less effective.

 
