PTW/PI All-Stars Book Club – Chapter Three
Strategy & Artificial Intelligence
Welcome to Chapter Three of the Playing to Win/Practitioner Insights (PTW/PI) book club. In the spirit of a book club discussion, I will do my best to respond to all comments, as I have done for every comment thus far – and there were many for Chapter Two. Of the 37 all-star pieces out of the 260 in the series, the randomizer picked Strategy & Artificial Intelligence (AI) as the third chapter. Like the first two, this one is from the second biggest category – help on understanding the context for strategy. I promise that the next several will be from other categories! You can find the whole PTW/PI series here.
My Reflections
This was the first of my five pieces in the five years of PTW/PI on AI (the other four are Investment Strategy and AI, Will AI Eradicate Practitioners of Strategy, Strategy, Artificial Intelligence & Entry Level Hires, and A Leader’s Role in Fostering AI Superpowers). There will be more going forward. I am not trying to position myself as an expert on AI. For that, you have to be 100% all-in working on AI – and I am not. Rather, I am working to create a context for understanding AI in relation to strategy.
The Knowledge Funnel
This piece attempted to set a foundation for understanding AI in the context of knowledge development, and it argued that to understand the role of AI, one must understand how knowledge is built. To do so, I referred to the concept of a knowledge funnel, which I introduced nearly two decades ago in 2009 in my book The Design of Business.
To borrow a paragraph from the original piece:
“The book posits (as shown above) that all knowledge in our world goes through three stages. Every domain starts as a mystery, at which point we don’t even know how to think about the subject. For example, at one point in history, we had no idea why objects fall to the ground — fate, the love of all objects for mother earth, animal spirits, etc. Some mysteries get advanced to heuristics — a way of thinking about it that helps get towards a valuable answer. In due course, a brilliant person posited that there is a universal force (gravity) that pushes all objects towards the ground, some more successfully (rocks) than others (birds). Some heuristics get advanced to algorithms — a formula that consistently produces the desired result. Objects accelerate toward the ground at 9.8 m/s².”
AI is best seen as a device for advancing knowledge domains through the Knowledge Funnel more quickly than would happen without it.
When it comes to AI, the heuristic domain is critically important. Talented people have a heuristic in their domain that makes them valuable – a deal-doing heuristic, a song-writing heuristic, a strategy heuristic, etc. They have stared into a mystery and created a heuristic.
That is what I did in strategy. People knew that having a strategy was important but lacked (at least to a great extent) a heuristic for getting from a strategy they didn’t find satisfactory to one that they did. That was a mystery to them. In the face of that mystery, I developed two heuristics – the Strategy Choice Cascade and the Strategic Choice Structuring Process. Thankfully, users seem to value being provided a heuristic to guide them, rather than thrashing around in a mystery. However, to the disappointment of some, neither is an algorithm. Running the heuristic well takes both judgment and experience.
The Role of AI
With the Knowledge Funnel as my organizing construct, I see AI as a tool for moving knowledge faster and more efficiently through the funnel. That is valuable because the farther knowledge gets along the funnel, the more efficient it is to deal with it. Mysteries are extremely expensive to solve because we don’t yet know how to think about the topic, and hence we have to consider all possibilities, features and variables. That is why so many mysteries are tackled in non-profits, whether universities or government-sponsored research centers.
When a mystery is pushed to a heuristic, we have a methodology for thinking through the features – and only those that have been shown to be important. However, heuristics have to be run by experienced, skilled, and expensive talent. Algorithms can be run by less-experienced human assets – or, in the modern world, by a computer and/or AI. So, when AI can push knowledge through the funnel faster, it adds value.
On this front, AI can take a mystery and crunch through it to potentially – though not certainly – find a heuristic. An early AI example of this is the Netflix movie suggestion engine. It takes everything you have watched and commented on and crunches through that data to find a heuristic – movies that you are much more likely to enjoy than a movie you might choose otherwise. Notice that it is a heuristic, not an algorithm. It says that you are likely to enjoy the recommendation, not that you are guaranteed to love it. It will find, at scale and more quickly than any human, the features of a movie that would make it more likely to be something you will enjoy. The success of Netflix and this feature is a testament to the value of using AI to convert a mystery to a heuristic.
But AI can also take a heuristic and push it to an algorithm. The traffic app Waze exemplifies this. All work commuters have a favorite route to and from work, and that is their algorithm for zero traffic. They can deduce that algorithm with a modest amount of experimentation. However, when there appears to be a traffic backup due to construction, accidents, or unusually high volume, they have heuristics – try this or that. They don’t have an algorithm that ensures they get there fastest given the vagaries of traffic patterns on a given day and time, just their best guess based on experience. But Waze has an algorithm. With real-time data and AI, Waze provides the driver with an algorithm for the optimal route – at that precise point in time.
The Dark Side of AI
While converting mysteries to heuristics and pushing heuristics to algorithms are extremely valuable functions for AI, there is a dark side, which stems from the process AI uses to infer the heuristic or algorithm. Frequency is AI’s metric. To provide a requested answer (to a prompt), AI doesn’t seek the most outstanding occurrence. It seeks the most frequent occurrence. That is, if you specify a set of strategic circumstances and ask AI what the best strategy in that particular situation is, you will get the choice that is most frequently mentioned as an excellent strategy, not the best strategy. It has no way to ascertain the best strategy – other than the modal answer.
So, if you have a question for which the desired answer is the modal or mean or median answer, AI will get it for you faster, cheaper, and more accurately than a human can. However, if you have a question for which the desired answer is far out on the right tail of the distribution, AI won’t even attempt to supply it.
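To make the modal-versus-tail point concrete, here is a minimal sketch in Python. The scenario and all names in it are invented for illustration: an answering system that works purely by frequency will always return the most common answer in its corpus, and a rare, right-tail answer – however brilliant – can never win.

```python
from collections import Counter

# Hypothetical corpus of answers to "what is the best strategy here?"
# The conventional answer dominates by sheer frequency; the outstanding
# answer appears only once, far out on the right tail.
corpus = (
    ["cut costs"] * 60                 # conventional wisdom: most frequent
    + ["expand the product line"] * 30
    + ["redefine the category"] * 1    # the rare, outstanding answer
)

def frequency_based_answer(answers):
    """Return the modal answer -- what a frequency-driven AI would pick."""
    return Counter(answers).most_common(1)[0][0]

# The modal answer wins every time; the tail answer is never selected.
print(frequency_based_answer(corpus))  # -> cut costs
```

The point of the sketch is not how real LLMs are built, but the structural limitation the piece describes: a system whose metric is frequency can only ever surface the mode of its distribution, not the exceptional outlier.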
Stated another way, AI has a natural Where-to-Play/How-to-Win (WTP/HTW). The key is to use it in its natural WTP so that you can benefit from its strong HTW there. But if you utilize it outside its WTP, you will be sorely disappointed – though that disappointment may take a while to become evident to you.
Reader Reaction
Reader reaction to the piece was quite extensive and mainly positive – consistent with its position as an all-star. The greatest interest was in the mode/mean versus tail point on AI capability. It seemed to resonate with readers and help them think about optimal AI use – that made me happy.
The other issue that generated conversation was my admonition to develop a personal heuristic. I had made that argument long before AI became a thing. It is one of the two greatest generators of personal value in the world. How much value it generates depends on the level of capital applied on top of it. Private equity is the greatest example. If you have a heuristic for picking undervalued companies and turning them around, and you get limited partners to put billions of dollars behind your heuristic, you will become a billionaire.
The other premier value creator is turning a mystery into a heuristic. If the mystery is “how are people going to want to communicate in the Internet age” and you solve it with the answer “social media that looks like (say) Facebook,” you get to make $100+ billion – with or without AI.
But in the age of AI, if you personally don’t have a valuable personal heuristic, you will be replaced by AI – full stop. Because of that strong view, readers were interested in advice on developing a personal heuristic. That is something to which I might turn my attention in a future original piece.
For those readers who have enjoyed this and the other AI-themed PTW/PI pieces, you will be happy to know that next week, I will be taking a one-week break from the All-Star series to publish my first Year VI original PTW/PI – on the AI Augmented Enterprise. It will be the third piece with my colleagues Ahmad Zaidi and Gui Loureiro, with whom I did Year V pieces Strategy, Artificial Intelligence & Entry Level Hires and A Leader’s Role in Fostering AI Superpowers. Look for our new piece next Monday!
Without further ado, the original article…
Chapter Three
Companies are spending wildly on Artificial Intelligence (AI), understandably because AI has already impacted business plenty and, arguably, is only getting started doing so. In response, I have decided to write a two-piece series on AI. The first Playing to Win/Practitioner Insights (PTW/PI) piece will discuss a way to conceptualize the role of AI in business and strategy, and the second will suggest how to think about company investments in AI. This one is Strategy & Artificial Intelligence: A Story of Heuristics, Means, and Tails.
AI & the Progress of Knowledge
I will focus on the AI manifestation that has gotten businesspeople simultaneously giddy and worried, which is large language model-based generative AI (let’s call it LLM/AI) — as epitomized by, though certainly not limited to, ChatGPT. There are, of course, many types of AI, some older, some newer, some yet to come. But I will let others discuss AI classification systems. And bear with me, I need to go on a longer-than-usual detour to get to the way to think about LLM/AI.



In reading through the original piece and thinking about the progress of knowledge from Mystery —> Heuristic —> Algorithm, and the Specific vs. General Knowledge distinction, I’m reminded of what I uncovered reverse-engineering Gong’s strategy across the five boxes of the Strategy Choice Cascade: https://michaelgoitein.substack.com/p/how-gong-created-a-725b-opportunity
Gong’s breakthrough “Where to Play” was to target Revenue Leaders, a category they created and continue to dominate. They managed to turn the heuristics of great sales into a coachable algorithm.
The other piece that’s interesting about Gong is the compounding nature of sales performance data. Revenue insights not only improve the performance of a company’s entire sales team; those insights also get abstracted and fed back into improving Gong’s platform-level algorithm.
That’s a compounding competitive advantage that passes the “Can’t/Won’t” test. Competitors can’t easily recreate Gong’s compounding data set, and won’t give up focusing on improving individual sales rep performance.
The good news is that AI will be very helpful to prepare strategic plans (e.g., populating in a few seconds a vast range of frameworks, starting of course with a SWOT matrix).
The bad news is that if strategic planning merely becomes faster and cheaper, it'll be as useless tomorrow as it is today :-(
When you digitize a shit** process, you end up with a digitized shit** process (old process redesign motto)