Living with uncertainty: be ready to be wrong
Photo credit: Sarah Kilian via Unsplash
Are you ready to be wrong?
Let’s imagine that you have followed the first seven steps in this article about becoming an enterprise AI leader (and maybe even attended the course to be launched later this year). You have learnt the fundamentals of computing, examined and adapted your leadership style, developed your leadership team, figured out your values and ways to stick to them, built a robust supply chain, re-imagined your enterprise and found a practical route to implementation.
And then everything changes. A leading AI company releases a whole new category of model which promises capabilities beyond anything you have seen before. Or the same company burns through its investors’ money, fails to raise new funds and the models you rely on vanish from the market. Or powerful, highly optimised new open-source models are released which collapse the price of training and inference. Or companies come under pressure to recoup their capital investments and prices shoot up. Or a new form of AI comes to market, but is only available through startups with which you have no relationship. Or the startup with which you have built a partnership is acquired by a large, traditional technology provider which just wants licence fees and consumption commitments.
All of these outcomes and many more are possible. The field of AI seems to lurch and shift every day, as do the reactions of those attempting to follow it. In the face of such uncertainty, it might seem that the rational thing to do is to abandon all pretence of a long-term strategy and simply react to events. However, that would be a mistake.
Fortunately, AI is not the first wave of disruptive innovation to break over the world of computing. The very invention of computing was itself a disruptive innovation. There have been many more since: the personal computer, the Internet, e-commerce, mobile, cloud and so on. And, despite some exuberant claims that AI is somehow different in kind from all other forms of technology innovation (a claim that has been made for most other forms of technology innovation), I think that there are valuable lessons that we can learn from these previous waves of disruption.
These lessons include the ways that markets explode and consolidate in the face of new opportunities, the levels of patience or impatience that investors show when looking for a return on their money, and the pace at which traditional industries and enterprises can absorb change. At a more fundamental level, though, I think that the most important lesson is that, if you are going to try to make sense of innovative technology and integrate it into your enterprise, you have to be ready to be wrong.
This might sound obvious: we are, after all, dealing with a phenomenon which is difficult to predict, and which everybody will get wrong in some sense (however confident the pronouncements of AI commentators, vendors and researchers sound). But being ready to be wrong has deeper implications for an enterprise AI leader. The consequences of their errors are not just bruised egos, trashed opinions and a clutch of embarrassing posts on social media: they show up in the viability of their organisation, the prospects of their employees and the well-being of their customers. This means that they need to be ready to be wrong in two particular ways.
First, they need to show the humility to accept when they are wrong. This starts at the point of strategy setting and decision making when, even if they have invested in their own education and understanding, enterprise AI leaders need to seek expert help, and to listen when that expertise contradicts their expectations. And it continues into the execution of the strategy, when leaders must be prepared to acknowledge if that strategy has been invalidated by events they could not foresee. Changing strategies is painful, but continuing with doomed strategies is even more so.
Second, they need to take pragmatic steps to hedge against their own error. This might mean making investments which make sense in all circumstances: for example, getting their organisation’s data in order will be valuable, whatever course AI takes. Or it might mean preserving flexibility when there is no value in closing down options: for example, maintaining relationships with multiple strategic partners rather than betting on a single company. Each of these steps takes effort and attention, but each reduces the cost of being wrong later.
It is, of course, possible to go wrong by being too ready to be wrong. Leaders should seek expert advice, but should also expect that advice to be contradictory and confusing: they cannot follow it all. And they should not seek to be so flexible that they lapse into a jelly-like indecision, trying to keep all paths open at once.
None of that is easy, but I hope that anyone aspiring to be an enterprise AI leader knows that it is going to be hard.