Reasoning and reflex
Photo credit: Anthony Duran via Unsplash
I didn’t learn to drive a car until I was in my mid-twenties, and I found it difficult. It took me three attempts to pass my test, and a lot longer before I was a confident driver. But, as well as teaching me how to drive, the experience taught me the difference between reasoning and reflex: the difference between knowing what you should do, and doing it automatically due to muscle memory.
Later, I learnt the management lesson that, when we acquire new skills, especially those with a physical component, we pass through phases of unconscious incompetence (we don’t know what to do), conscious incompetence (we know what to do but we can’t do it), conscious competence (we know what to do, and we can do it when we think about it), and unconscious competence (we know what to do, and we do it automatically).
I think that this model of learning can help us think about AI and when to use it. One of the temptations with AI is to let it become a golden hammer: the answer to every problem. Need to write an email? AI. Need to plan a meeting? AI. Need to build some code? AI. Need to check eligibility for a mortgage? AI. And so on.
Given that AI models can respond to natural language, and natural language is our medium of thought and discourse, it is hard to think of any problem we could not describe in a way that would let an AI give us a response. It seems that all our work could become the work of describing tasks in a form which we can feed into AI models.
And yet, I think we should remember the difference between reasoning and reflex.
Computing has been a discipline of compromise since its earliest days. We have had to choose between accuracy and speed, between ease and performance, between fidelity and storage. Every generation of architects, designers and operators has had to accept that they cannot have everything, everywhere, all at once – they have to make choices, and those are usually choices about resources.
Even though many AI models are hidden behind layers of abstraction, they still consume resources: power, money and time. Their inaccuracies also consume the resources required to detect, correct and repair them. Making a call to an AI model is significantly more expensive, in all these dimensions, than running some lines of standard procedural code.
That means that, as we learn to design and build systems which incorporate AI, we will need to figure out how to distinguish between those times when it’s worth invoking a model, and those times when it’s best to just write some code. For example, in the world of banking, when processing a mortgage application, it is probably better to engage with a customer in natural language, using a model, than to force them to squeeze their needs and hopes into a structured form. But when calculating interest payments, it is better simply to perform the calculation using a traditional program than to ask a model to work out how to do arithmetic.
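To make the trade-off concrete, here is a minimal sketch of that kind of routing. All the names are hypothetical (this is not a real banking system, and `call_language_model` is a stand-in for an actual API client): deterministic arithmetic stays in plain code, and the expensive model call is reserved for unstructured language.

```python
# Illustrative sketch: route deterministic work to procedural code and
# reserve the (expensive) model call for unstructured language.
# All names are hypothetical, not from any real banking system.

def monthly_interest_payment(principal: float, annual_rate: float) -> float:
    """Interest-only monthly payment: simple arithmetic, no model needed."""
    return principal * annual_rate / 12

def call_language_model(prompt: str) -> str:
    # Placeholder for a real model invocation.
    return f"[model response to: {prompt}]"

def handle(request: dict) -> str:
    if "free_text" in request:
        # Unstructured language: worth the cost of a model call.
        return call_language_model(request["free_text"])
    # Structured, deterministic task: a few lines of ordinary code.
    payment = monthly_interest_payment(request["principal"], request["rate"])
    return f"Monthly interest payment: {payment:.2f}"

print(handle({"principal": 250_000, "rate": 0.05}))
# Monthly interest payment: 1041.67
```

The point is not the arithmetic itself but the branch: the decision about which path a request takes is exactly the design work the paragraph above describes.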
This optimisation won’t just apply to the use of AI models: we will also have to decide where human reasoning adds most value to the lifecycle. Just as calling an AI model is more expensive than calling some procedural code, human thought is more expensive - and more valuable - than automated processing. It will become part of our jobs to determine where it is best applied.
Whenever we invoke a model, whenever we ask a human to do some thinking, we are choosing to incur the cost and get the benefits of reasoning - or, at least, something that looks like reasoning from the outside. Sometimes this is the right thing to do: when we are solving problems, or when we need to connect with another person. But sometimes it is better to rely on reflex - physical or mental muscle memory which we have learnt, internalised or made automatic - or classical procedural instructions encoded into binary and executed at the speed of electrical impulses.
We are still learning how to use AI. Part of the journey will be choosing between conscious competence and unconscious competence, and the techniques we use to achieve each - all while attempting to consign every form of incompetence to the development and testing cycle.