I’m David Knott. I’ve been working in enterprise technology for over forty years and I’m still learning. This blog is based on mistakes, failures, lessons and some things I find interesting:
A few phrases to help resist AI illusions
‘It can’t hurt you: it’s not real!’
You might hear those words from the hero of a horror, science fiction or fantasy film. They could be walking through a dream world, subject to a hallucinogenic drug, or under the spell of a sorcerer. They know that the things they are seeing are not real, and that all they have to do is try to ignore what they think they can see and hear. Telling themselves that what they are experiencing is not real is a guard against fear, against stepping off the path, or, worst of all, against the temptation to talk back to the illusions.
Dealing with current forms of AI can feel like this. Not just because AI is surrounded by hype, marketing, inflated expectations and a big dose of FOMO. And not just because AI can be used to produce fake videos, fake images and fake words.
Please pay attention to the safety briefing
What do you do when the air crew ask you to put down your books or devices and pay attention to the safety briefing? Do you follow their advice, because this aircraft may be different to those you have flown on before? Do you study the safety card when the briefing is over? Do you check that you know the location of the life jacket, under your seat or in the compartment next to you? Or do you zone out, diving deeper into the mental limbo that air travel induces, waiting for the moment when you can start reading, scrolling or checking emails again?
I expect that most of us regard the safety briefing as a dull but worthy formality, and don’t pay as much attention as we should. However, I also think that perhaps we should regard it differently, and learn some lessons about how we achieve safety in enterprise technology.
Do your approvals processes make it easier to do nothing than to do something?
Have you ever seen a project plan which is a victim of the approvals process?
You can usually tell when a plan has suffered in this way. There may be long gaps when nothing is happening, followed by frantic activity around a monthly or quarterly date. Or there may be design and planning work which is crammed into the plan far too early, in order to hit an approvals board. There may even be a whole part of the plan – and the team – dedicated to gathering data and writing requests for approval.
Technology people seem to hate approvals and love them at the same time. Nobody enjoys navigating their way through complicated and arcane processes where every signpost says, ‘Not this way,’ or ‘Try again.’ And yet we don’t seem to be able to stop ourselves from creating more processes: approvals to purchase, approvals to hire, approvals to release, approvals to change, and approvals to change the approvals process. I’ve certainly been guilty of implementing processes which seemed like a good idea at the time, but proved less so in practice.
Technologists are always crying wolf (because of all the wolves)
The computer had failed. Unfortunately, it was the Apollo Guidance Computer (AGC), the machine that controlled the flight of a small, fragile spacecraft to the Moon and back. Fortunately, it wasn’t in space: it was on the ground, in a simulator.
Margaret Hamilton, the leader of the MIT team programming the AGC, often had to work weekends to meet the urgent schedule of the Apollo programme, and sometimes brought her daughter, Lauren, to work with her. Lauren liked to play in the simulator.
Learn to fail fast? Technologists fail all the time
From time to time, organisations attempt to learn new ways of working. They attempt to become digital or agile or data-driven or innovative. These attempts come with some familiar ideas, such as executing through cross-functional teams who are empowered to experiment. One of these ideas is that we should not be scared of failure, and that we should learn to fail fast.
These attempts sometimes elicit eye rolls from the technology teams, especially the idea that we should embrace failure. This is not because these ideas are invalid: in fact, they are welcome to technology teams, and reflect their preferred ways of working. However, technologists have a different relationship with failure than non-technologists.
On the 2025 to-do list: figure out AI agents
Recent years have seen waves of AI innovation breaking faster than we can figure out good practice. Organisations around the world are working hard, not only to find ways to put AI to work, but to do so safely and responsibly. The AI to-do list often seems to grow faster than we can strike items off it - but the only route to good practice is practice.
The advent of AI agents promises to add more items to the to-do list. The AI agent wave started cresting in 2024, and will break in 2025. Several major technology vendors and platforms already offer their customers the ability to build, configure and operate AI agents in an enterprise context, and the ability for consumers to build agents, or to subscribe to existing agents, cannot be far behind (indeed, it is likely that, by the time this article is published, it will already be happening).
Risk Management: the often overlooked dimension of Digital Transformation
I have to confess that I’m not good at keeping New Year’s resolutions: I find it hard to change my behaviour on the basis of a single decision that happens to coincide with the end of the year. The ubiquity of broken New Year’s resolutions also seems to give me permission to break my own. But this year will be different! I’m going to try something that I hope will not be too hard and will be of some interest: to share some thoughts about Digital Transformation which have been bouncing around my head for a while - and through sharing them, give them form.
As these thoughts form, I expect they will fall, more or less, into the classic people, process and technology triad. The one difference is that, when we talk about Digital Transformation, I think that we should talk about practices rather than processes. This might seem like a fine semantic point but, to me, a process is something that people follow (the process is in charge), whereas a practice is something that people do (the people are in charge). Practices seem to fit a world of autonomy, skill and expertise - people doing the work that cannot be done by machines.
If you’re going to land on the Moon, at some point you have to land on the Moon
The recording of the Apollo 11 Moon mission, just before the Eagle lands in the Sea of Tranquility, never fails to make my hair stand on end. The tone of the astronauts is matter-of-fact, calm and professional: if you didn’t know the context then you would never imagine that they were engaged in one of humanity’s greatest endeavours.
And the more of the context you know, the more astonishing that air of calm becomes. Those last minutes before the landing were filled with alarms, unexpected behaviour from the lunar module, an overshoot of the planned landing site, and a search for a favourable place to land. By the time the module touched down, the craft had less than 5% of its fuel left, and was less than thirty seconds away from abandoning the mission. (This page gives a full transcript of the last 13 minutes before landing, and a great commentary on what was going on behind the words.)