I’m David Knott. I’ve been working in enterprise technology for over forty years and I’m still learning. This blog is based on mistakes, failures, lessons and some things I find interesting:
There's always a bigger goat: don't let big problems stop you solving smaller problems
In the story of the three billy goats gruff, the goats want to cross a bridge guarded by a troll. They manage this by each telling the troll that there is a bigger goat just behind them until (spoiler alert!) the biggest goat comes along and butts the troll into the sky.
Sometimes, when we are trying to make the case for enterprise technology capabilities, it feels like we are the trolls, and that we are so scared of the biggest billy goat that we won’t tackle the smaller goats. When we look across our technology landscapes, we see mess, waste and mayhem, and wish that we had some of the foundational capabilities that would help clean things up. Yet we hesitate, because we know that every time we build something we will uncover another problem, and another problem, and another problem, until we get to problems that are so big that we cannot imagine how to solve them.
Which are more dangerous: slides, or sticky notes?
The language illusion, doubled
Is programming a computer more like language or more like maths?
Neither, it turns out. In recent research, neuroscientists at MIT conducted brain scans of programmers while they were trying to solve problems, and discovered that, rather than engaging the language centres of the brain, they engaged a system known as the multiple demand network, usually used for complex problem solving.
Programming languages, it seems, are not the same as ordinary languages. This is not new news. In the earliest days of programming, when Grace Hopper was inventing high-level languages, she and her team sent versions of their code to their bosses in French and German. The bosses sat up and paid attention: was it possible that their computers had suddenly learnt to speak foreign languages?
Technologists are always crying wolf (because of all the wolves)
The computer had failed. Unfortunately, it was the Apollo Guidance Computer (AGC), the machine that controlled the flight of a small, fragile spacecraft to the Moon and back. Fortunately, it wasn’t in space: it was on the ground, in a simulator.
Margaret Hamilton, the leader of the MIT team programming the AGC, often had to work weekends to meet the urgent schedule of the Apollo programme, and sometimes brought her daughter, Lauren, to work with her. Lauren liked to play in the simulator.
Coping with volatility: don't panic; seek truth; release frequently
If you’re in the last stages of a multi-year digital delivery programme, then you probably feel frazzled. That’s the normal condition of late-stage programme teams. If your programme has coincided with the last five years (five-year digital delivery programmes are still a thing), then you must feel frazzled to a historic degree.
It’s more complicated on the inside than it is on the outside
We don’t need time machines to create paradoxes in technology: they are built into the way we work. One of these paradoxes is that the simpler technology appears on the outside, the more complicated it is on the inside.
I was reminded of this recently when talking to someone who confidently told me that the more sophisticated AI models get, the easier they will be to use, for technologists as well as end users. AI would solve its own skills problem. I was surprised by this because, to me (and, I expect, to most other technologists), while we understand how natural language interfaces can radically simplify the experience for end users, the introduction of the current wave of AI into our architecture makes it more complicated.
Learn to fail fast? Technologists fail all the time
From time to time, organisations attempt to learn new ways of working. They attempt to become digital or agile or data-driven or innovative. These attempts come with some familiar ideas: that we should execute through cross-functional teams who are empowered to experiment. One of these ideas is that we should not be scared of failure, and that we should learn to fail fast.
These attempts sometimes elicit eye rolls from the technology teams, especially the idea that we should embrace failure. This is not because these ideas are invalid: in fact, they are welcome to technology teams, and reflect their preferred ways of working. However, technologists have a different relationship with failure than non-technologists.
Are LLMs the air fryers of AI?
Do you know someone who got an air fryer for Christmas? Or did you get one yourself?
If you know someone who got an air fryer, then there’s a high chance that you have heard all about it, and how it has been a complete game changer. They can cook things in a fraction of the time it used to take! And it’s not a fryer at all - it’s really a mini-oven! If you got an air fryer yourself, then there’s a chance that you’ve used it for everything, and that, even now, you are thinking about what you could use it to cook next.
I don’t have an air fryer myself, but am old enough to remember when my family first got a microwave. We lived off jacket potatoes for at least a week, and tried microwaving many things that should not be microwaved (there’s a reason that roasts are called roasts). Eventually, we found, just as my friends with air fryers seem to be finding in the weeks after Christmas, that, while the microwave is a useful tool to have in the kitchen, it’s not the only answer, and certainly not the best answer for everything.
On the 2025 to-do list: figure out AI agents
Recent years have seen waves of AI innovation breaking faster than we can figure out good practice. Organisations around the world are working hard, not only to find ways to put AI to work, but to do so safely and responsibly. The AI to-do list often seems to grow faster than we can strike items off it - but the only route to good practice is practice.
The advent of AI agents promises to add more items to the to-do list. The AI agent wave started cresting in 2024, and will break in 2025. Several major technology vendors and platforms already offer their customers the ability to build, configure and operate AI agents in an enterprise context, and the ability for consumers to build agents, or to subscribe to existing agents, cannot be far behind (indeed, it is likely that, by the time this article is published, it will already be happening).
Thinking differently about… machine learning
Have you ever been introduced to someone then, five minutes into the conversation, realised that you can’t remember their name? If you have never had this experience, then you have a better memory than mine. Whenever that happens, it feels as if you have a window of acceptable ignorance - a period during which it’s embarrassing but not disastrous to admit your lapse of memory. But, as time goes on, you can feel that window expiring: it becomes more and more awkward to ask the person’s name.
It can feel like this in enterprise technology too: we hear about new technologies, trends and terms every day, and there’s a period during which it seems fine to admit that you don’t understand, and to ask people to explain. But then the new concepts are everywhere, and everybody seems to be using them with confidence. How did you get left out? Is it okay to say ‘I don’t understand’ now, or is it too late?
I have to admit that I felt like this for a while with the concept of machine learning.
To lead others, it helps to understand yourself
The best questions are those which make you think. I had the chance to talk to a group of emerging leaders this week, and was intrigued to be asked the question of whether self-awareness was something I had consciously worked on in my own development as a leader.
The time it took me to come up with an answer might be taken as a signal that I’m not particularly self-aware. However, the truth is that self-awareness is something that I have consciously worked on throughout my career. I can say with confidence that in the very earliest stages of my career, as a young software developer, I was self-conscious (in the sense that I was painfully shy and uncomfortable in my own skin) but not particularly self-aware (in the sense that I had limited understanding of my own areas of strength and weakness, of what motivated me and what didn’t, and of what effect I had on others).
Risk Management: the often overlooked dimension of Digital Transformation
I have to confess that I’m not good at keeping New Year’s resolutions: I find it hard to change my behaviour on the basis of a single decision that happens to coincide with the end of the year. The ubiquity of broken New Year’s resolutions also seems to give me permission to break my own. But this year will be different! I’m going to try something that I hope will not be too hard and will be of some interest: to share some thoughts about Digital Transformation which have been bouncing around my head for a while - and through sharing them, give them form.
As these thoughts form, I expect they will fall, more or less, into the classic people, process and technology triad. The one difference is that, when we talk about Digital Transformation, I think that we should talk about practices rather than processes. This might seem like a fine semantic point but, to me, a process is something that people follow (the process is in charge), whereas a practice is something that people do (the people are in charge). Practices seem to fit a world of autonomy, skill and expertise - people doing the work that cannot be done by machines.