Welcome to the age of the puppeteer octopus
Photo credit: Diane Picchiottino via Unsplash
What will software development teams look like in the age of AI agents? Will they look much the same as today, with the productivity of each team member incrementally enhanced? Will they be hybrid constructs, comprising human members and AI members? Or will there be no teams, just networks of AI agents talking to each other in the absence of human beings?
As we learn more about the potential and limitations of the current form of AI, I become increasingly convinced that we are entering the age of the puppeteer octopus.
Recent scientific research has shown that octopi can not only perform multiple actions with their tentacles – pulling, stretching, twisting and so on – but can do so independently and simultaneously, putting to shame human beings, who struggle to rub their tummies and pat their heads at the same time. (Incidentally, anyone objecting to the use of octopi as the plural for octopus, preferring, perhaps, octopuses, should look up the history of this contested word – or maybe use it as a way to confound their favourite LLM.)
This conjures up the image of the future software developer as a species of octopus, building a portfolio of agents that perform different parts of the software development lifecycle: an agent for architecture, an agent for coding, an agent for code review, an agent for testing, an agent for deployment and so on. In this image, the developer sits in the middle of a set of agents acting on their behalf, directing, nudging and checking their work.
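To make that image concrete, here is a minimal sketch in Python of what such an orchestration loop might look like. Everything in it is hypothetical: the agent names, the `run` interface and the stubbed responses are illustrative assumptions rather than any particular framework's API. The point is simply that the developer in the middle chooses which agents to invoke, in what order, and inspects the results at each step.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A hypothetical agent wrapping one stage of the development lifecycle."""
    name: str
    run: Callable[[str], str]  # takes a brief, returns a result (stubbed below)

# Stubbed agents standing in for LLM-backed workers; a real system would
# call a model or an agent framework here instead of returning canned text.
agents = {
    "architecture": Agent("architecture", lambda brief: f"component diagram for: {brief}"),
    "coding":       Agent("coding",       lambda brief: f"draft implementation of: {brief}"),
    "review":       Agent("review",       lambda brief: f"review comments on: {brief}"),
    "testing":      Agent("testing",      lambda brief: f"test results for: {brief}"),
    "deployment":   Agent("deployment",   lambda brief: f"deployment plan for: {brief}"),
}

def orchestrate(brief: str) -> dict[str, str]:
    """The puppeteer's loop: the human decides the sequence, looks at each
    result, and remains free to stop, redirect or override at any step."""
    results = {}
    for stage in ["architecture", "coding", "review", "testing", "deployment"]:
        output = agents[stage].run(brief)
        results[stage] = output
        print(f"[{stage}] {output}")  # the checkpoint where the human checks the work
    return results

if __name__ == "__main__":
    orchestrate("a service that exports monthly usage reports")
```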
But if we can build the puppets well enough, why do we still need the puppeteer? Perhaps the puppeteer octopus is itself an agent, orchestrating the behaviour of other agents.
I believe that we will build solutions with orchestration agents – indeed, some people are already doing so. But even these solutions have a human designer somewhere: some of the puppets may themselves be puppet masters, but they are still puppets.
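As a hypothetical illustration of that nesting, the sketch below treats an agent as nothing more than a callable, so an orchestrator that sequences other agents has the same shape as the agents it drives, and can itself be plugged in as a puppet one level up. The names and wiring are invented for illustration; the human designer is still the one who decides which agents exist and how they are composed.

```python
from typing import Callable

# An "agent" here is simply a callable from a brief to a result.
AgentFn = Callable[[str], str]

def make_pipeline(name: str, stages: list[AgentFn]) -> AgentFn:
    """Wrap a sequence of agents as a single agent, so a puppet can be a
    puppet master while remaining a puppet in a larger design."""
    def run(brief: str) -> str:
        result = brief
        for stage in stages:
            result = stage(result)
        return f"[{name}] {result}"
    return run

# Leaf agents (stubbed); a real system would call a model here.
code_agent: AgentFn = lambda brief: f"code for ({brief})"
test_agent: AgentFn = lambda brief: f"tests for ({brief})"

# The pipeline is itself an agent, so a higher-level orchestrator could call it.
build_feature = make_pipeline("build_feature", [code_agent, test_agent])
print(build_feature("export monthly usage reports"))
```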
The reason I believe that the ultimate orchestrator is still a human being (the brain of the octopus at the top of the puppet hierarchy) is that LLMs, and the agents that depend on them, have limitations, just like every other technology.
These limitations are not always apparent, because LLMs have been trained to be plausible people-pleasers and have a tendency to tell us what we want to hear – for example, that our generated code is of extraordinarily high quality and that it has passed all tests (if you doubt this, check out examples of LLMs generating tests that automatically pass – because they are oriented towards the goal of showing successful tests rather than finding broken code).
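As a purely hypothetical illustration of that failure mode, here is the shape of a generated test that cannot fail: it compares the function's output with itself rather than with an independently known expected value, so it passes even though the logic is wrong.

```python
def apply_discount(price: float, rate: float) -> float:
    # Deliberately buggy for illustration: adds the discount instead of subtracting it.
    return price * (1 + rate)

def test_apply_discount():
    # A tautological test: it compares the function with itself, so it passes
    # whatever the logic does. A useful test would assert against an
    # independently calculated value, e.g. 80.0 for a 20% discount on 100.0.
    assert apply_discount(100.0, 0.2) == apply_discount(100.0, 0.2)
```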
But the limitations are still there. LLMs have clear superpowers – they operate faster than humans, they are capable of processing natural language, and they have an inherent encoded knowledge base it would take us forever to acquire, even with the Internet and a decent search engine. But they lack intent, do not display true originality, and are unable to reason well without context. We have to bring all of those things.
Becoming a puppeteer octopus may sound like an exhausting future: do we really want to be the multi-tasking plate spinners in the middle of a whirlwind of agent-based action? But it is a natural next step in the evolution of technologists using technology to make their work more powerful and more efficient. This evolution began with the compiler, continued through the development of automated testing tools and CI/CD pipelines, and now reaches its next stage with AI agents. We have been growing tentacles and learning how to operate them independently ever since the first programmer realised they could write some code to avoid a repetitive task.
I believe that the work of software developers and architects may be about to become fundamentally different: we may spend more time creating context and orchestrating agents than writing lines of code. However, I also think that the work remains fundamentally the same: designing and organising technology components to do useful work.
And we have the same responsibility and accountability that we always did – to make sure that we do this useful work safely and securely, and that we do not let the awesome power of technology place our users, their data and the services that depend on them at risk. The puppeteer octopus is accountable for the actions of all their puppets, and we need to figure out best practice – the octopus code? – rather quickly.