Standing on the edge of uncertainty

Photo credit: Ivan Aleksic via Unsplash

The practice of software development is being disrupted.

This might not seem like a new phenomenon. Software development is a practice which seems to be subject to continuous change, in an industry which prizes disruption. Over my own career, I have moved from a world in which code was designed according to heavyweight, structured methods, before being written using bare, basic editors and tested through intense manual effort, to a world in which code is designed, developed and released in short sprints, written in sophisticated development environments, and tested through an automated pipeline.

However, even though programming is not typing, the activity of writing code has been remarkably stable since the invention of high-level languages: assembling logical constructs to make the computer do what you want it to do, and then putting one line after another until you are done. And then debugging it until you are actually done. And then continuously improving it, because you are never really done.

Now, though, the emergence of new AI tools promises to change this activity at a fundamental level for the first time in decades. AI assistants are doing more than accelerating the writing of lines of code: they are penetrating the design, architecture and testing parts of the lifecycle.

And the impact of these tools already seems to be manifesting in the jobs market – if you believe headlines and press releases. Tech companies are reporting layoffs, and claiming that they will be hiring fewer developers in the future. Potential students are thinking twice before studying computer science, and new graduates are concerned for their job prospects.

What does this mean for the future of software development, and the work that surrounds it? What should we advise someone contemplating entering the profession today?

I don’t know all the answers, and am aware of the danger of making predictions, but I can share three thoughts which I currently believe to be true.

The disruption is real

I do not believe that we will vibe code our way to complex, production applications connected together in sophisticated architectures which remain stable and agile in production for many years (and I am not convinced that anybody really does believe this).

However, I do believe that it is easy to imagine a developer assisted by a set of agents which help turn user needs into designs, designs into prototypes, and prototypes into reasonable code – as well as helping build tests, pipelines and monitoring tools. I think that the workflow of such a person will look different to that of current developers, and that they may achieve an order of magnitude more productivity.

But . . .

The work is (still) hard

If you’ve used any of the popular AI assistants, particularly those which can be used to build working prototypes from a few prompts, you have probably had the same experience as me: initial delight and wonder at what you can achieve quickly, followed by dismay when you look at the underlying code, followed by a realisation that, to use these tools effectively, you will need to get serious. That is, you will need to say rather more than ‘build me an app’, and you will have to be rather more precise about your architecture and coding standards. Indeed, you will probably have to be more precise than you are when working with humans.

Furthermore, you realise that the idea that the toolchain is part of the architecture is more relevant than ever. When you use AI assistants, the models, the tools, the prompts, the documents which those prompts reference, all become part of the solution – and become more things to be managed.

This means that, however accelerated the work is, however productive developers become, the business of building high quality software products will remain difficult and complex, will continue to depend on our ability to think clearly and critically, and will continue to need a deep understanding of the full stack – a stack which just got deeper.

We can be assured that there is plenty of work to come, not least because . . .

The demand is elastic

This is not the first wave of developer productivity tools. We have been automating the work of development and systems management since the earliest days of computing: the compiler was originally a labour-saving device. And yet, the amount of work to be done and the number of people we need to do that work continues to increase.

This is because computers are general purpose devices, and we will always find more work for them to do. Most organisations are a long way from digitising all of the work which they do today – and those which are fully digitised continuously expand the boundaries of what they do.

In traditional organisations, our expectations of the amount of digital change we can achieve have been set by the constraints in the system: bandwidth, resources, change slots and, above all, money. I am confident that if we can use new tools to shift these constraints, or to do more within them, we will find that the work grows to fill the pipeline.

So, what is the best advice to someone contemplating entering the digital profession today? I think that it is the same that it has always been: learn. Learn to code because that is the only way to judge the output of your tools. Learn the full stack because the full stack still exists, and is getting deeper and more complex. Learn the new wave of AI tools (and the wave after that) as you would learn any tool: as something which demands expertise and respect, which can make your work better when used well, and can cause mayhem when used badly.

If you can do all of those things, then there is plenty of work to do – not least shaping the future of software development.
