Lessons from forty years of Excel: if you give people tools, expect them to be used
One of the anniversaries I missed last year was that of Microsoft Excel, which turned forty in August.
I think that anybody who has worked in enterprise technology over any part of those four decades will have mixed feelings about Excel. On one hand, they will be grateful for a flexible tool which almost everybody has access to, which most people know how to use, and which can be used for modelling and forecasting without the need to run big projects or write complex programmes. On the other hand, they will remember the times when a major upgrade was delayed because of a set of fragile, convoluted macros, when a business-critical operation depended on a spreadsheet which only one person understood, or when they were asked to ‘just’ take the logic embedded in a spreadsheet and turn it into a system that worked for the whole company.
Whatever their feelings about Excel, it has been a constant feature of the landscape for most CIOs, CISOs, CTOs and architects working today: a mundane, ubiquitous workhorse of a tool which nevertheless compels as much attention as the largest mainframes or distributed systems.
We are now in a time when enterprise technology teams are making a new generation of tools available to end users, in the form of AI assistants, agent builders and embedded AI. It feels similar to the period when we took away people’s green-screen terminals and replaced them with desktop PCs with GUIs, mice and integrated productivity suites: we have faith that these new tools are going to be useful, but we’re not exactly sure how they’re going to be used.
I think that we can learn some lessons by looking back at our experiences with Excel and other tools - especially as it seems that we have had to learn those lessons more than once.
You can’t predict (or constrain) what people will use tools for
When I was first involved in deploying spreadsheets and other office tools, we offered training on the basics of how the tools worked, and on the types of uses to which the tools could be put. People found the first type of training useful (and some came back to it several times to get to grips with the less intuitive behaviour of some tools), but most ignored the second type. They didn’t care about the IT department’s opinions on how spreadsheets could be used for managing budgets or forecasting sales: they understood the vocabulary and concepts of their domain, they knew what business problems they needed to solve, and they set about solving them. Spreadsheets ended up being used for purposes which the IT department not only failed to anticipate, but didn’t even know existed.
As organisations deploy new AI tools, they should learn the lesson that it is helpful to show people how the tools work, but less helpful to show them what to use the tools for. Users will figure it out.
They will become mission critical faster than you expect
In those initial deployments - and many other subsequent projects to migrate between versions of tools, or to manage and secure data - we also tried to tell people which types of data and functions were appropriate for spreadsheets and other end user tools, and which were inappropriate. That guidance was broadly useless, as it turned out that end users were ingenious and intensely pragmatic: if it was useful to get data into a spreadsheet, then that data would find its way into a spreadsheet. If a spreadsheet could help support a critical business function, then it would soon have a role in that function.
This does not mean that organisations should not have policies, standards and guidance about data management: they absolutely should. But they should also assume that policies, standards and guidance will not be enough: they need to be backed up by management and controls which provide assurance and continuity without compromising creativity and practical use. This is a hard problem, and one that many organisations ignore.
As organisations deploy new AI tools, they should expect them to become business critical within a few days of deployment, and to handle data which they never expected them to handle. Rather than just defining policies which tell people what they should and shouldn’t do, they should invest in the management and controls to treat these tools, and their use, as an important part of their estate.
People will spontaneously become centres of coaching and expertise
When we initially deployed Excel (and pretty much every other system that offered an end-user experience), we did our best to set up networks of champions, gurus and superusers, to whom others could turn when they needed advice. This was useful (and a standard change management technique), and I would do it again.
However, while the consciously created support network helped people who were struggling, it was far less valuable than the organic, informal network that sprang up as some people became experts in the tools and others turned to them for advice. IT departments sometimes regard such people with suspicion, as they are the most likely to use tools in unexpected ways, and to achieve a level of knowledge which surpasses the IT department’s own - but life goes much better when they are embraced.
As organisations deploy AI tools, they should expect experts to emerge, and to exceed the IT department’s own level of understanding and virtuosity. They should be mature and humble enough to embrace these people, and to learn from them.
AI tools may be new, but the cycle of deployment, innovation, use and expertise is not: we have seen it repeatedly with open-ended, general purpose tools such as spreadsheets. We should expect it again this time round - and make sure that we are ready for it.