Leading with integrity: ethics does not equal compliance
‘This is the most important programme in the world because the regulator says so.’
It’s a regular ritual in any large corporation: reviewing the proposed change portfolio for the coming year. The list of proposals is always longer than the company can afford, and, even if it could afford to do everything, there are only so many things it can do. Naturally, everyone proposing a change programme tries to make it sound as if it is more vital than all of the others: it is the one thing that will cut costs, boost revenue or protect the organisation from existential threats.
In regulated industries, such as finance, utilities and healthcare, one of the strongest justifications offered is compliance. The idea is that, if the change is required by law or regulation, the company is obliged to do it: there is no choice.
This leads to loads of proposals that open with statements similar to the one at the top of this article, or something like ‘We must improve our ability to treat customers fairly to comply with treating customers fairly regulation ABC123,’ or ‘We must reduce the risk of service disruption in order to meet operational resilience regulation XYZ456.’
I’ve reviewed lots of investment proposals, and I am always disheartened by statements like this. I don’t blame the people submitting them: they want to get their programmes funded, and they are doing what they have been told to do. Nor do I regret the level of investment made in regulatory compliance: there are good reasons that regulated industries are regulated (and, usually, whenever they are deregulated, they prove the case for regulation once again).
What I regret is that companies feel compelled to justify investments in changes that they should be making anyway by saying that they are being forced to do them. It would be much better to read proposals which said, ‘We must improve our ability to treat customers fairly because we believe in treating customers fairly,’ or ‘We must reduce the risk of service disruption because we owe our customers services that they can rely on.’ But perhaps those proposals would not win support.
This year, and in coming years, change portfolios will be full of proposed investments in AI. In most industries, in most of the world, these proposed investments won’t be justified on the basis of regulation or law, because those regulations and laws have not yet been defined or enacted. And, if the field of AI continues to develop at its current pace, then, even when those regulations and laws are defined, they won’t address the latest generation of AI: innovation will outpace regulation.
This leaves people who aspire to be enterprise AI leaders with a choice. They can align their plans with laws and regulations that don’t yet exist, in effect assuming that they are allowed to do whatever they want because nobody has told them that they can’t. Or they can look elsewhere for guidance and attempt to act in accordance with that guidance.
Unsurprisingly, I believe that they should do the latter. And I believe that this guidance comes in two forms.
Firstly, it comes from experts who have been thinking about these problems for some time: ethicists, philosophers, activists, lawyers, regulators and policy professionals. AI, like all new technologies, presents us with novel problems which we (and the cultures and societies to which we belong) have not yet had time to think through, or to develop robust moral instincts and reactions to. We would do well to consider what has already been thought, said and written - even if we disagree with it. (And considering means genuinely considering: this is a problem which we must apply to the grindstone of human reason, not leave to AI summaries.)
Secondly, it comes from ourselves. Even if we have a tendency in corporate life to rely on formal rules - policies, procedures, law and regulation - to tell us what to do, we have rich moral and ethical lives outside work. We find it hard to look at the world, to read the news, to live with others, without passing judgement, without feeling the engagement of our ethical instincts. Many organisations encourage their staff to ‘bring their whole selves’ to work. It’s not always clear how sincere they are, or whether they are inviting people to bring their entire moral selves, but I believe that that is what we should do anyway, particularly when navigating complex, difficult fields such as AI, with so many potential consequences for so many people.
There are moments when leaders must figure out what they stand for, how they lead, where they choose to invest, and who they choose to work with. Law and regulation won’t tell them the answer. The arrival of AI is one of those moments.