It’s hard to ignore the hype surrounding Artificial Intelligence. If AI hasn’t been deployed within your organisation already, its arrival looks inevitable, as it does for most businesses. The decisions being made right now about AI within your organisation bring significant opportunities, such as cost savings, new insights into data patterns and enhanced customer service, but they also bring risks and vulnerabilities.
As a senior executive, you can’t simply stand on the sidelines and leave such a crucial innovation to the data scientists and technologists. Entrusting the purpose, strategy and values of an organisation to an AI requires the oversight of the Board and the Senior Executive team. So what are the questions you should be asking to manage risk, avoid potentially harmful liability and maximise the opportunities?
We recently held an event for senior HR leaders from FTSE 100 and 250 companies about the implications of AI as it increasingly makes its way into a wider range of businesses. The discussion was led by Robbie Stamp, CEO of Bioss, TEDx speaker, attendee at the All-Party Parliamentary Group on AI and member of the IEEE Working Group on Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems.
Robbie helped the senior leaders in attendance better understand the emerging relationship between human judgement and machine judgement and decision-making.
Robbie’s core premise was that non-technical Executives should ask one fundamental question – “what is the work we are asking the AI to do in relation to our strategy and our values?”
That means being clear about what the AI is being “tasked” to do, which part of the organisation’s strategy, purpose and values it will be “entrusted” with and how you are going to review the work it is doing.
We have a new set of “working relationships” to consider, especially as AI’s ability to manage more complex tasks grows at pace. At the same time, we need to guard against “magical thinking” about AI and not overestimate what it is capable of in any given area of the business.
The AI Ethics ‘Fallacy’
We are currently experiencing a fundamental shift for humankind. The advent of AI is different from previous technology disruptions.
We are entering an age in which an organic life form has created an ‘artificial’ intelligence capable of drawing on massive data sets to solve problems, make decisions and provide insights that in some cases human beings might not have had.
There is much discussion about “AI Ethics” but can AI itself really be ethical?
AI is not human. It cannot feel pain, shame, guilt or remorse, and it cannot be meaningfully sanctioned (punished). In systems built on the notion of Human Accountability (Boards, Governments), this is not a trivial concern as AI is given more and more work to do.
Secondly, the idea that we ‘know’ what ethical position we should programme or train into an AI is also contentious. For example, should the AI have a Christian, Islamic or perhaps a Confucian ethical perspective? A “Western Liberal” or an “atheist” perspective, perhaps? Reaching a globally accepted viewpoint about AI ethics will be a hard problem.
This leaves us for now with Ethical AI Governance by human beings.
AI Governance
An executive’s role is to deploy the organisation’s resources according to the strategy set by the Board, and it is for both Board and Executive to keep under review whether the organisation is “working” effectively to deliver its purpose and strategy, whatever those may be.
It is for this reason, if for no other, that the deployment of AI is too important to be left solely to your data scientists. Senior executives, and HR leaders in particular, need to be involved at each step to question, challenge and review both the opportunities and the potential limitations or liabilities of the “work” the AI is doing.
AI is already ushering in a new kind of ‘working relationship’ with individuals and teams, and that relationship will only deepen as AI’s capacity to deal with complex tasks grows.
So how do you ‘govern’ AI? (Interestingly, the word ‘govern’ comes from the Greek kubernan, ‘to steer’.)
Most people asked that question would reach for a rulebook. The problem with ‘Ten Commandments’-style rules, however, is that they are open to gaming, bias and exploitation, and they are likely to break in the face of rapid change. Given the uncertainty about the pace of change, it may be more useful to set certain key boundaries that the people deploying AI in an organisation keep under review.
The use of Artificial Intelligence enterprise-wide, and not just for HR applications, is an imperative that HR needs to be aware of and play a leading role in, informing and challenging the leadership and technologists. There are undoubted organisational benefits, but there are also broader implications and unintended consequences when AI is utilised in the business without due consideration of governance and of where the line is drawn between advisory and authority, or even abdication of control.
Lisa Gerhardt, Partner and Global HR Practice Lead, Savannah Group
Defining boundaries
So if a “Ten Commandments” rulebook isn’t the answer, how should you define the boundaries for an AI’s operation? Robbie Stamp and his company Bioss have developed a Governance Protocol with five elements. Each comes with a thought experiment to help you understand some of the implications of putting AI to work in your company.
1. Advisory
Would the AI work in your organisation in an advisory capacity? Does it leave space for human judgement and decision-making, where a human can decide whether or not to follow the AI’s advice?
It is easy to believe that “data” somehow has a purity to it that human “bias” does not. But beware of this kind of magical thinking. For example, a computer program deployed in the US penitentiary system advised that black people were almost twice as likely to re-offend as white people; it gave the “advice” it did because the data it relied on was built on patterns in society that were already problematic in relation to race. What biases could your legacy data skew towards?
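That last question can be made operational before a model is ever trained. Here is a minimal sketch in Python of the kind of pre-training audit a data team could run; the file and column names (legacy_outcomes.csv, group, outcome) are hypothetical, and the threshold is a policy choice rather than a statistical standard. The point is simply that base rates in legacy data can be inspected before a model bakes them in.

```python
import pandas as pd

# Hypothetical legacy dataset; file and column names are illustrative only.
df = pd.read_csv("legacy_outcomes.csv")  # columns: group, outcome

# Compare historical outcome rates across groups. A large gap here will be
# learned and reproduced by any model trained on this data.
rates = df.groupby("group")["outcome"].mean()
print(rates)

# Flag the disparity for human review before any training begins.
disparity = rates.max() / rates.min()
if disparity > 1.25:  # the threshold is a policy choice, not a statistical law
    print(f"Warning: outcome rates differ by a factor of {disparity:.2f}; "
          "review the data with domain experts before training.")
```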
2. Authority
Have you granted an AI authority, consciously or unconsciously, over any human beings? If you are at Uber or Deliveroo, the answer is unequivocally yes. What will it now mean to be managed by an algorithm?
Imagine someone is in a bad car accident. She is being operated on and an AI is monitoring her vital signs. The patient starts to bleed out, and the surgeon has 60 seconds to make a decision about what to do. The surgeon thinks one thing, but the AI recommends another.
Who should make the call? The surgeon, like any human, has a stronger emotional connection to the situation and could be tired or hungry. To whom has management granted the final authority in those circumstances?
Suppose in those circumstances the hospital has decided to “back the AI”. In that moment, the AI has authority over the surgeon, who is now carrying out its instructions. If the patient still dies, would the hospital then be liable for the AI’s actions? Who is liable in that situation if there is a problem? Who has authority over the AI?
3. Agency
What agency have you granted the AI (if any) to commit resources, in one form or another, on behalf of the company without a human being in the loop?
For example, take the Netflix recommendation algorithm. Netflix has granted its algorithm the agency to recommend movies without human involvement. If I don’t enjoy the movie very much, no harm done. But what if we went a step further, and the algorithm recommended ‘13 days to kill yourself’, a popular Netflix series, to a vulnerable teenager?
What if it has agency to spend money? To interact with a customer, or to write its own code? To buy and sell trillions of dollars of stocks and shares? Do your AI’s decisions directly impact perceptions of your brand, or open you to risk and liability of some kind? Who is reviewing its work?
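One way to make this boundary concrete in software is an explicit escalation threshold: the AI may commit resources on its own up to a limit, beyond which a named human must approve. A minimal sketch in Python, with hypothetical names and limits:

```python
from dataclasses import dataclass

APPROVAL_LIMIT = 1_000.00  # policy choice: above this amount, a human decides

@dataclass
class Purchase:
    description: str
    amount: float

def execute(purchase: Purchase) -> None:
    # Below the limit, the AI's agency is real: the commitment happens.
    print(f"Committed: {purchase.description} (£{purchase.amount:,.2f})")

def escalate(purchase: Purchase) -> None:
    # In a real system this would route to a named, accountable approver.
    print(f"Escalated for human approval: {purchase.description}")

def handle_ai_decision(purchase: Purchase) -> None:
    # The AI has agency only below the limit; above it, authority
    # stays with a human being in the loop.
    if purchase.amount <= APPROVAL_LIMIT:
        execute(purchase)
    else:
        escalate(purchase)

handle_ai_decision(Purchase("ad spend top-up", 250.00))        # committed automatically
handle_ai_decision(Purchase("bulk inventory buy", 40_000.00))  # routed to a human
```

The useful property of this pattern is that the limit itself becomes a reviewable artefact: a Board can debate and adjust a number, rather than audit a model.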
4. Abdication
What do we abdicate to an AI, and in what circumstances? Consider driverless cars, one of the most talked-about areas of AI deployment. Drivers are advised that they should still keep their hands on or close to the wheel when the car is driving itself, but after 100+ hours of being driven by an automated car, will most people really stay so attentive?
Ask someone under the age of twenty to point out the direction of true North and they’ll likely have no idea. They have already abdicated the responsibility of navigation – something that was fundamental to human survival through most of our history – to an app on their phone.
Finally, think about the long-term effects of abdication. Take lawyers as an example. A case review that might take five bright associate lawyers a week could be done by an AI in seconds. But if those lawyers never go through that process, how will they develop their judgement over their careers and go on to become Senior Partners themselves?
There is no value judgement being made here that abdication is bad, just a reminder to keep a close eye on this boundary and to think through the short- and long-term consequences of crossing it.
5. Accountability
Are lines of Accountability clear? This is a critical issue and underpins each of “Advisory, Authority, Agency, and Abdication.”
For all the fallibility of human institutions, accountability lies with boards and governments.
Who is accountable if the AI is found to be acting unethically or outside the law? For instance, imagine you’ve brought in AI to screen CVs. Unbeknownst to the HR team, the AI has been throwing out people with African-sounding names for weeks before anybody notices. Under the new GDPR rules, a rejected candidate challenges the algorithm’s decision, and six months later the company is facing a class action lawsuit.
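A sketch of the kind of ongoing monitoring that could catch this within days rather than weeks, assuming the screening tool logs its decisions. The file and column names are hypothetical, and the 80% threshold borrows the “four-fifths” convention from US employment guidelines:

```python
import pandas as pd

# Hypothetical decision log written by the CV-screening tool.
log = pd.read_csv("screening_decisions.csv")  # columns: candidate_group, rejected

# Selection (pass-through) rate per group: 1.0 means nobody was rejected.
selection = 1 - log.groupby("candidate_group")["rejected"].mean()

# Four-fifths style check: flag any group whose selection rate falls
# below 80% of the most-selected group's rate.
ratio = selection / selection.max()
flagged = ratio[ratio < 0.8]
if not flagged.empty:
    print("Possible adverse impact; human review required:")
    print(flagged)
```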
The challenge of ‘explainability’ highlights how murky accountability can be. Michal Kosinski is a Stanford researcher who built a photo-recognition system on a neural network. Given five photos of an individual, the AI could tell whether they were gay or straight with 91% accuracy for men and 83% for women. When asked, Kosinski said he didn’t know why the neural net was able to make those predictions so accurately. He could not explain its reasoning, and neither could the AI.
As a business leader, if you make a decision, you may well be asked to explain how you reached it. But if AI is making a proportion of the decisions for you or your department, how will you explain the reasoning behind them? If you are accountable for the AI’s decisions and have granted it significant agency, that explanation might really matter.
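Techniques exist that give at least a partial answer. As a sketch, permutation importance (here via scikit-learn, on a stand-in model and dataset) shows which inputs most influence a model’s decisions. It will not turn a black box into a full explanation, but it gives an accountable human something concrete to review:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A stand-in for a deployed decision model and its data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance asks: how much does performance drop when each
# input is shuffled? A first, partial answer to "why did it decide that?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```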
What Next?
There is no doubt that the efficiency benefits AI will bring businesses and individuals will be significant. However, the costs of a poorly-thought-out implementation, in both the short and the long term, could damage an organisation’s reputation and its bottom line. Executives have a responsibility to develop their own personal understanding of AI and of the moral and societal issues that surround its implementation, so that they can challenge and ask the right questions as it is brought into their organisation.
Special thanks to Robbie Stamp, CEO of Bioss, for the insights included in this piece. You can find out more about Bioss’s AI governance protocol here.