How ethical business leaders can make sense of artificial intelligence (AI)

Artificial intelligence (AI) is already here.

At least in its most basic form. Jobs previously doable only by people can now be automated, and the trend keeps accelerating. What pass as thinking machines are replacing human tasks and jobs, and altering the skills organisations want from their people.

Even in the most sophisticated businesses, ethical business leaders must learn how to master AI before it masters them. Ginni Rometty, CEO of IBM, for example, says her organisation is

“building the future of the company on machine learning.”

 

It’s therefore fair to ask:

“What does AI mean for us?”

It’s a particular challenge for ethical business leaders, let alone those who suffer from a faulty moral compass.

Society is starting to run scared. Science fiction has long spelled out doomsday scenarios. Now, though, real-life luminaries such as Stephen Hawking have issued dire warnings: AI could indeed mean the end of mankind.

Some believe this fear is overblown. Despite obvious advances in AI, there are no convincing signs of genuine, sentient machines. That is, ones that are fully self-aware and in need of moral guidance.

Still, governments, business and the public are rightly starting to demand more accountability in the application of AI technologies. So how far do ethical leaders need to make sense of AI? What’s the connection? In particular, how do leaders make sense of the many legal and ethical issues that keep surfacing?

At base camp, one view of the summit looks clear. We need an ethical framework to help with decision making and to ask the right questions.

The reasons for wanting such a framework stem from the dangers AI seems to pose. For example, we have no real idea what happens when an AI machine learns.

When machines develop their own language

In one celebrated case, the techies rushed to turn off their learning machines when they realised these were rapidly evolving their own, impenetrable language. 

Developments in AI are rapid, varied and hard to classify. About the best we can do is sketch out what we might expect in the next three, five or ten years. Even that looks clumsy and not entirely convincing.

Or we can classify AI into crude learning groups:

  • Assisted intelligence: automating repetitive learning; already happening widely;
  • Augmented intelligence: humans and machines collaborate to make choices, fundamentally changing the nature of work; happening to some degree in many places;
  • Autonomous intelligence: automated systems take over decision making, leaving the roles left for humans in doubt; already happening in some places, from farming to medicine and from transport to aerospace.

Beyond that, we are already facing AI machines tainted with ethical, gender and other types of bias. This is not so much learned as built in by the human beings who create them, acting on their own unconscious biases.

Nor have we yet established whether all interventions with AI machines really do generate better business performance. For example, if we leave hiring choices to AI machines could this be doing more harm than good?

Some firms already rely on AI to weed out candidates for interview. Applicants can be rejected without ever interacting with a live human being. That is not merely unethical; it is the height of stupidity, squandering potential talent.
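A minimal sketch of how that goes wrong, using a hypothetical screener and invented data: if the historical hiring decisions it learns from were biased, the machine faithfully replays the bias, with no malicious line of code anywhere.

```python
# Hypothetical CV screener that "learns" from past hiring decisions.
# The historical data favours one university; the model inherits that bias.

past_decisions = [
    # (years_experience, university, hired?)
    (5, "Oldtown", 1), (2, "Oldtown", 1), (1, "Oldtown", 1),
    (6, "Newtown", 0), (4, "Newtown", 0), (3, "Newtown", 0),
]

def screen(years, university):
    """Predict by the majority outcome among similar past candidates."""
    outcomes = [hired for _, uni, hired in past_decisions if uni == university]
    return sum(outcomes) > len(outcomes) / 2

print(screen(8, "Newtown"))   # False: rejected despite strong experience
print(screen(1, "Oldtown"))   # True: accepted despite little experience
```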

What exactly is AI?

AI lacks a broadly agreed definition.

But in simple terms, it’s technology that replicates basic human functions. It’s about computers doing ever smarter things than we used to expect of machines. AI brings computers closer to what we thought only humans could do.

Recent advances mostly involve some sort of neural network, modelled on how we believe the human brain works. Machines and computer systems simulate human intelligence: they seem to learn, reason and evolve, using newly acquired information.

In essence, therefore, AI is machine learning. It’s computers working things out for themselves, without being explicitly programmed to do so.

Instead, they progress by processing and analysing huge amounts of data, identifying patterns and improving their performance as they go.
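A minimal sketch makes this concrete. The program below (in Python, with invented example data) is never told the rule it is looking for; it infers a pass/fail boundary purely from labelled examples, exactly the “working things out for themselves” described above.

```python
# A toy learner: nobody tells it the rule (in this data, scores of 50+ pass).
# It infers a decision threshold purely from labelled examples.

examples = [(20, 0), (35, 0), (48, 0), (55, 1), (70, 1), (90, 1)]  # (score, passed?)

def learn_threshold(data):
    """Try every candidate threshold; keep the one that misclassifies least."""
    best_t, best_errors = 0, len(data) + 1
    for t in range(101):
        errors = sum((score >= t) != bool(label) for score, label in data)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

threshold = learn_threshold(examples)
print("learned threshold:", threshold)       # close to 50, found from data alone
print("prediction for 60:", 60 >= threshold)
```

Real systems replace the exhaustive search with gradient descent over millions of parameters, but the principle is the same: the behaviour comes from the data, not from a programmer’s explicit rule.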

All this is scary because AI is apparently taking over much of our world. We fear it could threaten our human existence through its impact on ethics, the workforce and technology.

For example, CaseCruncher Alpha beat a team of UK lawyers in a competition to predict the outcomes of court cases. And AI-enabled machines are starting to outperform specialist radiologists at detecting early signs of cancer. We hardly think twice about decisions in professional tennis being made by Hawk-Eye, and football is going the same way. How long will we need human refs?

Do AI machines think? Not as we know it. They’re not fully self-aware or sentient, yet. They’re still missing important yet familiar human characteristics such as:

empathy, a sense of justice, integrity (knowing right from wrong), a natural gift for feelings and emotion, respect for privacy, passion, compassion, suffering, pain, love, human wisdom and so on.

These missing elements partly explain why we fear AI: all brains and no humanity. Despite this, AI relies on algorithms or decision routines of ever-increasing complexity, and it’s becoming more difficult to make sense of how they work.
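To see why, compare two models that draw roughly the same boundary. A sketch with illustrative, invented weights: the first rule explains itself; the second gives similar answers, but its numbers carry no readable reason.

```python
# Two models, similar decisions, very different transparency.

def transparent(score):
    return score >= 50            # a human can read the rule directly

# Weights "found by training" (invented here for illustration), not written
# by anyone. Multiply thousands of these together and nobody can say why.
w1, w2, b = 0.31, -0.07, -12.4

def opaque(score):
    hidden = max(0.0, w1 * score + b)     # one tiny "neural" unit
    return hidden + w2 * score > 0        # the boundary is implicit in the weights

for s in (40, 60):
    print(s, transparent(s), opaque(s))   # they agree; only one explains itself
```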

AI technologies are not ethical or unethical per se. The real issue is around the use that business makes of AI and whether it undermines human ethical values.

Issues for Ethical Leaders

Of the many issues posed by AI for ethical leaders, some are more easily grasped than others. Here are some that an ethical leader may need to confront.

Need for an AI Ethics Officer: As AI penetrates all aspects of a business, leaders will benefit from having a human adviser to help steer the ship through the ethical breakers.

An AI ethics officer would not only highlight ethical dangers but also help business leaders see the bigger picture, because AI has so many ramifications.

The business may need to develop its own clear philosophy and approach in making the best use of AI while managing the attendant risks.

Profits versus ethics: Ethical business leaders must take seriously the implications of AI when it means hundreds or thousands of jobs will be destroyed.

The assumption that innovations will automatically create new jobs to compensate could prove to be dangerously complacent.

Leaders need to review carefully what introducing AI on a large scale means for the local community and to plan accordingly.

Previous failures associated with globalisation provide a warning that ignoring the social consequences can be destabilising, not just for a community but for the company itself.

AI can make mistakes: When a human makes a mistake it may involve a single wrong transaction. In contrast, AI can systematically affect all transactions on a large scale.

It is hard, if not impossible, for AI to admit it is wrong or mistaken. Leaders need to be aware that AI may need special attention over handling mistakes.
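One practical response, sketched below with illustrative numbers (the window size and tolerance are assumptions, not a standard): monitor the system’s observed error rate and automatically hand decisions back to humans when it drifts too far.

```python
# A minimal "circuit breaker": stop automated decisions when the observed
# error rate in a recent window exceeds a tolerance, and escalate to humans.

from collections import deque

class CircuitBreaker:
    def __init__(self, window=100, max_error_rate=0.05):
        self.recent = deque(maxlen=window)   # True = decision later found wrong
        self.max_error_rate = max_error_rate

    def record(self, was_wrong: bool):
        self.recent.append(was_wrong)

    def allow_automation(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return True                      # not enough evidence yet
        error_rate = sum(self.recent) / len(self.recent)
        return error_rate <= self.max_error_rate
```

The point is not the specific numbers but the design: the system itself cannot admit a mistake, so the surrounding process must detect and contain one.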

 

Rush to use: Enthusiasm for AI can mean leaders take short cuts and fail to test the implications of the processes. If a program learns through new data, then inevitably there will be unpredictable results.

For example, US trading firm Knight Capital lost $440 million in 2012 when a software glitch caused it to buy stocks it never intended to buy, which it then had to sell at a heavy loss.

Much of the volatility in financial markets, for example, has already stemmed from computers being allowed to make decisions without full human oversight. Trends become exaggerated, leading to runaway mass movements.

Some experts point the finger of culpability at computerised High-Frequency Trading (HFT). Algorithms and software do not muse about global economic events. Instead, they merely chase mechanical patterns they’re programmed to find, such as movements in trend or momentum. They do not make decisions based on real-world eventualities, such as political events.
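A toy simulation shows the feedback loop (all numbers invented for illustration): each bot buys because the price rose, which pushes the price up, which triggers the next bot.

```python
# Toy feedback loop: momentum-chasing bots exaggerate a small drift
# into an accelerating trend. Numbers are illustrative, not a market model.

price, history = 100.0, []
for step in range(10):
    history.append(price)
    momentum = price - history[max(0, len(history) - 3)]  # recent rise
    bot_pressure = 0.2 * max(0.0, momentum)   # bots buy the trend
    price += 0.5 + bot_pressure               # small drift plus bot buying
    print(f"step {step}: price {price:.2f}")  # gains grow step by step
```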

Data deluge: AI partly relies on the ability of the technology to make sense of huge amounts of raw data. An ability to track ever more data raises important issues for business leaders around privacy and how such data analysis will be used.

Ethical leaders need to ensure there is a continual review of how personal data is used when being processed by AI.
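One concrete control such a review should look for, sketched below (the field names and salt are illustrative assumptions): strip or pseudonymise direct identifiers before records ever reach the model.

```python
# Minimal pseudonymisation: drop direct identifiers, keep a stable token
# so decisions remain auditable without exposing who the person is.

import hashlib

SALT = "rotate-me-regularly"   # illustrative; manage real secrets properly

def pseudonymise(record: dict) -> dict:
    token = hashlib.sha256((SALT + record["email"]).encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k not in ("name", "email")}
    cleaned["subject_id"] = token
    return cleaned

print(pseudonymise({"name": "A. Person", "email": "a@example.com", "age": 41}))
```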

Trust: Are we right to trust AI? Business leaders who fail to address this issue could find themselves facing serious risks to reputation and branding.

The Oxford Internet Institute has called for a European AI watchdog to police the way the technology is implemented.

Its authors suggest sending independent investigators into organisations to check how their AI systems operate, and propose certifying “how they are used in critical arenas such as medicine, criminal justice and driverless cars…”

 

“We need transparency as far as it is achievable”
Luciano Floridi, Oxford Internet Institute 

Framework of values and criteria: For all the mush about corporate values, the essential core of integrity remains what it has always been: honesty, practical wisdom, courage, self-control and justice. We need to apply these even more stringently with the coming wave of robotics and the idea of thinking machines.

For example, ethically-minded business leaders need to establish a framework of values and criteria incorporating the ethical impacts on an organisation’s products and services. Can a program, for instance, be given a sense of values and beliefs? Business leaders need to address the question: what path will we set AI to walk?

What are the founding values for an ethical framework of AI in business? According to one study the main components include:

  • Accuracy, respect for privacy, transparency and openness, interpretability, fairness, integrity, control, impact, accountability, and learning.
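One way to make such a list operational, sketched here as an assumption rather than a standard, is to treat it as a gate every AI project must pass, with a named sign-off for each value:

```python
# A project gate built from the values above. What counts as "passing"
# each check is for each organisation to define; this only enforces sign-off.

ETHICS_CHECKLIST = [
    "accuracy", "privacy", "transparency", "interpretability", "fairness",
    "integrity", "control", "impact", "accountability", "learning",
]

def gate(project: str, sign_offs: dict) -> bool:
    missing = [c for c in ETHICS_CHECKLIST if c not in sign_offs]
    if missing:
        print(f"{project}: blocked, no sign-off for {missing}")
        return False
    return True

gate("cv-screener-v2", {"accuracy": "j.doe", "privacy": "k.lee"})  # blocked
```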

Hold to account: How do we develop a shared guide on AI developments that protects the common good? So far, we have no way of holding AI designers and developers to account.

A key issue concerns legal liability. With ever more elaborate algorithms, the consequences may be far removed from the individual who wrote and developed the code.

Without clarity around legal liability for instance, we cannot have proper accountability and governance of artificial intelligence. Without such accountability and governance we place multiple stakeholders and future generations at risk of unmitigated harm.

For example, if driverless cars make decisions which adversely impact innocent bystanders, who is culpable? Ultimately, blame must lie with human individuals; this should be a key principle.
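If blame must ultimately trace back to people, every automated decision needs a trail leading to one. A minimal sketch; the schema is an illustrative assumption, not a legal standard.

```python
# An audit record for each automated decision: enough to reconstruct
# which system, which version, which inputs, and which named human owns it.

import json
from datetime import datetime, timezone

def log_decision(system, version, inputs, output, owner):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": version,
        "inputs": inputs,
        "output": output,
        "accountable_owner": owner,   # a named person, per the principle above
    }
    print(json.dumps(record))         # in practice: an append-only store

log_decision("brake-assist", "3.1.4", {"speed_kmh": 42}, "emergency_stop", "j.doe")
```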

One rule to bind them all: Can we develop a basic rule for AI that can be universal and robust within business?

For example, Asimov’s three laws of robotics may look credible. But as shown vividly in the film I, Robot, the rules can potentially be subverted in the so-called interest of human beings.

Business needs to adopt some principles for applying AI that are both credible and viable. 

Democratic control: We need new regulators to oversee and limit the application of AI and to ensure that our Government has control over any products and services imported from abroad. The aim must be to ensure local democratic control of the impact on our society.  

Education: Almost a quarter of the UK’s population lack basic digital skills, let alone an understanding of machine learning. We need to prepare people for how artificial intelligence may impact their work, private lives, relationships and future well-being.

  • People need a basic understanding of the use of data, since these systems will become an important tool for people of all ages and backgrounds.
  • New mechanisms are needed to create a pool of informed users and practitioners, helping various sectors and professions absorb machine learning in ways that are useful to them.
  • Further support is needed to build advanced skills in machine learning. There is already high demand for such people and an urgent need to increase this talent pool.

The French company Sodexo has a global workforce of nearly 450,000, and 50% or more of its current jobs could change or disappear.

“What keeps me awake at night is that … everything leads us to believe the changes and the disruption are going to come at an incredibly fast pace … So, it’s a matter of urgency that we seize the opportunity to mitigate the challenges.”
Sylvia Metayer, Sodexo Group Executive Committee

 

The Big Picture

For ethical leaders, making sense of AI depends on having the right starting point: retaining core values so as to keep a sense of identity.

It means acting now, not hoping for some non-existent period of stability.  It also means accepting that automation and artificial intelligence will affect every level of  business and its people. 

Ethical leaders wondering about their connection to AI should not expect to leave IT or HR to solve many of the issues arising. Instead, they must themselves ask basic questions such as:

“What is our place in an automated world?” 

It’s less about worrying about technological innovation and more about helping colleagues and society adapt that technology for the benefit of humanity and not just a privileged few.

While AI is important, ethical leaders need to keep it firmly in perspective. There are other, equally important mega trends to take into account. These include:

  • demographic shifts, rapid urbanisation, shifts in global power, resource scarcity and climate change.

All these, plus AI, are contributing to a world of uncertainty, risk and challenge. AI technologies have the potential to enable companies to do more with fewer resources. They can streamline business processes and improve product quality while driving better profitability.

Yet like so much else in business, these still demand effective leadership and certainly an ethical leadership that retains a focus on what it means to be human.

Sources:

  • The State of Artificial Intelligence 2017, Inside Sales Labs
  • Business Ethics and Artificial Intelligence, Business Ethics Briefing, Issue 58, IBE, January 2018
  • Machine learning: the power and promise of computers that learn by example, Royal Society, April 2017
  • Rise of the robots: Can we turn AI into a force for good?, www.ethicalcorp.com, January 2018
  • G. Dondé, Comment: ‘We can’t leave Silicon Valley to solve AI’s ethical issues’, Ethical Corporation, 18 January 2018
  • The Workforce of the Future: Competing Forces 2030, PwC, 2017
  • Outlook for Global Artificial Intelligence, Allianz Global Investors, January 2018
  • Ex machina: are computers to blame for market jitters?, The Conversation, 27 January 2016

Acknowledgements: Thanks to those members of my LinkedIn network who kindly responded with suggestions for this article.

2 Comments

  1. Two other dimensions to consider:
    – the need for a kill switch
    – framework for testing

  2. Very thought-provoking blog, Andrew. Many dismiss the magnitude of the AI movement but keeping one’s head in the sand will only result in a more painful “wake up call”.