“Data ethics” hardly has the ring of romance.
Or of great urgency. Yet business leaders worldwide face a minefield, one they must traverse with their integrity intact.
Whether or not to enter this territory is no longer a matter of personal choice.
Persistent, growing and hard-to-resist demands now come from stakeholders, employees, politicians and others. They expect business leaders to consider what data ethics means for them and their companies. Whether it’s Facebook being challenged over user privacy, or Google attracting criticism for how its search engine distorts findings to its own advantage, we simply cannot go on ignoring data ethics.
Take, for example, automated decision making, or the use of artificial intelligence (AI). How decisions emerge from complex algorithms can have serious ethical implications. Yet often the exact consequences remain cloudy, even to those most concerned.
AI experts admit to problems keeping up with what’s happening, or what might happen: “if it can go wrong, then at some point it will go wrong”.
Back in 2016, for example, there were 40 acquisitions of companies working on AI, and the market valued AI at around $644 million. The best guess now is that it has since doubled. Beyond that, the projections balloon into meaningless trillions.
As for economic displacement, in which robots take over jobs done by humans, that seems inevitable. A 2013 University of Oxford study found nearly half (47%) of all job categories at risk from automation.
Since May 2018 the General Data Protection Regulation (GDPR) has expected the new breed of data controllers to explain the logic behind any automated decision-making system, even one with an unknown ethical component.
Likewise, the new UK Centre for Data Ethics and Innovation will soon put an informed and sustained spotlight on this issue. Its work will include concerns about the expanding influence of “big data”.
The Institute of Business Ethics has also tried to explain what an ethical dimension for AI means. Using a somewhat elaborate mnemonic, it offers 10 signals for the ethical dimension. Each has a clear explanation which appears when you click on the term in the original PDF document. Click on “Accuracy”, for example, and up comes a simple statement: “a company needs to produce clear, precise and reliable results.”
Building in data ethics
A clear need exists to build the ethical dimension into each stage of our AI journey. Without it, we can expect the current polarisation of wealth and resources to worsen, which in turn may have major adverse consequences for society further on.
Vast flows of millions of bits of data can be too opaque or complex for any human being to unravel. Only an AI or learning mechanism may be able to uncover the trends hidden in the material.
The implications of AI for data ethics already attract public concern. For instance, enthusiasts for face recognition technology cannot escape its ethical dimension: is it safe, and can you trust the results?
This year, for example, German hackers tricked a Samsung Galaxy S8 iris scanner with a picture of the device owner’s eye and a contact lens. This was in the same month that a journalist fooled HSBC’s voice recognition security system.
Nor are computer scientists immune to finding AI overwhelming. For example, some of them gave a computer an impossible task: they told an AI to make a robot walk without its feet touching the ground. To their surprise, the AI simply flipped the robot upside down so it could use its elbow joints as feet. No immediate ethical issues here. But what if one day the instruction becomes: “Find a ‘legal’ way around the law”?
Worries about how AI works keep surfacing, along with dire warnings of an existential threat to humanity. For example, some top technology bosses privately believe that advances in AI may ultimately harm humanity. Yet many may feel constrained from saying so out loud for fear of affecting company profits.
Ethical business leaders cannot ignore the changes happening before their eyes, even if they do not fully understand what’s at stake. In the last two years, for instance, we’ve seen widespread use of the new technology, including fingerprint and facial recognition, driverless cars and other breakthroughs with ethical implications. As a new report from Chartered Accountants Australia and New Zealand puts it:
“We are without doubt heading towards a new decade of disruption.”
In reviewing the coming changes, the well-written report warns that AI can:
…totally reshape our existing social landscapes as well as our economic order that brings with it serious ethical challenges.
It also demands that tech giants should not decide alone on future advances.
A large leadership challenge lies ahead to build ethics into robots and AI. This one is not for the techies alone to resolve.
Leaders too need to stay awake to the ethical implications of AI. Machines can already do much of what humans can do: they drive, fly aircraft, recognise images, process speech and translate. Yet there remains the missing ingredient of moral reasoning. Business leaders who ignore this ethical dimension may reap a whirlwind of public disfavour.
Artificial morality though, remains a controversial next step. It’s unclear whether we can teach robots how to tell right from wrong. So far, there’s no firm regulatory framework within which this work might develop.
Isaac Asimov had a go in 1940 with his three laws. Robots must:
- Not injure a human being or, through inaction, allow a human being to come to harm.
- Obey the orders given it by human beings except where such orders would conflict with the First Law.
- Protect its own existence as long as such protection does not conflict with the First or Second Laws.
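The strict priority ordering in these laws can be treated as a tiny rule system. As a purely illustrative toy sketch (not from Asimov or the article, with hypothetical field names), a higher-priority law always overrides a lower one:

```python
# Toy sketch of Asimov's three laws as a strict priority ordering.
# The Action fields and the permitted() rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False      # would this action injure a human?
    allows_harm: bool = False      # would inaction here let a human come to harm?
    disobeys_order: bool = False   # does it conflict with a human's order?
    endangers_robot: bool = False  # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey orders, unless doing so would break the First Law
    # (actions breaking the First Law were already rejected above).
    if action.disobeys_order:
        return False
    # Third Law: self-preservation only counts when the first two laws allow it,
    # so an action that merely endangers the robot is still permitted.
    return True

# A robot risking itself while obeying orders is allowed...
print(permitted(Action(endangers_robot=True)))  # True
# ...but an action that harms a human is not, whatever else it achieves.
print(permitted(Action(harms_human=True)))      # False
```

The sketch makes the ordering explicit, and also makes the next point obvious: every ethically relevant fact has to arrive pre-labelled as a boolean, which is exactly what real dilemmas never provide.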
But these cannot deal with the dilemmas of, say, driverless cars. A vast number of new questions keep surfacing around our relationships with these “intelligent” machines. They possess no social conscience. They may not be people, yet they still make decisions. They have a kind of autonomy and independence, yet without a moral or ethical dimension.
In 2017 US and European experts outlined seven principles on transparency and accountability for AI systems. See panel. While these can be helpful, the approach still misses an ethical component that human beings can bring to tech development.
Ethical business leaders can play a useful part here. They can encourage debate, and not allow the issues to stay hidden with the techies. As a recent report from the RSA explains, attention is needed on:
- AI safety – making sure the systems do not harm society
- Malicious uses – guarding against malign influences using AI
- Data overlap and protection – overseeing the use of personal data by AI systems
- Algorithmic accountability – clarifying who is responsible for the computer instructions
- Socio-economic impact – such as worsening inequalities of wealth and power
The role of business leaders
Business leaders of non-tech companies therefore have an important role to play in how AI and machine learning advance. They can help protect humankind, in particular by ensuring these technologies stay in step with our human understanding. Even focusing on better standards of privacy can be an important leadership action. Leaving the responsibility with the techies is not a viable option.
So what exactly should the ethical business leader be concerned about? And where should they direct their attention? The panel below outlines some areas where ethical business leaders can seek to make an impact.
These five concerns, and others, mean business leaders must clarify their own ethical standards, not just those of the companies they run. Issues of fairness, respect for others, openness, integrity and building trust will matter more than ever before.
To be an ethical business leader in the age of AI and machine learning means being willing to re-invent oneself. So a fair question for any concerned business leader is:
“How do I re-invent myself?”
While each leader must find their own way forward, the route includes:
- Protecting customers
- Preserving the common good
- Being socially responsible
- Taking new levels of care over the design and planning of new products and services.
For instance, a responsible business leader can ask simple questions such as:
- Do we have a viable AI ethics code?
- How strong are our existing governance rules?
- Can we find a third party to help us avoid manipulation and bias in AI and machine learning?
- Can we unravel what lies behind our chosen AI algorithms?
- Are we spending enough time assessing the impact of our development work?
As with so much of what leadership is about, what matters most is willingness to ask the right questions. And keep paying attention.
You may also like:
Machines Can Learn, But What Will We Teach Them?, 2018, charteredaccountantsanz.com
Artificial Intelligence: The Ethical Dimension, IBE, 26 January 2018
The State of Artificial Intelligence 2017, Insidesales.com
Sizing the Market Value of Artificial Intelligence, Forbes, April 2018
Valuing the Artificial Intelligence Market, Graphs and Predictions, TechEmergence, last updated 12 August 2018 by Daniel Faggella
Jon Carol, A New Company Every Week, Guardian, May 2017
C. O’Neil, Audit the Algorithms That Are Ruling Our Lives, FT, 31 July 2018
Andrew Leigh, How Ethical Business Leaders Can Make Sense of Artificial Intelligence (AI), 5 March 2018, www.ethical-leadership.co.uk
Artificial Intelligence, Real Public Engagement, RSA, 2017
J. Titcomb, Tech Giants Need to Build Ethics into AI from the Start, Daily Telegraph, 14 May 2018