Rise of the ethical robot and what it means for us and our leaders

“Ending the sale of cigarettes and tobacco products at CVS Pharmacy is the right thing for us to do for our customers and our company to help people on their path to better health. Put simply, the sale of tobacco products is inconsistent with our purpose.” CEO Larry Merlo, Feb 6th 2014

Today we rely on leaders to make these kinds of ethical choices. We will not be able to say this for much longer.

In a few years the store boss may not even take such a decision. Instead a self-aware device will trundle into the room to announce: “I’ve told our stores to stop selling fizzy drinks; they’re bad for people’s health.”

“Kill the chicken, not the child” is how those developing robotic, self-driving cars highlight new moral dilemmas. “Bomb the terrorist hideaway, not the mosque” is what the geeks in charge of drones must incorporate into the decisions of their automated killers.[1]

Robots can be great if you need to land a plane in fog. Few pilots, though, want to abandon all control to them. They’re right.

Computers in charge of decisions have not always proved helpful. Some people think the financial crash of ’87 was largely due to computer-based trading: a vicious cycle swirled out of control as automated programs responded to a drop in prices by dumping masses of stock on the market, prompting other automated systems to do the same.[2]
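
To see how such a loop feeds on itself, here is a toy Python model. Every figure in it (the trigger threshold, the size of each selling wave) is invented for illustration; it describes no real trading system.

    # Toy model of a program-trading feedback loop (all figures invented).
    price = 100.0
    trigger = -0.02       # programs sell once prices fall 2% in one step
    sell_impact = -0.03   # each wave of automated selling cuts prices 3%

    change = -0.025       # an initial shock dips prices 2.5%
    history = [price]
    for _ in range(8):
        price *= 1 + change
        history.append(price)
        if change > trigger:
            break             # drop too small to trip the programs
        change = sell_impact  # selling deepens the fall, re-tripping them

    print(" -> ".join(f"{p:.2f}" for p in history))
    # 100.00 -> 97.50 -> 94.57 -> 91.74 -> ... each drop triggers the next

Nothing in the loop knows it is amplifying a panic; each rule, taken alone, looks perfectly rational.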

The self-driving car and the drone deluge are current symptoms of the new kinds of ethical issues starting to face humankind. Attempts to mechanise dealing with them are well underway.

Many companies already depend on technology to monitor people’s behaviour, teach employees to tackle compliance dilemmas, and warn of illegal or irresponsible actions.

Making choices more rational, rule-based and supervised by high-IQ androids may seem like fantasy. With the unstoppable growth in computing capacity and advances in artificial intelligence (AI), though, serious questions arise about how far we should delegate these ethical choices.

Some experts argue the sooner we do it the better. Humans suffer from an over-reliance on intuition, are riddled with inconsistency, tend to be vulnerable to distraction, and trust too much in “the rules.”

“Human drivers are engaged in making ethical decisions as they drive, and these will have to be programmed into the software of the self-driving car.”[3]
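
What might “programming in” an ethical decision actually look like? Below is a deliberately crude, hypothetical Python sketch of a harm-minimisation rule. No real vehicle’s software is this simple, and the categories and their ordering are assumptions made up for this example:

    # Hypothetical harm ranking for an unavoidable collision. Choosing
    # these categories and this ordering is itself the ethical decision.
    HARM_RANK = {"human": 3, "animal": 2, "property": 1}

    def choose_path(options):
        """Pick the path whose worst obstacle carries the lowest harm rank.

        options maps each possible path to the obstacle categories on it.
        """
        def worst(path):
            return max((HARM_RANK.get(o, 0) for o in options[path]), default=0)
        return min(options, key=worst)

    # "Kill the chicken, not the child": the child is human, the chicken
    # an animal, so the rule steers right.
    print(choose_path({"swerve_left": ["human"], "swerve_right": ["animal"]}))
    # -> swerve_right

The point is not the dozen lines of code but the value judgements frozen inside the table: someone had to settle that ordering in advance.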

If robotic vehicles become accepted, millions of deaths on the roads may one day be avoided. Soon we could be amazed that we were ever allowed the freedom to drive cars ourselves.

As ethicists train robots to act like people, the dilemmas grow along with them. For example, a recent European survey suggested robotic milking will radically change dairy farming, raising issues such as a shift in the farmer’s role from care-giver to co-caretaker in relation to the cows.[4]

James Gips asks:

“When our mobile robots are free-ranging critters, how ought they to behave?” In exploring the basis for automated choices he further wonders: “Could a Robot be Ethical?”[5]

As we build these artificial reasoning systems, the hope is that we learn ways to become more responsible. In tracking down the knowledge and assumptions built into our ethical theories, perhaps we ourselves will be changed.

It’s a big jump from an automated milking system to a super-intelligent mechanism capable of dealing with complex moral questions. Soon, though, the CEO of an organisation may turn to the company’s robot ethicist to ask (a toy sketch of such an advisor follows the list):

  • Will this decision have damaging consequences for people?
  • Can our supply chain be more socially responsible?
  • Is it right to invent this kind of drug?
  • Should we stop selling these particular goods?
  • Is an investment in the community justified?
  • Why invest beyond meeting current legal requirements?
  • Do we face a serious reputational risk?
  • What is the right amount to spend on health and safety?
  • How do we strengthen our ethical culture?
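
As a purely hypothetical illustration, such an advisor might begin life as little more than a rule-based checklist. Every field name and threshold in the Python sketch below is invented for this article, not drawn from any real system:

    # Invented sketch of a rule-based "robot ethicist" screening a proposal.
    def screen(proposal):
        """Return the ethical concerns a proposal raises, by simple rules."""
        concerns = []
        if proposal.get("harms_customers"):
            concerns.append("damaging consequences for people")
        if proposal.get("reputational_risk", 0.0) > 0.5:
            concerns.append("serious reputational risk")
        if not proposal.get("goes_beyond_legal_minimum", False):
            concerns.append("meets the law but goes no further")
        return concerns

    # A CVS-style question: should we keep selling these particular goods?
    tobacco_sales = {"harms_customers": True, "reputational_risk": 0.7,
                     "goes_beyond_legal_minimum": False}
    for concern in screen(tobacco_sales):
        print("-", concern)

A checklist like this can flag the questions; as the rest of this piece argues, it cannot take the decision.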

There are many tales of SatNavs sending drivers the wrong way or down a dead end. These warn of the dangers of over-reliance on artificial intelligence.

We must accept the role of subjectivity, judgement and knowing what makes sense. Companies that choose to rely on robots to handle their ethical dilemmas will be misguided, and will face costly consequences.

Smart ethics machines may be taught to recognise the patterns and inconsistencies that underpin our decisions. What they should not do is deprive us of the freedom to choose by making important value judgements on our behalf.[6]

There is a huge difference between an algorithm and the decision to act on it. A computer solely concerned with the processes of reasoning will fail to be useful when it comes to ethical choices.

For the moment at least, ethical leaders are safe, not yet easily replaced.
