The Magazine Of Queen's University

2017 Issue 3: Science on a small scale

The techno-ethicist

As machines get smarter and smarter, there is a seductive temptation to give up control to the automated systems that outperform humans.

[illustration]
Should your future autonomous car be equipped with a moral compass along with its GPS? (Illustration by Tine Modeweg-Hansen)

A case in point is the slick, self-driving robot that may soon be rolling into your driveway and taking over your transportation world. Some Amazon and Microsoft tech veterans recently proposed a plan to ban human drivers from a stretch of highway linking Seattle and Vancouver, reserving it for self-driving vehicles. They argued that embracing the technology would save lives and ease congestion, noting that “widespread and universal adoption of autonomous vehicles is inevitable.”

If these tech gurus have their way, future iterations of the self-driving cars now being road-tested by Google, Uber, Tesla, and Mercedes will rule the road and, inevitably, their robot brethren will have free rein over the planet.

Jason Millar, Sc’99 (Engineering Physics), PhD’16 (Philosophy), is a robot ethicist who asks tough questions about how much control we should give up to AI-powered machines that are in many respects more capable than humans. To illustrate the moral dilemmas presented by driverless cars, Dr. Millar created an ethical thought experiment that he published in an academic journal: “Pretend you’re alone in a driverless car on a single-lane road that’s heading into a tunnel. A child suddenly runs across the tunnel’s entrance, trips and falls. You can either hit the child and save yourself or swerve into the tunnel edifice, killing yourself but saving the child.” Of the people who responded online, 64% said they would save themselves, while 36% felt they would sacrifice themselves for the child. Dr. Millar suggests that from an ethical perspective, there is no universal right answer. But in such cases, an individual’s deep moral commitments could make the difference between going straight or swerving.

But this extreme scenario raises another interesting and key question: “Who gets to decide how the car reacts? With self-driving cars, we’re shifting the moral responsibility from a human driver to some programmed algorithm,” says Dr. Millar, a post-doctoral research fellow at the University of Ottawa Faculty of Law and researcher at the Center for Automotive Research at Stanford (CARS).

His hypothetical scenario is not far removed from the real-life programming decisions being made, or considered, by the leading manufacturers of self-driving cars today. Do you want to give up the right and responsibility to make moral decisions in life-and-death situations to the programmers and engineers who design the car, or the corporation that directs them?

Before you answer, take a deep breath and consider this.

At the 2016 Paris Motor Show, the manager of driverless safety for Mercedes-Benz told reporters that its future Level 4 and Level 5 autonomous cars will be programmed to save the driver and the car’s occupants in each and every situation. So, in another hypothetical scenario, if a crash were unavoidable, the self-driving car – a Mercedes deluxe robot – might veer into a crowd of kids waiting for a bus rather than hit a wall or another vehicle, if that choice were less likely to harm the car’s occupants.
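
To see what “programmed to save the driver and the car’s occupants in each and every situation” amounts to in software terms, here is a minimal, purely illustrative sketch. The function names, the candidate manoeuvres, and the harm scores are hypothetical, and the alternative “minimize total harm” policy is included only for contrast; none of this reflects Mercedes-Benz’s actual code.

```python
# Purely illustrative: how an occupant-first crash policy becomes an explicit
# rule in code. Nothing here reflects any manufacturer's real software; the
# names, options, and harm scores are hypothetical.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_occupants: float  # expected harm to people in the car, 0.0 to 1.0
    harm_to_others: float     # expected harm to people outside the car, 0.0 to 1.0

def choose_occupant_first(options: list[Maneuver]) -> Maneuver:
    """Hard-coded policy: always minimize harm to the car's occupants.
    Harm to everyone outside the car never enters the comparison."""
    return min(options, key=lambda m: m.harm_to_occupants)

def choose_minimize_total(options: list[Maneuver]) -> Maneuver:
    """A different value judgment: minimize total expected harm."""
    return min(options, key=lambda m: m.harm_to_occupants + m.harm_to_others)

if __name__ == "__main__":
    options = [
        Maneuver("brake and stay in lane", harm_to_occupants=0.7, harm_to_others=0.1),
        Maneuver("swerve toward the crowd", harm_to_occupants=0.1, harm_to_others=0.9),
    ]
    print(choose_occupant_first(options).name)   # swerve toward the crowd
    print(choose_minimize_total(options).name)   # brake and stay in lane
```

The moral choice collapses into which of these two functions ships with the car, which is exactly the shift Dr. Millar describes: responsibility moving from a human driver to a programmed algorithm.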

There’s capability, and then there’s wisdom

Machines may be super-capable, but do they possess the wisdom to choose what you or other humans believe is the right thing to do? How much scope do you want to give self-driving cars, or other autonomous systems, to act and make decisions on your behalf?

As a techno-ethicist, Dr. Millar believes that consumers, regulators, manufacturers, and especially the engineers who design self-driving cars and other autonomous systems need to think hard about the social and ethical impacts of emerging technologies. That becomes even more important with advances in deep-learning AI, which enable robots to succeed in highly complex environments – talking, driving, or serving as a soldier – by learning from their own mistakes and improving their performance over time. Deep-learning robots take a more intuitive approach to solving problems, which results in surprising and unpredictable behaviour that is more human-like and less robotic. So we feel even more pressure, and a stronger temptation, to relinquish control to autonomous machines.

Dr. Millar argues that creating good technology in the brave new world of deep-learning systems requires that engineers understand and think about the social dimensions of technologies they design. The broader role and ethical responsibility of an engineer is to design solutions that take into account human values and the impact on society: “I use my background in engineering and philosophy to talk about human values and understand how to translate human values into solutions. You’re making good technology when it functions well, is efficient and easy to use, and incorporates a layer of ethics; it takes into account human values and it respects the user’s autonomy and preferences.”

Dr. Millar first started asking questions and thinking about the social impact of technology while working as an engineer in aerospace electronics, after graduating in engineering physics from Queen’s. He was helping design electronics assemblies for commercial airliners one week and guided bomb units the next. “As a young engineer, you’re not expected to investigate the ethical issues and you don’t have the training or the language to figure them out. You don’t necessarily have to think about the application of the technology you’re designing, and you can’t easily say, ‘I’d rather not work on the bomb guidance system,’” says Dr. Millar, who returned to academia to study moral philosophy and apply that knowledge to the intersection of ethics, technology, and society.

In his journal article “Technology as Moral Proxy,” Dr. Millar argues that there is an ethical requirement for engineers to broaden their professional duties to account for the social consequences of their technologies. Because engineers already function, intentionally or unintentionally, as de facto policymakers – introducing new technologies that often have strong social effects, whether they anticipate those effects or not – they should be trained and develop the knowledge to fulfill the duties of a more robust public role.

Learning lessons from health care

Engineers must change and broaden their approach to robot dilemmas with moral consequences, says Dr. Millar, just as health-care practitioners look at end-of-life or other high-risk health decisions with a moral lens, not just a technical one. To such questions, there often isn’t a universal, ethically correct answer. In health care, where moral choices must be made about cancer therapy or high-risk brain surgery, for example, it’s standard practice for medical staff to inform patients of their reasonable treatment options and allow patients to make informed decisions that fit their personal preferences.

“This process is based on the idea that individuals have the right to make decisions about their own bodies,” he says. Informed consent wasn’t always the standard of practice in health care. “It used to be common for physicians to make important decisions on behalf of patients, often actively deceiving them as part of a treatment plan. Informed consent is now ethically and legally entrenched in health care, such that failing to obtain informed consent exposes a health-care professional to claims of professional negligence,” he says.

[illustration]
Young engineers should be trained to explore the social impact of the technologies they design, says Jason Millar. (Illustration by Tine Modeweg-Hansen)

The embedded ethicist

While doing two clinical bioethics internships at Kingston General Hospital and the Children’s Hospital of Eastern Ontario during his doctoral studies at Queen’s, Dr. Millar saw in-house clinical ethicists in action, educating and consulting with patients, families, and hospital staff on how to make informed decisions on these difficult bioethical issues. Embedding ethicists in technology companies as part of their driverless car design processes is an idea that he has proposed to automotive engineers at international conferences. “Embedded ethicists are a way to train young engineers to identify and think about important ethical and social issues arising out of a new technology, so they can design solutions that make the technology more user-friendly in an ethical way,” says Dr. Millar, noting that environmental engineers now play a vital role in many companies, identifying issues of concern and working with teams to develop solutions that lead to sustainable practices.

Jason Millar has applied his engineering and moral philosophy training to help address the issue of Lethal Autonomous Weapon Systems (LAWS) in the global military arena. Currently, military drones – sometimes referred to as Remotely Operated Weapons Systems (ROWS) – keep a human in the loop: the decision to use lethal force remains a human decision. However, technology is now being developed that would enable military robots to make the decision to kill autonomously, without human intervention: “These robots would find a target and decide to kill on their own,” says Dr. Millar. “The question is, do we want to automate the kill decision?”
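
The distinction between ROWS and LAWS is, at bottom, a question of where one decision sits in the control loop. The sketch below is purely hypothetical, with made-up function names and no real system or API behind it; it only illustrates that structural difference.

```python
# Hypothetical sketch only: the structural difference between a remotely
# operated weapons system (ROWS) and a lethal autonomous one (LAWS) comes down
# to where the use-of-force decision sits. No real system or API is depicted.

from typing import Callable

def rows_engagement_decision(target_report: dict,
                             human_authorizes: Callable[[dict], bool]) -> bool:
    # ROWS-style: the system may detect and track, but lethal force is used
    # only if a human operator explicitly authorizes it.
    return human_authorizes(target_report)

def laws_engagement_decision(target_report: dict,
                             model_confidence: Callable[[dict], float],
                             threshold: float = 0.95) -> bool:
    # LAWS-style: the same decision is delegated to an algorithm, the step
    # Dr. Millar asks whether we actually want to automate.
    return model_confidence(target_report) >= threshold
```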

The link between design and moral psychology

In 2015, Dr. Millar was invited to give testimony at an informal meeting of experts on LAWS at the United Nations in Geneva. His presentation was aimed at exploring some challenges that weapons systems designers could face if they try to design for maintaining meaningful control over semi-autonomous weapons. He presented evidence from the field of moral psychology showing how seemingly unimportant situational factors, such as a noisy environment or sitting at a dirty desk, could significantly affect a person’s decision making. For example, researchers found that if noise levels were low, 80% of people stopped to help an injured man, but when a power lawnmower was running nearby, only 15% did. Dr. Millar explained that if the particular design features of the automated critical functions of the weapons system create an unintended bias in the human operator, this could result in a loss of meaningful human control in the decision to use lethal force. “So, if we are going to design for meaningful human control, in addition to understanding the effects of automating critical functions, we’re going to have to invest time and effort trying to understand the relationship between design features and human moral psychology,” he said.

Through the Open Roboethics initiative (ORi), Dr. Millar and colleagues conducted an international survey of people in more than 50 countries to gauge public opinion on the use of LAWS vs. semi-autonomous ROWS. “The majority of our participants indicated that all types of LAWS should be internationally banned. More than 70% said they would rather their country use remotely operated instead of lethal autonomous systems when waging war. The most popular reason cited for rejecting the development and use of LAWS in battlefields was that humans should always be the ones to make life/death decisions,” he says.

Social failure mode

International law emphasizes the importance of public engagement in such matters. The Martens Clause, included in the Additional Protocols of the Geneva Conventions, indicates that the public should have a meaningful say on what is, and is not, permissible in armed conflict, especially where new technologies are concerned.

[Jason Millar]
Dr. Jason Millar

Similarly, with autonomous, driverless cars, Dr. Millar believes that companies and engineers should engage and seek input from the public on ethical and social issues to inform and influence their design solutions. He uses the concept of “a social failure mode” to identify and describe what happens when robotics or other new technologies ignore or fail to properly address their impact on users and society. A prime example of a social failure mode in technology is Facebook failing to meet users’ expectations of privacy.

After the Mercedes official was quoted as saying the company’s self-driving cars would prioritize the safety of occupants over pedestrians, there was a swift public backlash. Parent company Daimler AG recalibrated and said the official was misquoted: “For Daimler it is clear that neither programmers nor automated systems are entitled to weigh the value of human lives,” said Daimler, noting that it would be illegal in Germany and other countries to make a decision in favour of vehicle occupants and that, as a manufacturer, it would implement what is deemed to be socially acceptable.

Dr. Millar views this kind of public airing of ethical issues as a crucial part of the research and development process. He hopes that through such “social acceptance” R&D processes, engineers, manufacturers, and regulators can address and find solutions to the many ethical concerns about self-driving cars, and build trustworthy vehicles that do what people expect them to do, safely. Choice and informed consent are critical components in the public discussion. This means that everyone, from car users to pedestrians to city planners, has a meaningful say in, and retains some control over, safety decisions. “You build trust in the technology when you anticipate its effects on users and society,” says Dr. Millar.

Choosing an ethical route for your commute

One way to give people choice and control over their personal moral preferences could be to build reasonable ethical settings into robot cars. Dr. Millar is now doing research at the Center for Automotive Research at Stanford University to explore mapping options for self-driving cars that would allow users to choose their personal ethical preferences. “Does Google always get to decide the best route for you? Your preferred route might be one that uses less fuel or avoids residential streets where there are likely to be small children,” he says.
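
One way to picture such ethical settings is as user-adjustable weights in the step that scores candidate routes. The sketch below is a hypothetical illustration only; the setting names, routes, and scoring are assumptions, not how Google or any automaker actually plans a route.

```python
# Hypothetical sketch of user-selectable "ethical settings" for route choice.
# The preference names, routes, and scoring are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    travel_minutes: float
    fuel_litres: float
    residential_km: float   # distance on residential streets, where children may be

@dataclass
class EthicalSettings:
    weight_time: float = 1.0         # how much the user cares about speed
    weight_fuel: float = 0.0         # ...about fuel use and emissions
    weight_residential: float = 0.0  # ...about avoiding residential streets

def route_cost(route: Route, prefs: EthicalSettings) -> float:
    """Lower is better; the user's settings decide the trade-off rather than
    the mapping provider alone."""
    return (prefs.weight_time * route.travel_minutes
            + prefs.weight_fuel * route.fuel_litres
            + prefs.weight_residential * route.residential_km)

def choose_route(routes: list[Route], prefs: EthicalSettings) -> Route:
    return min(routes, key=lambda r: route_cost(r, prefs))

if __name__ == "__main__":
    routes = [
        Route("highway", travel_minutes=22, fuel_litres=3.0, residential_km=0.5),
        Route("shortcut through neighbourhood", travel_minutes=18, fuel_litres=2.2,
              residential_km=4.0),
    ]
    fastest = EthicalSettings()                                    # default: time only
    cautious = EthicalSettings(weight_time=1.0, weight_residential=5.0)
    print(choose_route(routes, fastest).name)    # shortcut through neighbourhood
    print(choose_route(routes, cautious).name)   # highway
```

With the default settings the planner simply picks the fastest route; a rider who weights residential streets heavily gets the highway instead, so the trade-off reflects the user’s values rather than only the provider’s.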

The role of a robot ethicist is not to block or roll back advances in technology, says Dr. Millar, but to enhance the technology by giving users and society at large meaningful choices and control over its use and effects. “I’m a complete technophile, not a technophobe, and the first to buy the next gadget. I want technology to make the world a better place,” he says. “Engineers can help do this by making technology more user-friendly in an ethical way that respects the user’s autonomy and preferences and takes into account the social impact.”
