The Ethics of AI and Automated Decisions

June 9, 2019 | Author: Kartik Hosanagar, John C. Hower Professor of Technology and Digital Business, The Wharton School of the University of Pennsylvania and author, A Human’s Guide to Machine Intelligence

Computer algorithms touch our lives every day, from how we purchase products (Amazon’s “People who bought this also bought”) to which movies we watch (Netflix’s recommendations) to whom we date or marry (Match.com or Tinder matches). We like to imagine that we merely nod politely at algorithmic recommendations and then make our own choices. But consider these facts: 80 percent of viewing hours streamed on Netflix originate from automated recommendations. By some estimates, nearly 35 percent of sales at Amazon originate from automated recommendations. And the vast majority of matches on dating apps like Tinder are initiated by algorithms.

With advances in Artificial Intelligence (AI), algorithms are also advancing beyond their original decision-making support role to becoming autonomous systems that make decisions on our behalf. For example, they can invest our savings and will soon drive cars on their own. They are also a part of the workplace – for example, advising insurance agents on how to set premiums and helping recruiters shortlist job applicants. There are very few decisions we make these days that aren’t touched by algorithms.

We tend to think of algorithms as objective decision-makers, but they are in fact prone to many of the same biases we associate with humans. A recent example is the use of algorithms in US courtrooms to compute risk scores, such as a defendant’s risk of reoffending. These scores are then used by judges, parole officers, and probation officers to make sentencing, bail, and parole decisions. Recent research shows that these algorithms exhibited a racial bias. Other examples include gender biases in the resume-screening algorithms used by recruiters, social media newsfeed algorithms that promoted fake news stories around elections, failures of autopilot systems in aircraft, and many more.
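To make the idea of "bias" concrete, here is a minimal sketch of one common way researchers have measured it in risk-scoring systems: comparing false positive rates across groups, i.e., how often people who did not reoffend were nonetheless flagged as high risk. The data and group labels below are entirely hypothetical, not from any actual study.

```python
# Illustrative sketch only: measuring one fairness metric (false positive
# rate) per group on hypothetical risk-score data. The records, groups, and
# numbers are made up for demonstration.

def false_positive_rate(records):
    """Fraction of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical records: a group label, whether the algorithm flagged the
# person as high risk, and whether they actually reoffended.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))
```

In this toy data, group A's non-reoffenders are flagged twice as often as group B's: the score is equally "accurate" overall, yet its mistakes fall unevenly, which is exactly the pattern the courtroom research surfaced.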

The biggest cause for concern is not that algorithms have biases; in fact, algorithms are on average less biased than humans. The issue is that we are more exposed to biases in algorithms than to biases in humans: a biased judge or doctor can affect the lives of a few thousand people, but bad code can, and does, affect the lives of millions.

The solution is not to run away from automated decisions and forfeit the significant value they can create for society. However, in an era when corporations are rapidly rolling out advanced artificial intelligence and algorithms are making more and more decisions that affect how we live and work, it is important that we introduce some checks and balances on how algorithms make decisions for or about us. In my book, A Human’s Guide to Machine Intelligence, I have proposed an “Algorithmic Bill of Rights” to protect society. The proposal rests on four main pillars:

  • Data and algorithm transparency: A right to a description of the data used to train algorithms and details as to how that data was collected. A right to an explanation regarding which factors are being considered by the algorithm, and how those factors are being weighted in making a final decision.
  • User control: Some level of user control over the way algorithms work. For example, passengers should be able to intervene when they are dissatisfied with the choices a self-driving car is making.
  • Audit: We have the right to expect that an audit team has reviewed an algorithm prior to deployment. The audit team should stress-test the algorithm for various failure scenarios and also consider aspects such as algorithmic bias, impact on privacy, and vulnerability to adversarial attacks.
  • User responsibility: Rights alone are not enough. So I have also included a responsibility, namely the responsibility of being aware of the unanticipated consequences of automated decision-making.
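As one concrete illustration of the "Audit" pillar above, here is a minimal sketch of a pre-deployment check an audit team might run: comparing a model's positive-decision rates across demographic groups and flagging the model when the gap exceeds a tolerance. The function names, groups, and threshold are illustrative assumptions, not a prescription from the book.

```python
# Minimal sketch of one audit check: a demographic-parity comparison of
# decision rates across groups. All names, data, and the threshold are
# illustrative assumptions.

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approved, 0 = denied) per group.
decisions_by_group = {
    "group_a": [1, 1, 0, 1, 0],  # 60% approved
    "group_b": [1, 0, 0, 0, 1],  # 40% approved
}

THRESHOLD = 0.1  # illustrative tolerance, set by the audit team
gap = parity_gap(decisions_by_group)
print(f"parity gap = {gap:.2f}; "
      + ("PASS" if gap <= THRESHOLD else "FLAG for review"))
```

A real audit would go further, stress-testing failure scenarios, privacy impact, and adversarial robustness, but even a check this simple can surface a disparity before code reaches millions of people.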

It’s time for consumers, firms, and regulators to take AI and automated decisions seriously and put together a plan that provides protections for consumers and citizens while allowing firms to continue to innovate.

Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business at The Wharton School of the University of Pennsylvania. His research focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society. He has been recognized as one of the world’s top 40 business professors under 40 and is a ten-time recipient of MBA or undergraduate teaching excellence awards at the Wharton School. Kartik co-founded and developed the core IP for Yodle Inc., which was listed by Inc. Magazine among America’s fastest-growing private companies prior to its acquisition by Web.com. His new book, A Human’s Guide to Machine Intelligence, came out March 12, 2019.