23. The Ethics of Uncertainty and Interpretability in Human-AI Systems
Context: When placing AI systems in sociotechnical contexts, we need to consider how they will actually be used, that is, as part of a human-AI collaboration. So far, we've hinted that both interpretability and uncertainty are important for building ethical, fair, and trustworthy systems.
Challenge: As we will see, human-AI collaboration doesn't always work as intended. As a result, it's important to devote as much effort to studying human-AI collaboration as to developing the AI systems themselves.
Outline:
What do we need for effective Human-AI collaborations?
Challenges with current Human-AI collaboration practices
23.1. What do we need for effective Human-AI collaborations?
Exercise: Effective Human-AI Collaboration
Part 1: Read, Clinical AI tools must convey predictive uncertainty for each individual patient.
Why does the author advocate for uncertainty?
How does the author envision using uncertainty (epistemic, aleatoric, and conformal) in clinical settings?
Do you anticipate any challenges in incorporating uncertainty as the author envisions? If so, what are they? If not, why not? (A minimal sketch of the three notions of uncertainty follows these questions.)
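To make the notions of uncertainty referenced above concrete, here is a minimal sketch of how epistemic, aleatoric, and (split) conformal uncertainty could be attached to an individual prediction. The data, models, and hyperparameters are hypothetical stand-ins chosen for illustration; this is not the approach advocated in the reading.

```python
# A minimal, hypothetical sketch: three ways to attach per-patient uncertainty to a
# prediction. Synthetic data and plain scikit-learn models, purely for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic "patients": one feature, heteroscedastic noise.
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.05 * X[:, 0])

# Training set plus a held-out calibration set (the latter is needed for conformal).
X_train, y_train = X[:400], y[:400]
X_cal, y_cal = X[400:], y[400:]
x_new = np.array([[7.5]])  # a new patient

# 1) Epistemic uncertainty: disagreement across a bootstrap ensemble.
ensemble_preds = []
for seed in range(10):
    idx = rng.integers(0, len(X_train), len(X_train))
    member = GradientBoostingRegressor(random_state=seed).fit(X_train[idx], y_train[idx])
    ensemble_preds.append(member.predict(x_new)[0])
epistemic_std = np.std(ensemble_preds)

# 2) Aleatoric uncertainty: irreducible noise, here estimated by fitting a second
#    model to the squared residuals of a mean model.
mean_model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
resid_sq = (y_train - mean_model.predict(X_train)) ** 2
noise_model = GradientBoostingRegressor(random_state=0).fit(X_train, resid_sq)
aleatoric_std = np.sqrt(max(noise_model.predict(x_new)[0], 0.0))

# 3) Split conformal prediction: a distribution-free interval with ~90% coverage.
cal_scores = np.abs(y_cal - mean_model.predict(X_cal))
q = np.quantile(cal_scores, 0.9)  # in practice, use the finite-sample-corrected quantile
pred = mean_model.predict(x_new)[0]

print(f"prediction = {pred:.2f}")
print(f"epistemic std (ensemble disagreement) = {epistemic_std:.2f}")
print(f"aleatoric std (estimated noise)       = {aleatoric_std:.2f}")
print(f"~90% conformal interval               = ({pred - q:.2f}, {pred + q:.2f})")
```

Note that each of these quantities answers a different question: the epistemic spread shrinks as we collect more data, the aleatoric noise does not, and the conformal interval gives a coverage guarantee without distributional assumptions.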
Part 2: Read, How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?
What are the challenges highlighted by the authors in using AI in their context?
Are there any additional challenges you foresee?
23.2. Challenges with current Human-AI collaboration practices
Exercise: Challenges in Human-AI Collaboration
Part 1: Read, Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment.
Why do the authors argue interpretability is important for ensuring fairness?
What are the different notions of fairness discussed in the paper?
What did the authors find?
If the type of explanation influences a user’s perception of fairness, are there other aspects of the user’s decision-making process that might be influenced? What are they?
Given that explanations have such influence over users, how can we ensure our explanations are informative rather than manipulative? (The sketch below shows how the same prediction can be presented under different explanation styles.)
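To ground the questions above, here is a minimal sketch of how the same prediction from the same model can be presented under two different explanation styles, loosely in the spirit of the influence- and sensitivity-style explanations compared in the paper. The features, data, and model are toy stand-ins, not the paper's actual setup.

```python
# A minimal, hypothetical sketch of two explanation styles for one prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["prior_counts", "age", "employment_years"]  # hypothetical features

# Toy training data and a simple linear model.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -0.8, -0.5]) + rng.normal(size=500) > 0).astype(int)
scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

x = scaler.transform(rng.normal(size=(1, 3)))  # one individual to explain
p = clf.predict_proba(x)[0, 1]
print(f"Predicted probability of positive label: {p:.2f}\n")

# Style 1: influence-based explanation
# (coefficient * standardized feature value = contribution to the log-odds).
print("Influence-style explanation:")
for name, coef, val in zip(feature_names, clf.coef_[0], x[0]):
    print(f"  {name}: {coef * val:+.2f}")

# Style 2: sensitivity-based explanation
# (how the predicted probability moves if one feature is changed).
print("\nSensitivity-style explanation (one-unit increase per feature):")
for j, name in enumerate(feature_names):
    x_perturbed = x.copy()
    x_perturbed[0, j] += 1.0
    delta = clf.predict_proba(x_perturbed)[0, 1] - p
    print(f"  {name}: probability changes by {delta:+.2f}")
```

Both outputs describe the same underlying model, yet they frame the decision differently, which is exactly why the choice of explanation style can shape a user's fairness judgment.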
Note: This paper focuses on recidivism prediction. There are many criticisms of the use of AI in this domain that are important to consider. If you’re interested in learning more, the following papers are a good place to start:
Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment
Beyond Bias: Re-Imagining the Terms of ‘Ethical AI’ in Criminal Law
Part 2: From Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, read “Introduction,” “Robots on the road,” and “Conclusion.”
What were the sources of failure in the self-driving Uber car accident?
In the accident, who ended up being held responsible? Do you agree with this? Why or why not?
If the self-driving AI had had some notion of uncertainty, do you think it would have helped avoid the accident? If so, in what ways? If not, why not? (One simple uncertainty-based deferral mechanism is sketched after this list.)
What is a “moral crumple zone”? Illustrate the concept using the Google incidents from 2015-2017.
What are the author’s criticisms of the human-in-the-loop paradigm?
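As a concrete, hypothetical illustration of the kind of mechanism the uncertainty question above asks about, here is a minimal sketch of uncertainty-based deferral: the system acts only when an ensemble's predictions agree, and otherwise falls back to a safe action and alerts the human. The classes, probabilities, and threshold are invented for illustration and would need careful calibration in any real system; nothing here is drawn from the paper.

```python
# A minimal, hypothetical sketch of uncertainty-aware deferral for a perception system.
import numpy as np

CLASSES = ["pedestrian", "bicycle", "vehicle", "unknown_object"]
ENTROPY_THRESHOLD = 1.0  # hypothetical; max entropy over 4 classes is ln(4) ~ 1.39

def decide(ensemble_probs: np.ndarray) -> str:
    """ensemble_probs: (n_members, n_classes) softmax outputs from an ensemble."""
    mean_probs = ensemble_probs.mean(axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    if entropy > ENTROPY_THRESHOLD:
        # High uncertainty: take a safe action and hand off, rather than proceeding.
        return "UNCERTAIN -> slow down and alert the safety driver"
    return f"confident: {CLASSES[int(np.argmax(mean_probs))]}"

# Ensemble members agree -> act on the prediction; members disagree -> defer.
agree = np.array([[0.9, 0.05, 0.03, 0.02]] * 5)
disagree = np.array([[0.4, 0.3, 0.2, 0.1],
                     [0.1, 0.5, 0.2, 0.2],
                     [0.3, 0.2, 0.4, 0.1],
                     [0.2, 0.2, 0.2, 0.4],
                     [0.25, 0.25, 0.25, 0.25]])
print(decide(agree))     # confident classification
print(decide(disagree))  # deferral to the human
```

Whether such a mechanism would have changed the outcome of the Uber accident is precisely the question: deferral only helps if the human is able and positioned to respond, which is the heart of the moral crumple zone critique.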
Part 3: Explainability and uncertainty are both tools for building human-in-the-loop systems—systems that incorporate AI into human decision-making. In your opinion, are explainability and uncertainty just shifting the responsibility from the AI developers to the AI users? Why or why not?