AI, Knowledge, and Accountability, Lesson 2
Lesson 2: Fact-Finding Exercise
Focus: Who is responsible for real-world AI failures?
Suggested length: 1 hour
Learning objectives:
- Critically analyze real-world AI failures.
- Identify biases, evaluate evidence, and refine arguments about AI accountability.
| Critical Thinking Concepts | TOK Concepts | Reflection Questions |
|---|---|---|
| **Confronting Biases and Assumptions:** Detect biased or misleading reporting in articles and videos.<br>**Exploring Contexts and Expert Opinions:** Analyze how various stakeholders might shift blame or responsibility.<br>**Responsiveness and Flexibility of Thought:** Refine or modify initial arguments about accountability as new information is discovered. | **Evidence:** How does evidence shape our understanding of accountability in AI systems?<br>**Justification:** How do the ethical justifications companies use to implement AI systems hold up in the face of AI failures?<br>**Power:** In what ways do powerful stakeholders, such as tech companies and governments, shape the narrative around AI failures? | If users are thoroughly informed about an AI tool’s limitations but still rely on it, how does that affect their share of responsibility when errors occur?<br>What biases are evident in these cases?<br>How does evidence shape our understanding of accountability in AI systems? |
Resources and Preparation
- Slides, attached below.
- Students will need access to their Kialo discussions from Lesson 1.
- Ensure students complete the homework preparation task.
- Videos/readings accompanying your chosen case studies should be viewed in advance.
Homework Preparation: Case Study Task
Discussion Prompt: Who is responsible for real-world AI failures?
Divide students into small groups and assign each group one of the case studies suggested below. Students will add evidence from their case studies to the Kialo discussion from Lesson 1.
Each group will:
- Explore their assigned case using the provided resources (articles, videos, or curated primary sources).
- Reflect on how the case connects to the concepts discussed in Lesson 1.
- Prepare a short presentation (5–10 minutes).
Case Study Options
Tesla’s Autopilot Failure
- Focus: The real-world consequences of AI-assisted driving and the blurred line between human control and AI autonomy.
- Task: Examine the extent to which Tesla drivers, the company, and the AI system itself are responsible for accidents involving Autopilot.
- Resources:
AI Leadership
- Focus: The responsibilities of AI leaders in anticipating the impact of advanced AI systems, such as AGI.
- Task: Evaluate the ethical responsibilities of AI company CEOs and developers in preparing for possible future failures of high-stakes AI systems.
- Resource:
User Responsibility
- Focus: Social media platforms holding users liable for offensive or false content generated by AI tools.
- Task: Analyze whether it is fair for users to bear responsibility for AI-generated content, especially when they may not fully understand or control how the tools work.
- Resource:
Medical Note-Taking
- Focus: Doctors are increasingly relying on AI tools to transcribe and summarize patient consultations, aiming to reduce administrative burden and burnout.
- Task: Examine who should be held accountable if an AI note-taking tool introduces an error that leads to a medical misdiagnosis or improper treatment.
- Resource:
Introduction
Recap Lesson 1 by reviewing key arguments from the debate on AI responsibility.
Present the task’s central question: "Who should be held responsible when AI systems fail?"
Discussion questions:
- Who should be held accountable when AI systems fail — programmers, users, or the AI itself?
- How do different contexts (medical, legal, transportation, social media) affect judgments about responsibility?
- Can an AI system ever be morally or legally responsible?
Explain that in today's lesson, students will investigate real-world AI failures to explore how responsibility is constructed, challenged, and distributed among human and non-human agents.
Main Activity
Presentations
Students present their case studies to the class.
Students should take note of any useful points from other groups’ presentations to use in the Kialo discussion.
Recording Findings in a Kialo Discussion
Students use their case study and their peers’ presentations to update and substantiate their arguments in their Kialo discussion from the previous session, focusing on:
- Accountability in AI: Who is ultimately responsible when AI systems make mistakes?
- Power and ethics: Who has the authority to define responsible AI use — governments, companies, or individuals?
- Human vs machine agency: Can an autonomous system be meaningfully "responsible"?
- Bias and justification: Are AI errors more or less forgivable than human ones, and why?
Reflection Activity
Discuss the following reflection questions as an open discussion or in an exit-ticket format:
- If users are thoroughly informed about an AI tool’s limitations but still rely on it, how does that affect their share of responsibility when errors occur?
- What biases are evident in these cases?
- How does evidence shape our understanding of accountability in AI systems?