AI, Knowledge, and Accountability, Lesson 2


Lesson 2: Fact-Finding Exercise

Focus: Who is responsible for real-world AI failures?

Suggested length: 1 hour

Learning objectives:

  • Critically analyze real-world AI failures.
  • Identify biases, evaluate evidence, and refine arguments about AI accountability.
Critical Thinking Concepts

Confronting Biases and Assumptions: Detect biased or misleading reporting in articles and videos.

Exploring Contexts and Expert Opinions: Analyze how various stakeholders might shift blame or responsibility.

Responsiveness and Flexibility of Thought: Refine or modify initial arguments about accountability as new information is discovered.

TOK Concepts

Evidence: How does evidence shape our understanding of accountability in AI systems?

Justification: How do the ethical justifications used by companies to implement AI systems hold up in the face of AI failures?

Power: In what ways do powerful stakeholders, such as tech companies and governments, shape the narrative around AI failures?

Reflection Questions

If users are thoroughly informed about an AI tool’s limitations but still rely on it, how does that affect their share of responsibility when errors occur?

What biases are evident in these cases?

How does evidence shape our understanding of accountability in AI systems?

Preparation

  1. Slides, attached below.
  2. Students will need access to their Kialo discussions from Lesson 1.
  3. Ensure students complete the homework preparation task.
  4. Review the videos/readings that accompany your chosen case studies in advance.

Case Study Task

Discussion Prompt: Who is responsible for real-world AI failures?

Divide students into small groups and assign each group a case study related to the topic. Suggestions are listed below. Students will add their evidence to the Kialo discussion from Lesson 1.

Each group will:

  • Explore their assigned case using the provided resources (articles, videos, or curated primary sources).
  • Reflect on how the case connects to the concepts discussed in Lesson 1.
  • Prepare a short presentation (5–10 minutes).

Case Study Options

Tesla’s Autopilot Failure

AI Leadership

User Responsibility

Medical Note-Taking

  • Focus: Doctors are increasingly relying on AI tools to transcribe and summarize patient consultations, aiming to reduce administrative burden and burnout.
  • Task: Examine who should be held accountable if an AI note-taking tool introduces an error that leads to a medical misdiagnosis or improper treatment.
  • Resource:

Introduction

Recap Lesson 1 by reviewing key arguments from the debate on AI responsibility.

Present the task’s central question: "Who should be held responsible when AI systems fail?"

Discussion questions:

  • Who should be held accountable when AI systems fail — programmers, users, or the AI itself?
  • How do different contexts (medical, legal, transportation, social media) affect judgments about responsibility?
  • Can an AI system ever be morally or legally responsible?

Explain that in today's lesson, students will investigate real-world examples of AI failures to explore how responsibility is constructed, challenged, and distributed among human and non-human agents.

Presentations 

Students present their case studies to the class.

Students should take note of any useful points from other groups’ presentations to use in the Kialo discussion.

Recording Findings in a Kialo Discussion 

Students use their case study and their peers’ presentations to update and substantiate their arguments in their Kialo discussion from the previous session, focusing on:


  • Accountability in AI: Who is ultimately responsible when AI systems make mistakes?
  • Power and ethics: Who has the authority to define responsible AI use — governments, companies, or individuals?
  • Human vs machine agency: Can an autonomous system be meaningfully "responsible"?
  • Bias and justification: Are AI errors more or less forgivable than human ones, and why?

Discuss the following reflection questions in an open discussion or exit-ticket format:

  • If users are thoroughly informed about an AI tool’s limitations but still rely on it, how does that affect their share of responsibility when errors occur?
  • What biases are evident in these cases?
  • How does evidence shape our understanding of accountability in AI systems?

Related Materials
