AI, Knowledge, and Accountability, Lesson 1


Lesson 1: Opening Debate

Focus: If an AI system makes a mistake, who is responsible: the programmer, the user, or the AI itself?

Suggested length: 1 hour

Learning objectives:

  • Explore different perspectives on the concept of accountability in AI-generated knowledge.
  • Reflect on the ethical and societal implications of AI decision-making.
Critical Thinking Concepts:

  • Confronting Biases and Assumptions: Students reflect on how their own experiences and beliefs might skew their judgment.
  • Exploring Contexts: Students explore which stakeholders are impacted by AI decisions.
  • Responsiveness and Flexibility of Thought: Students weigh opposing viewpoints critically to see if, or how, their stance might change.

TOK Concepts:

  • Responsibility: What does it mean to take responsibility for knowledge production?
  • Perspectives: How do values, norms, or local regulations shape our perceptions of AI accountability?
  • Power: How do existing power structures impact the development and deployment of AI technologies?

Reflection Questions:

  • What does it mean to be responsible for knowledge creation?
  • How do different perspectives influence how we assign responsibility?
  • How should we respond when AI systems fail repeatedly?
  • Can responsibility be shared, or must it lie with one party?

Preparation:

  1. Slides, attached below.
  2. Identifying Biases and Assumptions Checklist, attached below.
  3. Log into Kialo and clone the discussion linked in the main activity to make a copy for your students.
  4. Use your preferred sharing method to share the cloned discussion with your students.

Present the central question and gather students' initial thoughts: "If an AI system makes a mistake, who is responsible: the programmer, the user, or the AI itself?"

Share examples of famous AI failures to prompt students’ responses.


Kialo Discussion

Use the Kialo discussion: “If an AI system makes a mistake, who is responsible: the programmer, the user, or the AI itself?”

Students will respond to the three theses:

  • The programmer is responsible because they designed the AI.
  • The user is responsible because they decided how to use the AI.
  • The AI is responsible as it made the decision.

Have students add arguments, counterarguments, and examples.

Encourage them to identify their own biases and assumptions, and to recognize and challenge biases in others. Emphasize respectful, constructive dialogue.

In this lesson, all student contributions should be based on their existing knowledge.

Example claims are listed below along with a reasoning prompt for students to explore.

Thesis 1: The programmer is responsible because they designed the AI.

  • Pro claim: The programmer created the algorithms and training data that directly shape how the AI behaves.
  • Counterclaim: Once deployed, the AI may act in unpredictable ways beyond the programmer’s foresight or control, making full responsibility unreasonable.
  • Reasoning: To what extent should programmers be accountable for unintended consequences if their AI behaves in harmful or unexpected ways years after deployment?

Thesis 2: The user is responsible because they decided how to use the AI.

  • Pro claim: The user chose to rely on the AI’s output and decided how to apply it, so they bear responsibility for the outcome.
  • Counterclaim: Users often lack the technical understanding of how the AI works, so blaming them for errors they couldn't anticipate is unfair.
  • Reasoning: Should users be required to meet certain standards of understanding before being allowed to use powerful AI tools?

Thesis 3: The AI is responsible as it made the decision.

  • Pro claim: The AI is responsible because it autonomously made the specific choice or prediction that caused the mistake.
  • Counterclaim: AI lacks consciousness and moral agency, so it cannot be meaningfully held accountable like a human can.
  • Reasoning: Can we ever treat AI as morally responsible agents, or is responsibility something only humans can bear?

Discuss the following reflection questions in an open discussion or exit-ticket format:

  • What does it mean to be responsible for knowledge creation?
  • How do different perspectives influence how we assign responsibility?
  • How should we respond when AI systems fail repeatedly?
  • Can responsibility be shared, or must it lie with one party?

Related Materials
