Certainty in Mathematics, Lesson 3
Lesson 3: Listening Task
Focus: Do mathematical models strengthen or weaken trust in mathematics as a way of producing knowledge?
Suggested Length: 1 hour
Learning Objectives:
- Analyze how mathematical models can be used as tools of control rather than objective truth.
- Evaluate how corporations, governments, and institutions exercise power through algorithmic decision-making.
- Reflect on whether responsibility and transparency strengthen or weaken mathematics in practice.
| Critical Thinking Concepts | TOK Concepts | Reflection Questions |
|---|---|---|
| **Confronting Biases & Assumptions:** Understand how corporations, governments, and universities may design or deploy models in ways that serve profit, efficiency, or control rather than fairness.<br>**Responsiveness & Flexibility of Thought:** Evaluate the perspectives of different stakeholders: mathematicians, policymakers, corporations, regulators, or affected communities.<br>**Extrapolation & Reapplication of Principles:** Apply these lessons to current issues, e.g. algorithmic bias in hiring, AI-driven credit scoring, predictive policing. | **Certainty:** What responsibilities do mathematicians, corporations, and governments have when presenting models as objective or certain?<br>**Power:** How do mathematical models shift power between individuals, corporations, and states?<br>**Perspective:** How do different cultural and political contexts shape trust in mathematical models? | What role should institutions play in ensuring mathematical models are transparent and accountable?<br>How should educators, policymakers, or citizens prepare people to navigate conflicting or manipulative uses of mathematical models?<br>Is it more dangerous to overtrust mathematical models when misused, or to distrust mathematics completely? |
Resources and Preparation
- Slides, attached below.
- Students can create their own discussion around the central question, or you can clone and use this ready-made example.
- Watch the video Cathy O'Neil | Weapons of Math Destruction before sharing with students.
Introduction
Present the guiding question, "Do mathematical models strengthen or weaken trust in mathematics as a way of producing knowledge?"
Prompt: “Are mathematical models designed to reveal truth or to control outcomes?”
Recap Lessons 1–2:
- Lesson 1: Debate on whether mathematical structures reveal truth or create illusions of certainty.
- Lesson 2: Case studies of contested proofs and models (Gödel, Four Color, 2008 crisis, COVID models).
Link: This lesson moves to the contemporary challenge — how algorithms and big data embed bias while appearing objective.
Main Activity
Listening Task
Students watch the video: Cathy O'Neil | Weapons of Math Destruction. Students should actively map the speaker’s key arguments, counterarguments, and ethical claims about surveillance capitalism and human sciences.
Key Points to Listen For:
- Why do people view mathematical models as authoritative and objective?
- What are the five features of “Weapons of Math Destruction”? (secret, widespread, opaque, biased definitions of success, feedback loops).
- How do algorithms in education (teacher scoring), justice (predictive policing, sentencing), and politics (micro-targeting) function as mathematical “truths” but in fact reinforce inequality?
- Who benefits from these models, and who is harmed?
- Why are mathematical models described as “embedded opinions”?
- What solutions or responsibilities does O’Neil propose?
Note-taking Framework:
- Main Arguments:
- Algorithms often claim objectivity but embed bias and reinforce inequality.
- Models are social constructs, not neutral truths.
- Without transparency, mathematical systems undermine democracy and fairness.
- Supporting Examples:
- Teacher value-added model in education (opaque, unaccountable).
- Predictive policing and sentencing models (biased data = biased outcomes).
- Political micro-targeting (efficient for campaigns, destructive for democracy).
- Counterarguments / Critical Questions:
- Are models the problem, or the way humans use them?
- Can regulation (like FOIA or GDPR) truly restrain misuse of models?
- Are algorithms always harmful, or can they sometimes improve fairness (e.g., replacing blatantly biased human decisions)?
- Who holds ultimate responsibility — the mathematicians building models, the institutions deploying them, or the public interpreting them?
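For teachers who want to make the feedback-loop idea concrete, the mechanism can be shown with a toy simulation (a hypothetical sketch, not taken from O'Neil's talk): two neighborhoods have identical true incident rates, but patrols are allocated wherever the most incidents have already been *recorded*. Because crime can only be recorded where patrols look, early random noise compounds, and the recorded data often drift far from 50/50 even though the underlying reality is symmetric.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true incident rate per patrol visit.
TRUE_RATE = 0.3
recorded = [0, 0]      # incidents recorded so far (the model's "data")
PATROLS_PER_DAY = 10

for day in range(200):
    # Allocate patrols in proportion to incidents recorded so far
    # (+1 to avoid division by zero). This is the "predictive" rule.
    weights = [recorded[0] + 1, recorded[1] + 1]
    total = weights[0] + weights[1]
    for _ in range(PATROLS_PER_DAY):
        area = 0 if random.random() < weights[0] / total else 1
        # You can only record crime where you look; the true rate
        # is identical in both areas.
        if random.random() < TRUE_RATE:
            recorded[area] += 1

print(recorded)  # counts often diverge despite identical true rates
```

Students can rerun the simulation with different seeds to see that which neighborhood ends up "high crime" is essentially arbitrary, which is exactly why O'Neil calls such models embedded opinions rather than neutral measurements.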
Kialo Discussion
In small groups, students create a new Kialo discussion around the guiding question: “Do requirements for transparency and accountability in mathematical models strengthen trust in mathematics as a way of producing knowledge?”
Alternatively, if students need more structure, clone and share this ready-made discussion based on the thesis below, and use the suggested claims as prompts.
Students should use their listening analysis to select the strongest arguments.
They should add these to the Kialo discussion as arguments, counterarguments, examples, and evaluations.
Encourage students to frame their arguments with the TOK concepts of certainty, power, and perspective.
Example Claims:
NAME: Do requirements for transparency and accountability in mathematical models strengthen trust in mathematics as a way of producing knowledge?
THESIS: Requirements for transparency and accountability strengthen trust in mathematics.
PRO: Protecting fairness builds trust in mathematics.
- Example: O’Neil shows how opaque teacher evaluation algorithms eroded trust because teachers could not understand or challenge their scores. Transparent models would restore legitimacy.
PRO: Mathematics without accountability risks becoming a tool of control rather than knowledge.
- Example: Predictive policing models embed biased data. This leads to unfair sentencing and reinforces inequality.
PRO: International norms for transparency can prevent misuse of mathematical models.
- Example: Political micro-targeting models manipulate voters; stronger responsibility mechanisms could have limited their harmful impact.
CON: Transparency requirements can reduce innovation and efficiency.
- Example: Companies argue that keeping algorithms proprietary enables breakthroughs in efficiency and targeted services. This might be slowed by strict oversight.
CON: Powerful actors will always find ways around ethical safeguards.
- Example: Even with regulations, many algorithms remain opaque because corporations claim intellectual property protection. This makes oversight ineffective.
CON: Conflicting cultural and political perspectives weaken global standards.
- Example: Predictive policing is defended in some contexts as promoting “safety,” while critics see it as reinforcing systemic racism. It’s difficult to say whose definition of fairness should dominate.
Reflection Activity
Discuss the following reflection questions in an open discussion or as an exit ticket:
- What surprised you most about how mathematical models and algorithms are applied in real life (e.g., teacher evaluations, predictive policing, political targeting)?
- Does this change your trust in mathematics as a way of producing knowledge?
- What role should institutions (like governments, corporations, or regulators) play in ensuring mathematical models are transparent and accountable?
- Can we expect fairness when the same corporations or governments profit from the models they design and enforce?
- How should educators, policymakers, or citizens prepare people to navigate conflicting or manipulative uses of mathematical models?
- Is it more dangerous to overtrust mathematical models when misused, or to distrust mathematics completely?