{"id":250,"date":"2025-07-11T01:52:58","date_gmt":"2025-07-11T01:52:58","guid":{"rendered":"https:\/\/resources.kialo-edu.com\/?post_type=docs&#038;p=250"},"modified":"2025-07-11T12:50:52","modified_gmt":"2025-07-11T12:50:52","password":"","slug":"knowledge-and-technology-ai-knowledge-and-accountability-2","status":"publish","type":"docs","link":"https:\/\/resources.kialo-edu.com\/en\/docs\/knowledge-and-technology-ai-knowledge-and-accountability-2\/","title":{"rendered":"AI, Knowledge, and Accountability, Lesson 2"},"content":{"rendered":"<h2 class=\"wp-block-heading\">Lesson 2: Fact-Finding Exercise<\/h2><p><strong>Focus: <\/strong><em>Who is responsible for real-world AI failures?<\/em><\/p><p>Suggested length: 1 hour<\/p><p>Learning objectives:<\/p><ul class=\"wp-block-list\">\n<li>Critically analyze real-world AI failures.<\/li>\n\n\n\n<li>Identify biases, evaluate evidence, and refine arguments about AI accountability.<\/li>\n<\/ul><figure class=\"wp-block-table\"><table class=\"has-background has-fixed-layout\" style=\"background-color:#e9f1fb\"><thead><tr><th>Critical Thinking Concepts<\/th><th>TOK Concepts<\/th><th>Reflection Questions<\/th><\/tr><\/thead><tbody><tr><td><strong>Confronting Biases and Assumptions:<\/strong> Detect biased or misleading reporting in articles and videos.<br><br><strong>Exploring Contexts and Expert Opinions:<\/strong> Analyze how various stakeholders might shift blame or responsibility.<br><br><strong>Responsiveness and Flexibility of Thought:<\/strong> Refine or modify initial arguments about accountability as new information is discovered.<\/td><td><strong>Evidence:<\/strong> How does evidence shape our understanding of accountability in AI systems?<br><br><strong>Justification: <\/strong>How do the ethical justifications used by companies to implement AI systems hold up in the face of AI failures?<br><br><strong>Power: <\/strong>&nbsp;In what ways do powerful stakeholders, such as tech companies and governments, shape the 
narrative around AI failures?<br><\/td><td>If users are thoroughly informed about an AI tool&rsquo;s limitations but still rely on it, how does that affect their share of responsibility when errors occur?<br><br>What biases are evident in these cases?<br><br>How does evidence shape our understanding of accountability in AI systems?<br><\/td><\/tr><\/tbody><\/table><\/figure><style>#sp-ea-259 .spcollapsing { height: 0; overflow: hidden; transition-property: height;transition-duration: 300ms;}#sp-ea-259.sp-easy-accordion>.sp-ea-single {margin-bottom: 10px; border: 1px solid #e2e2e2; }#sp-ea-259.sp-easy-accordion>.sp-ea-single>.ea-header a {color: #444;}#sp-ea-259.sp-easy-accordion>.sp-ea-single>.sp-collapse>.ea-body {background: #fff; color: #444;}#sp-ea-259.sp-easy-accordion>.sp-ea-single {background: #eee;}#sp-ea-259.sp-easy-accordion>.sp-ea-single>.ea-header a .ea-expand-icon { float: left; color: #444;font-size: 16px;}<\/style><div id=\"sp_easy_accordion-1750658480\"><div id=\"sp-ea-259\" class=\"sp-ea-one sp-easy-accordion\" data-ea-active=\"ea-click\" data-ea-mode=\"vertical\" data-preloader=\"\" data-scroll-active-item=\"\" data-offset-to-scroll=\"0\"><div class=\"ea-card sp-ea-single\"><h3 class=\"ea-header\"><a class=\"collapsed\" id=\"ea-header-2590\" role=\"button\" data-sptoggle=\"spcollapse\" data-sptarget=\"#collapse2590\" aria-controls=\"collapse2590\" href=\"#\" aria-expanded=\"false\" tabindex=\"0\"><i aria-hidden=\"true\" role=\"presentation\" class=\"ea-expand-icon eap-icon-ea-expand-plus\"><\/i> Resources and Preparation<\/a><\/h3><div class=\"sp-collapse spcollapse spcollapse\" id=\"collapse2590\" role=\"region\" aria-labelledby=\"ea-header-2590\"> <div class=\"ea-body\"><ol><li>Slides, attached below.<\/li><li>Students will need access to their Kialo discussions from Lesson 1.<\/li><li>Ensure students complete the homework preparation task.<\/li><li>Videos\/readings accompanying the case studies of your choice should be viewed in 
advance.<\/li><\/ol><\/div><\/div><\/div><div class=\"ea-card sp-ea-single\"><h3 class=\"ea-header\"><a class=\"collapsed\" id=\"ea-header-2591\" role=\"button\" data-sptoggle=\"spcollapse\" data-sptarget=\"#collapse2591\" aria-controls=\"collapse2591\" href=\"#\" aria-expanded=\"false\" tabindex=\"0\"><i aria-hidden=\"true\" role=\"presentation\" class=\"ea-expand-icon eap-icon-ea-expand-plus\"><\/i> Homework Preparation Task<\/a><\/h3><div class=\"sp-collapse spcollapse spcollapse\" id=\"collapse2591\" role=\"region\" aria-labelledby=\"ea-header-2591\"> <div class=\"ea-body\"><p><strong>Case Study Task<\/strong><\/p><p>Discussion Prompt: Who is responsible for real-world AI failures?<\/p><p>Divide students into small groups and assign each group a case study related to the topic. Suggestions are listed below. Students will add their evidence to the Kialo discussion from Lesson 1.<\/p><p>Each group will:<\/p><ul><li>Reflect on how these cases connect to the concepts discussed in Lesson 1.<\/li><li>Explore their assigned case using the provided resources (articles, videos, or curated primary sources).<\/li><li>Prepare a short presentation (5&ndash;10 minutes).<\/li><\/ul><p><strong>Case Study Options<\/strong><\/p><p><strong>Tesla&rsquo;s Autopilot Failure<\/strong><\/p><ul><li><strong>Focus:<\/strong> The real-world consequences of AI-assisted driving and the blurred line between human control and AI autonomy.<\/li><li><strong>Task:<\/strong> Examine to what extent Tesla users, the company, and the AI system itself are responsible for accidents involving Autopilot.<\/li><li><strong>Resources:<\/strong><ul><li><a href=\"https:\/\/www.theverge.com\/2024\/4\/26\/24141361\/tesla-autopilot-fsd-nhtsa-investigation-report-crash-death\" target=\"_blank\" rel=\"noopener\">Tesla&rsquo;s Autopilot and Full Self-Driving linked to hundreds of crashes, dozens of deaths - The Verge<\/a><\/li><li><a 
href=\"https:\/\/www.nbcnews.com\/news\/us-news\/tesla-ceo-elon-musk-responds-texas-crash-amid-probe-two-n1264623\" target=\"_blank\" rel=\"noopener\">Tesla CEO Elon Musk responds to Texas crash amid investigation into two deaths<\/a><\/li><\/ul><\/li><\/ul><p><strong>AI Leadership<\/strong><\/p><ul><li><strong>Focus:<\/strong> The responsibilities of AI leaders in anticipating the impact of advanced AI systems, such as AGI.<\/li><li><strong>Task:<\/strong> Evaluate the ethical responsibilities of AI CEOs and developers in preparing for possible future failures of high-stakes AI systems.<\/li><li><strong>Resource:<\/strong><ul><li><a href=\"https:\/\/time.com\/7205596\/sam-altman-superintelligence-agi\/\" target=\"_blank\" rel=\"noopener\">How OpenAI&rsquo;s Sam Altman Is Thinking About AGI and Superintelligence in 2025 | TIME<\/a><\/li><\/ul><\/li><\/ul><p><strong>User Responsibility<\/strong><\/p><ul><li><strong>Focus:<\/strong> Platforms increasingly hold users liable when AI tools generate offensive or false content on their services.<\/li><li><strong>Task:<\/strong> Analyze whether it is fair for users to bear responsibility for AI-generated content, especially when they may not fully understand or control how the tools work.<\/li><li><strong>Resource:<\/strong><ul><li><a href=\"https:\/\/english.elpais.com\/technology\/2024-12-29\/warning-if-ai-social-media-tools-make-a-mistake-youre-responsible.html\" target=\"_blank\" rel=\"noopener\">Warning: If AI social media tools make a mistake, you&rsquo;re responsible | Technology | EL PA&Iacute;S English<\/a><\/li><\/ul><\/li><\/ul><p><strong>Medical Note-Taking<\/strong><\/p><ul><li><strong>Focus:<\/strong> Doctors are increasingly relying on AI tools to transcribe and summarize patient consultations, aiming to reduce administrative burden and burnout.<\/li><li><strong>Task:<\/strong> Examine who should be held accountable if an AI note-taking tool introduces an error that leads to a medical misdiagnosis or improper
treatment.<\/li><li><strong>Resource:<\/strong><ul><li><a href=\"https:\/\/www.msn.com\/en-us\/money\/markets\/doctors-turn-to-ai-for-easier-medical-note-taking\/ar-AA1wZ5KR?ocid=BingNewsVerp\" target=\"_blank\" rel=\"noopener\">Doctors turn to AI for easier medical note-taking<\/a><\/li><\/ul><\/li><\/ul><\/div><\/div><\/div><div class=\"ea-card sp-ea-single\"><h3 class=\"ea-header\"><a class=\"collapsed\" id=\"ea-header-2592\" role=\"button\" data-sptoggle=\"spcollapse\" data-sptarget=\"#collapse2592\" aria-controls=\"collapse2592\" href=\"#\" aria-expanded=\"false\" tabindex=\"0\"><i aria-hidden=\"true\" role=\"presentation\" class=\"ea-expand-icon eap-icon-ea-expand-plus\"><\/i> Introduction<\/a><\/h3><div class=\"sp-collapse spcollapse spcollapse\" id=\"collapse2592\" role=\"region\" aria-labelledby=\"ea-header-2592\"> <div class=\"ea-body\"><p>Recap Lesson 1 by reviewing key arguments from the debate on AI responsibility.<\/p><p>Present the task&rsquo;s central question: <em>\"Who should be held responsible when AI systems fail?\"<\/em><\/p><p>Discussion questions:<\/p><ul><li>Who should be held accountable when AI systems fail &mdash; programmers, users, or the AI itself?<\/li><li>How do different contexts (medical, legal, transportation, social media) affect judgments about responsibility?<\/li><li>Can an AI system ever be morally or legally responsible?<\/li><\/ul><p>Explain that in today's lesson, students will investigate real-world examples of AI systems and failures to explore how responsibility is constructed, challenged, and distributed among human and non-human agents.<\/p><\/div><\/div><\/div><div class=\"ea-card sp-ea-single\"><h3 class=\"ea-header\"><a class=\"collapsed\" id=\"ea-header-2593\" role=\"button\" data-sptoggle=\"spcollapse\" data-sptarget=\"#collapse2593\" aria-controls=\"collapse2593\" href=\"#\" aria-expanded=\"false\" tabindex=\"0\"><i aria-hidden=\"true\" role=\"presentation\" class=\"ea-expand-icon 
eap-icon-ea-expand-plus\"><\/i> Main Activity<\/a><\/h3><div class=\"sp-collapse spcollapse spcollapse\" id=\"collapse2593\" role=\"region\" aria-labelledby=\"ea-header-2593\"> <div class=\"ea-body\"><p><strong>Presentations<\/strong><\/p><p>Students present their case studies to the class.<\/p><p>Students should take note of any useful points from other groups&rsquo; presentations to use in the Kialo discussion.<\/p><p><strong>Recording Findings in a Kialo Discussion<\/strong><\/p><p>Students use their case study and their peers&rsquo; presentations to update and substantiate their arguments in their Kialo discussion from the previous session, focusing on:<\/p><ul><li>Accountability in AI: Who is ultimately responsible when AI systems make mistakes?<\/li><li>Power and ethics: Who has the authority to define responsible AI use &mdash; governments, companies, or individuals?<\/li><li>Human vs machine agency: Can an autonomous system be meaningfully \"responsible\"?<\/li><li>Bias and justification: Are AI errors more or less forgivable than human ones, and why?<\/li><\/ul><\/div><\/div><\/div><div class=\"ea-card sp-ea-single\"><h3 class=\"ea-header\"><a class=\"collapsed\" id=\"ea-header-2594\" role=\"button\" data-sptoggle=\"spcollapse\" data-sptarget=\"#collapse2594\" aria-controls=\"collapse2594\" href=\"#\" aria-expanded=\"false\" tabindex=\"0\"><i aria-hidden=\"true\" role=\"presentation\" class=\"ea-expand-icon eap-icon-ea-expand-plus\"><\/i> Reflection Activity<\/a><\/h3><div class=\"sp-collapse spcollapse spcollapse\" id=\"collapse2594\" role=\"region\" aria-labelledby=\"ea-header-2594\"> <div class=\"ea-body\"><p>Discuss the following reflection questions in an open discussion or exit ticket format:<\/p><ul><li>If users are thoroughly informed about an AI tool&rsquo;s limitations but still rely on it, how does that affect their share of responsibility when errors occur?<\/li><li>What biases are evident in these
cases?<\/li><li>How does evidence shape our understanding of accountability in AI systems?<\/li><\/ul><\/div><\/div><\/div><\/div><\/div><h2 class=\"wp-block-heading\">Related Materials<\/h2><div data-height=\"auto\">\n\t\t\t<p>\n\t\t\t\t<strong>\n\t\t\t\t\t<a href=\"https:\/\/www.kialo-edu.com\/p\/182f1014-0826-4268-b18e-e1a425faa06b\/272605\" referrerpolicy=\"unsafe-url\" rel=\"nofollow\">If an AI system makes a mistake, who is responsible: the programmer, the user, or the AI itself?<\/a>\n\t\t\t\t<\/strong> &mdash; <a href=\"https:\/\/www.kialo-edu.com\" referrerpolicy=\"unsafe-url\">kialo-edu.com<\/a>\n\t\t\t<\/p>\n\t\t\t<script async=\"\" src=\"https:\/\/www.kialo-edu.com\/assets\/static\/js\/embedded-kialo.min.js\" charset=\"utf-8\"><\/script>\n\t\t<\/div><div class=\"embedpress-gutenberg-wrapper source-provider-GoogleDocs aligncenter clear   ep-content-protection-disabled inline\" id=\"874172ed-3919-499a-b62a-cf83930a3c11\" data-embed-type=\"GoogleDocs \">\n            <div class=\"wp-block-embed__wrapper \">\n                <div id=\"ep-gutenberg-content-aaa089d2173515f8805a6f2470db59de\" class=\"ep-gutenberg-content\">\n                    <div>\n                        <div class=\"ep-embed-content-wraper preset-default insta-grid ep-google-photos-carousel\">\n\n                            <div class=\"ose-google-docs ose-uid-5b0b0ebd6b1391578ec2a17b41127074 ose-embedpress-responsive\" style=\"width:600px; height:600px; max-height:600px; max-width:100%; display:inline-block;\"><iframe loading=\"lazy\" allowfullscreen=\"true\" src=\"https:\/\/docs.google.com\/presentation\/d\/e\/2PACX-1vQWiy1dhDIZ5FB5PLD_R-mk9V3AmaOjdKJUMWkqB0AksKdPkk7QA3ZyaNj2VwoS0tdMYbh_w3pIY-SA\/embed?start=false&amp;loop=false&amp;delayms=3000\" frameborder=\"0\" width=\"600\" height=\"600\" mozallowfullscreen=\"true\" webkitallowfullscreen=\"true\" title=\"AI, Knowledge, and Accountability, Lesson 2\"><\/iframe><\/div>                        <\/div>\n\n                                
            <\/div>\n                <\/div>\n            <\/div>\n        <\/div>\n","protected":false},"excerpt":{"rendered":"<p>Lesson 2: Fact-Finding ExerciseFocus: Who is responsible for real-world AI failures?Suggested length: 1 hourLearning objectives: Critical Thinking Concepts TOK Concepts Reflection Questions Confronting Biases and Assumptions: Detect biased or misleading reporting in articles and videos. Exploring Contexts and Expert Opinions: Analyze how various stakeholders might shift blame or responsibility. Responsiveness and Flexibility of Thought: Refine [&hellip;]<\/p>\n","protected":false},"author":33,"featured_media":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"_acf_changed":true,"wds_primary_doc_category":0,"wds_primary_doc_tag":0,"footnotes":""},"doc_category":[26],"doc_tag":[],"class_list":["post-250","docs","type-docs","status-publish","hentry","doc_category-knowledge-tech"],"acf":[],"year_month":"2026-05","word_count":182,"total_views":"14","reactions":{"happy":"1","normal":"0","sad":"0"},"author_info":{"name":"Louise","author_nicename":"louise","author_url":"https:\/\/resources.kialo-edu.com\/en\/author\/louise\/"},"doc_category_info":[{"term_name":"Knowledge and 
Technology","term_url":"https:\/\/resources.kialo-edu.com\/en\/docs-category\/knowledge-tech\/"}],"doc_tag_info":[],"knowledge_base_info":[],"knowledge_base_slug":[],"_links":{"self":[{"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/docs\/250","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/docs"}],"about":[{"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/types\/docs"}],"author":[{"embeddable":true,"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/users\/33"}],"replies":[{"embeddable":true,"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/comments?post=250"}],"version-history":[{"count":0,"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/docs\/250\/revisions"}],"wp:attachment":[{"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/media?parent=250"}],"wp:term":[{"taxonomy":"doc_category","embeddable":true,"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/doc_category?post=250"},{"taxonomy":"doc_tag","embeddable":true,"href":"https:\/\/resources.kialo-edu.com\/en\/wp-json\/wp\/v2\/doc_tag?post=250"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}