We live in a world fundamentally transformed by our own creations. Once imagined only in science fiction, artificial intelligence now powers much of the technology we interact with every day—from smart home devices to cognitive assistants to media recommenders. Though often subtle by design, AI's impact is far-reaching.
The field of education is no less affected by these technologies. AI shows up in instructional chatbots, personalized learning systems and administrative tools. Continuing on this trajectory, there is likely to be no field or industry untouched by AI before long. And with this change comes a host of new questions—concerns about the ethical design and implementation of these new tools.
In K-12 education, a focus on ethical considerations is of critical importance. Many teachers and education leaders select and use AI-powered tools despite little background in computer science or artificial intelligence. Tools like Turnitin that check for plagiarism, intelligent tutoring platforms like Khan Academy or iReady that automate or personalize instruction, and voice assistants like Alexa that answer student questions are all vulnerable to algorithmic biases in development and inequitable outcomes in implementation. Moreover, since effective AI solutions require large amounts of information, maintaining student data privacy is an ongoing challenge.
Furthermore, educators are not the only ones using AI technologies. Students, as consumers and users of AI tools themselves, need a foundational education on what AI is and how it works. Ethical AI education must start with ensuring equitable access to this learning for all students—across subject areas, grade levels and demographic backgrounds. Then, this education must go beyond simple explanations of how the technology works to include the corresponding ethical questions and impacts on society.
This need is highlighted in the Digital Citizen standard of the ISTE Standards for Students, which asks that “students engage in positive, safe, legal and ethical behavior when using technology.” In light of research and news stories outlining negative impacts of AI technologies, students need this education to make positive and ethical decisions about using—and someday possibly developing—AI-powered technologies like facial recognition, social media platforms and cognitive assistants. As creators of our shared future, today’s students must consider real-world examples of ethical dilemmas and imagine pathways to better outcomes.
AI Explorations and Their Practical Use in School Environments—an ISTE initiative funded by General Motors—aims to support educators and students in doing just this.
Through professional learning opportunities for educators, the program is designed to address inequities for traditionally underrepresented populations in STEM fields and prepare today’s students for tomorrow’s AI careers. So far, over one thousand educators and education leaders have participated in the program’s online courses, webinars and professional learning network.
In 2020, the AI Explorations program released a four-volume series of guides for elementary, secondary, elective and computer science teachers—Hands-On AI Projects for the Classroom. Available for free in English, Spanish and Arabic, these guides provide background resources, scaffolded interactive activities and related extensions that can be used by teachers across grades and content areas to teach about the development, application and impact of AI technologies.
This year, ISTE has added a new volume to the series—Hands-On AI Projects for the Classroom: A Guide on Ethics and AI. While the original guides addressed some aspects of bias and societal impacts throughout each of the projects, this addition provides a strategic examination of AI through developmentally appropriate ethical lenses across K-12. In the past, teachers have often addressed ethical questions in the classroom through character-based civic education, but the nature of today's technologies prompts us to consider much more than our decision-making processes. In fact, since AI-powered tools often sway our personal decisions through recommendations and nudges in ways that we don't even realize, our own behaviors depend on the ethical design and development of AI tools.
The ethics and AI guide supports teachers in engaging elementary students with questions of fairness, autonomy and the nature of good and bad technology use. Similarly, the guide supports secondary teachers as they dig deeper to explore ethical lenses, gray areas, diverse stakeholders, accountability and even policymaking around AI. The guide does not provide ethical answers, nor does it ask teachers to instill their own ethical frameworks or values. Instead, the four included projects teach students to ponder ethical questions and weigh various outcomes—skills they can take with them throughout their lives.
Mark Gerl, a technology teacher at The Galloway School and a participant in the AI Explorations program, has put a lot of thought into the ethical implications of using and teaching about AI. While collaborating with the guide authors to develop two projects, Gerl observed, “The more I think about it, all of technology has been a series of trade-offs. A sword is better than a pointy stick, but you have to be stronger to lift it, and it requires forging, sharpening, cleaning, etc. Too often, we just see the benefit but rarely stop to think of what we are either giving up or passing over when we make those choices, especially in technology fields.” He sees the examination of ethical questions and societal impacts as a crucial part of any technology instruction.
The guide intentionally provides supporting resources for educators and a variety of activities and discussion questions to foster deeper inquiry and understanding. For example, the concept of technology trade-offs is woven throughout the ethics and AI projects, encouraging students to consider the privacy, freedoms or civil rights that might be sacrificed in the name of efficiency, personalization or convenience. Students also examine relevant real-world examples—like the impact of recommender systems on reinforcing stereotypes or the effects of AI automation on jobs—through virtual simulations, videos, experimentation and other engaging activities.
Of course, teachers and students are not expected to become ethicists simply by using this guide or teaching any single project or unit on AI and ethical questions. Nevertheless, the AI Explorations team believes that the more often teachers and students discuss these issues, the better we will all become at it. We all have a shared responsibility to ensure that AI is used and taught about in ethical and equitable ways, in education and beyond. This guide is one more tool to help us accomplish that goal.