Artificial intelligence is rampant in schools, but education professionals aren’t teaching students the importance of ethical use. The problem is that it’s a complex topic, and educators simply don’t have the time or resources to tackle it.
Where do they get the relevant materials? How do they fit the topic into their existing lesson plans? Here’s how questions like these are holding AI ethics classes back, and what decision-makers are doing to fix the problem.
AI ethics is a set of principles that govern the development, deployment and utilization of algorithm-based tools. They are meant to guide stakeholders, minimizing the damage such technologies can do to the environment, society or the economy. On a smaller scale, they focus on preventing bias, discrimination, misinformation and abuse.
Teaching AI ethics in school may seem unnecessary, but its importance cannot be overstated. AI has already become a staple in many industries. According to a McKinsey & Company report,
Avoiding AI in the classroom because it is new needlessly repeats history. It echoes the years when students were told they wouldn’t always have a calculator in their back pocket, even as smartphones were already putting one there.
This technology’s popularity is growing in both personal and professional domains. Students must be prepared to use it responsibly when entering the workforce to ensure better outcomes for their industry and society.
AI is powerful. Advanced algorithms can analyze massive datasets exponentially faster than humans. They excel at pattern recognition, trend prediction and content creation. Whether educators like it or not, AI will likely change the world. If they prepare their students accordingly, they can ensure that change is positive.
Typical ethical risks students face when using AI include technology abuse, harassment and plagiarism. Plagiarism is among the most common because generative tools make academic dishonesty easy to get away with. Google Trends reveals that searches for “AI essay writing”
Unfortunately, plagiarism isn’t the most severe offense. Lately, university, high school and even middle school students have been caught using generative tools to take part in deepfake-related sex crimes. In South Korea, one journalist
Even if kids aren’t using regular photos of their classmates to create explicit material, AI use still raises questions of consent. Consent ties back to data privacy and security principles: anything end users enter is no longer private. The platform or service provider may use it to train their models or enhance their marketing strategies.
Many of today’s students view generative and machine learning models as entertaining, harmless tools — even when they are actively harming others. Teachers have a unique opportunity to educate them early on about the consequences of such actions, stopping unacceptable behavior like this at the source.
School board standards, classroom policies and state laws vary dramatically, so AI adoption has been erratic at best. This split between those who embrace it and those who don’t has stalled standardization. However, some are still taking a chance on ethics courses.
While various academics and industry experts argue ethics discussions should start in elementary school, relatively few institutions even use algorithm-based tools that early. There are virtually no public resources available for grade school administrators.
Many middle and high schools aren’t discussing the intricacies of ethical AI. Instead, they are cracking down on AI use. Studies show that the accuracy of AI detection tools
Promisingly, some decision-makers have seen the need for AI ethics. For instance, California Governor Gavin Newsom
Not all educators receive playbooks from experts. Some, like Jeff Simon, a mathematics teacher at Sage Creek High School, are navigating this new field on their own. Simon said he is
Most AI ethics classes are offered at higher education institutions because working those discussions into data science or computer engineering courses is easier. At the very least, many have developed policies for generative AI use.
AI is not just present in education — it is thriving. For reference, in 2024,
Time is one of the main constraints — educators face the impossible task of doing more with less. They already
The field of AI is complex and rapidly evolving. Even with district-wide support, educational materials may quickly become outdated. Many schools still use decades-old textbooks to teach, so any solution would require extensive retrofitting.
Moreover, how do they send home ethics homework? What would tests look like? How would they assign fair grades?
Educators may be willing to embrace the concept of ethical AI classes if given the resources, but there’s no clear framework on how that could be done. Should they begin teaching it in elementary school? Will it replace a class, be offered as an elective or be integrated into the existing curriculum? There’s no one-size-fits-all solution.
Many teachers have adopted “not in my classroom” policies in response to these concerns. As a result, students likely won’t learn how to use these tools the right way. Many will take whatever inaccurate, unregulated knowledge they cobble together from peers, parents and the internet as the truth — which may have far-reaching impacts.
There are several ways education professionals can address the barriers keeping AI ethics out of schools.
Integrating AI ethics into the classroom can mean creating a new elective or squeezing it into the end of a period. Placing it where the topic naturally arises works best. For example, when students are assigned an eight-page essay, they can hear about responsible generative AI use.
Teachers and administrators should adapt these new courses to students’ grade, technical expertise and knowledge levels to make the subject matter age-appropriate and engaging. Assignments should be relatively open-ended but follow a clear ethical framework.
Typically, conversations about ethical AI focus on data privacy and output bias, centering on the people who develop, deploy or manage these tools. Students are end users who interact with the interface rather than the backend, so the classroom approach must differ slightly.
Core principles should be accountability, value alignment, explainability, fairness and integrity. Pupils should learn critical thinking to combat bias, reasoning to prevent overreliance and objective analysis to mitigate the spread of misinformation.
If state law or school board policies make teaching the ethics of AI in the classroom difficult, educators should take advantage of workarounds. Modeling responsible, honorable behavior goes a long way. Alternatively, they can invite a guest speaker.
Facilitating discussions, debates and group work in class lets high school teachers and college professors act as moderators. Posing questions about data de-identification, copyright infringement and summarization as they relate to AI can initiate productive conversations.
Digitalization is accelerating, and AI may soon be part of every industry in the world. If educators modernize their lessons accordingly, they can ensure today’s youth have a head start once they reach the workforce.