Keeping humanity at the center: Accessibility and artificial intelligence in education
In a classroom in upstate New York, a special education teacher pulls up a set of Tobii eye-tracking devices, each calibrated to a student’s unique gaze patterns. Around the room, quiet concentration fills the air as students with cerebral palsy and autism prepare to begin their writing activity. Instead of pencil and paper, they use AI-enhanced augmentative and alternative communication (AAC) tools. With each blink or glance, the software anticipates their next word, using context-based prediction to complete phrases, ask questions, and express ideas that might have once taken minutes to form.
What was once a painstaking process of letter-by-letter selection has transformed into a rhythm of conversation. The AI learns from each interaction, adapting to tone, vocabulary, and context. For the first time, some students are holding entire classroom discussions through eye gaze or single-key input, responding to questions, debating ideas, and even telling jokes.
The teacher, moving between stations, is not focused on grading or data entry. She is focused on connection and ensuring every student has a voice that can be heard.
Across the country, in Alaska, a speech-language pathologist begins a small group session using a very different set of tools. On her screen are tabs for ChatGPT, Perplexity, NotebookLM, and SchoolAI. Each platform serves a distinct purpose: One helps generate conversation prompts, another curates vocabulary lists, and another simulates realistic dialogue for students to practice. These tools are woven into her speech-language pathology sessions to help students meet communication goals, explore vocabulary, and engage in structured conversations that feel personal and relevant.
For students who once struggled to find the right words or sustain interest, these AI-powered sessions have opened new possibilities. The technology does not replace the educator’s expertise; instead, it amplifies it, allowing her to tailor practice to each learner’s needs while keeping engagement high.
How teachers are using AI for students with disabilities
These stories are not isolated. Throughout the United States, educators are experimenting with new ways to use artificial intelligence to meet the needs of diverse learners. According to a recent report from the Center for Democracy and Technology, nearly 6 in 10 teachers of students with disabilities said they used AI to help develop individualized education and accessibility plans for their students during the 2024–2025 school year.
In the United States, students with disabilities have a legal right to a free and appropriate public education. Two federal laws, the Individuals with Disabilities Education Act and Section 504 of the Rehabilitation Act, entitle students with clinically diagnosed disabilities to customized or individualized plans. These plans outline how schools will support each student through specialized instruction, services, and accommodations, such as extended time or assistive technology, so that they can access learning on an equal basis with their peers. Students with these individualized plans are more likely than their peers to have back-and-forth conversations with AI tools: according to the Center for Democracy and Technology, 73 percent of students with disabilities reported having used AI in this way, compared to 63 percent of students without an individualized plan for a documented disability.
Balancing opportunities with risks
While AI technology holds tremendous educational promise for students with disabilities, students, schools, and parents should closely consider several risk factors, including impacts on social connections and skills, academic ability, and data privacy.
More than 40 percent of students say that they or a friend have turned to AI for mental health support, a troubling trend that raises concerns about this technology. Sixty-four percent of all students believe AI weakens important skills students need to learn, such as writing, reading comprehension, and conducting research, and 57 percent of students with an individualized plan said that using AI in class "makes me feel as though I am less connected to my teacher."
Additionally, if AI is not used with a clearly identified purpose, appropriate rules, and protections, students with disabilities are more likely to have sensitive medical and biometric data (such as the eye-tracking data described above) exposed to AI systems, putting student privacy at risk. In the United States, students with disabilities are suspended and expelled from school at much higher rates than their peers, and they make up 65–75 percent of children in the juvenile justice system. Data like this can amplify biases in AI algorithms and may lead to students with disabilities being unfairly labeled as potentially troublesome or problematic, an issue that is compounded when some school systems share student data with law enforcement agencies. School systems are notoriously vulnerable to ransomware attacks and data breaches, and these student records may contain highly sensitive medical information and other data that is protected by law for good reason.
All of this underscores the need for human-centered, accessible, and transparent AI design that is developmentally and age-appropriate, with all necessary guardrails in place to keep our students safe.
Empowering teachers: NEA’s resources and tools
In 2024, the National Education Association (NEA) Task Force on Artificial Intelligence in Education released a report on the responsible use of AI in schools, addressing how AI’s potential and risks uniquely affect students and educators with disabilities, especially when accessibility, privacy, and bias are not prioritized. At its core, NEA’s proposed policy statement asserts that humans must remain central to teaching and learning.
In addition, NEA has developed practical guidance to help educators evaluate AI tools through an equity and accessibility lens. This guidance supports educators in determining whether AI technologies are inclusive of all students, especially those with disabilities, and ensures that all tools used in classrooms align with the principles of accessibility, fairness, and human-centered learning. The development of this resource began with an honest assessment of the current landscape. The goal was to create something simple, digestible, and useful for every educator, regardless of their experience with AI. Too often, accessibility is reduced to compliance checklists. NEA has reframed it as a fundamental part of inclusive teaching.
The following principles can help ensure that technology serves all learners.
1. Leadership by those most impacted
True accessibility can only be achieved with the guidance, input, and support of people with disabilities. Disabled students, educators, and families should play a leading role in designing, developing, and evaluating AI tools. Their leadership ensures that technology advances inclusion and autonomy rather than surveillance or efficiency alone.
2. Account for different levels of AI literacy
Educators’ familiarity with AI varies widely. Some are eager early adopters, while others are skeptical or have limited access to technology. Effective resources should be clear, scaffolded, and grounded in ethical use and accessibility, ensuring that everyone can participate meaningfully.
3. Keep humanity at the center
AI can never replace authentic, human-to-human relationships. NEA stresses that educators, not algorithms, must guide how technology is used. AI should enhance communication, creativity, and differentiation in classrooms, not undermine them.
4. Prioritize data protection and bias mitigation
While AI can save time and foster innovation, it can also reproduce bias, misidentify students of color or those with disabilities, and create unsafe data practices. NEA calls for transparent, ethical systems designed with strong data protections and diverse representation in decision-making at every level.
5. Include educators, students, and families as experts
AI policy must be informed by those closest to teaching and learning. Including educators, students, and families in decision-making builds shared understanding and greater transparency, and ensures that AI serves real classroom needs.
The message is clear and universal: Technology must support (not supplant) effective teaching and learning and serve a clear purpose set by education professionals. By centering healthy human relationships, safety, well-being, and education equity, it is possible to have a future where AI supports a high-quality education that allows students of all abilities to reach their highest potential.
Disability justice reminds us that true inclusion means recognizing people’s many overlapping identities and experiences and ensuring everyone can participate fully. It also means understanding that we all rely on one another. The individuals most affected by discrimination and barriers should be the ones guiding how technology is created and used.
Applying these principles to AI means valuing all forms of learning, communication, and expression. It means building systems that meet people where they are and viewing accessibility as a shared act of care that benefits everyone.
To see NEA’s suite of resources on AI in education, visit nea.org/ai.
The opinions expressed in this blog are those of the author and do not necessarily reflect any official policies or positions of Education International.