Overview

Plain Language

AI Can Create Plain-Language and Easy-Read Versions of Every Written Document on Earth

Summary | Plain language is an important part of disability rights. The tools we have today are almost able to create plain-language versions of every piece of writing. These tools will be able to do more than just translate words that have already been written. They will be able to answer new questions from readers with disabilities as soon as they ask. It is important that people with disabilities help design and test this technology.

By Rylin Rodgers

In my work at the intersection of disability policy, technology, and equitable access to information, one theme comes up repeatedly: plain language isn’t a “nice to have” — it is a fundamental rights issue.

Whether we are talking about government benefits, workplace accommodations, healthcare instructions, court notices, or voting rights, access to information determines access to participation. Yet plain language is too often an afterthought.

We are living through an extraordinary moment. Emerging AI tools — used responsibly, with guardrails and human expertise — can meaningfully expand (though not replace) the plain-language practices disability communities have built over decades. In conversations with disabled people*, especially lifelong plain language users with intellectual and other developmental disabilities (IDD), I increasingly hear examples where AI is already helping people understand, question, and navigate information on their own terms.

We are on the verge of a transformational possibility: to create plain-language and easy-read versions of every written document on the planet — now, not someday. And we can do it with the tools that already exist: Microsoft Copilot, Immersive Reader, Narrator, text-to-speech, and other built-in accessibility features.

A pair of hands rest on a laptop computer keyboard.

The path forward to accelerate access to information honors the expertise behind human-created plain language, while also exploring how dynamic, user-controlled AI interactions can expand access far beyond what static plain language alone has achieved.

As AI accelerates accessibility, it will be critical for disabled people to be part of co-designing at the product level and to have the opportunity to learn how to use these tools at the individual level to achieve the outcomes they want. At Microsoft, this means ongoing partnerships with disabled people and disability-led organizations, along with efforts to expand training opportunities for all.

Laws Are the Floor — Not the Ceiling

In the United States, the Plain Writing Act of 2010 (Public Law 111–274) requires federal agencies to communicate in ways the public can understand and use. It establishes a baseline expectation — clear forms, clear public documents, and accessible federal websites. Enforcement has been uneven, but the legal standard is clear: plain language is an accessibility requirement, not a stylistic preference.

Globally, requirements are becoming even stronger. Under the European Accessibility Act (EAA), which reached full enforcement in 2025, product instructions and documentation must be understandable.

Some EU member states have gone even further, adopting CEFR B2 readability levels for consumer materials and public-facing communication. In 2024, the European Disability Forum (EDF) pushed this expectation into the political sphere, requesting that the EU President’s State of the Union address include an easy-to-read summary and accessible formats. The message from both U.S. and global policy is unmistakable: governments recognize that inaccessible information excludes people and creates inequity. But these legal requirements represent only the floor, not the ceiling.

It is one thing to aim for a target reading level. It is another to recognize that reading level alone cannot meet every person’s needs. Like everyone else, plain language users have a wide range of interests and information needs. A static, plain-language document cannot adapt when a reader gets stuck, confused, curious, or overwhelmed. It cannot personalize. It cannot adjust for cultural context, language background, cognitive load, trauma, or learning differences. AI’s ability to individualize content increases accessibility.

Building on Static Plain Language

For decades, the disability community has led the work of building, modeling, and teaching plain language. Self-advocates, family leaders, researchers, and communication specialists have produced the frameworks that the rest of the world is only now beginning to recognize as essential. Organizations like AUCD, with its Plain Language Style Guide, and ASAN, with its Guide to Making Easy Read Resources, have pushed this work forward with clear, disability-led principles about what makes information understandable, respectful, and truly accessible. These community-created standards are not just writing rules — they are values statements about autonomy, dignity, and the right to know.

This powerful foundation has shaped policy, raised the standards for accessibility, and helped entire systems begin to understand comprehension as a civil right. But it has also created something else: a desire for more. When people experience accessible information, they recognize how much more they need and deserve.

This is where AI provides an opportunity to move beyond the limits of static plain language.

Traditional plain-language documents, even when created with great skill and deep respect for disability culture, remain static. They offer one simplified version intended to support most readers. But “most readers” has never meant all readers — especially people with disabilities, nonnative English speakers, people with limited literacy, or people who process information differently. Static plain language provides a single explanation that may not match the reader’s background and cannot anticipate every question. It does not offer multiple ways to understand a concept. It cannot personalize for cognitive load or emotional context, cannot respond in real time when someone gets stuck, and may still feel confusing or overwhelming, even when beautifully designed.

Advocacy organizations — including self-advocacy groups, DD councils, and disability justice collectives — have spent years teaching the world that information must be understandable to be usable, raising expectations for governments, schools, healthcare systems, and employers. And because of this dedicated foundation, people now rightfully expect clarity, support, and access.

Over the past few years, disabled people have told me something essential: plain language helps, but it doesn’t always go far enough. Sometimes people want to ask questions privately. Sometimes they need an example tied to their own life. Sometimes the explanation makes sense in the moment but falls apart under stress or new information. And often, people simply want another way to understand something — a diagram, a list, a metaphor, or a slower pace.

This does not imply that the foundation fell short; rather, it succeeded by helping people understand what they truly deserve. And now, AI can help meet that need. AI shifts the experience from one-directional reading to interactive understanding. Instead of receiving a single, simplified version, people can explore, question, and personalize information in ways that static formats simply cannot.

The disability community’s decades of advocacy created the blueprint. AI — used responsibly, with disabled people shaping it — can extend that blueprint into a world where information bends toward the learner, not the other way around.

What Is Possible: A Layered, Interactive Experience

When someone uses an AI tool like Microsoft Copilot to read a complex document, they can ask a follow-up question in their own words, request analogies or concrete examples, simplify content to a specific grade level, and translate information into another language. They can break down long paragraphs, ask unlimited clarifying questions, and request visual diagrams or step-by-step breakdowns.

Example: Understanding Benefits

A person reading a government benefits guide might ask:

“What does ‘means-tested’ mean?”

“Give me an example that’s easy to understand.”

“Now explain how this affects SSI.”

This conversational structure resembles working with a skilled human support person — except the reader controls the pace, the sequencing, the privacy, and the depth. It is an opportunity for person-driven, individualized accessibility that extends access for readers who need support beyond a single, fixed version.

The Opportunity for the Disability Community to Shape Responsible AI Use

Across the disability community, there has been thoughtful debate about how generative AI should—and should not—be used. Some organizations have urged caution or even argued that these tools should not be used at all for accessible communication. I understand these concerns deeply. AI can be inaccurate, biased, or harmful when used carelessly, and in disability-centered communication, precision and respect matter.

But I also see an enormous opportunity that will be lost if we only treat AI as something to be feared. Rejecting AI outright risks missing several critical opportunities:

  • Teaching disabled communities how to use AI safely and effectively
  • Shaping how AI models handle disability content and accessibility principles
  • Demanding transparency, auditing, and accountability from AI developers
  • Increasing access right now while continuing to fight for better systems
  • Building confidence, literacy, and technological power—not fear

Disabled advocates have done the work — long before the tech sector — to develop guidelines, test with real users, and insist that clarity is a civil right. The recent AUCD Plain Language Style Guide is just one example of this leadership. It offers disability-led guidance on sentence structure, formatting, tone, examples, cognitive load, and respectful language. It reminds us that plain language is not merely about reading level; it is about values: autonomy, dignity, access, and the right to understand. Because of this work, people now expect more. They know what it feels like when information meets them where they are, and they want that experience everywhere—in schools, courts, hospitals, workplaces, and government programs.

The disability community deserves not only access to AI tools but the ability to shape them, challenge them, and demand more from them.

What Responsible AI Use Looks Like

Responsible use of AI in plain language work does not mean allowing AI to generate content without oversight. It means treating AI as a supportive tool—one that operates with humans firmly in control, within transparent guardrails and grounded in disability-centered design. When used well, AI can assist in ways that honor the principles the disability community has spent decades developing. The goal is not to surrender authorship to technology, but to harness technology in the service of clearer communication, broader access, and deeper understanding.

Clarity Is a Civil Right

A core element of responsible use is ensuring that AI understands who the intended audience is. This reflects a foundational value in disability-led communication work: accessible information must start with the needs of the reader, not the preferences of the writer. By giving AI prompts that specify a reading level, an audience’s familiarity with a topic, or the need for simpler sentence structures and everyday vocabulary, we set the stage for more accurate and accessible output. These kinds of instructions give AI the scaffolding it needs to support accessibility rather than unintentionally undermining it.

Another essential practice is breaking the task into manageable steps. Disability advocates have long taught that comprehension is achieved through layering: first offering a summary, then simplifying, then defining specific terms, and finally checking for understanding. AI should follow that same structure.

Asking AI to summarize a paragraph before rewriting it in plain language, or to list and define any unfamiliar concepts in simple words, supports clarity and allows people using the tool to maintain control over each phase of the process. This step-by-step method mirrors how a skilled human editor would guide someone through creating accessible content.

Responsible use also recognizes that AI is just one part of a larger accessibility ecosystem. When AI-generated plain language is paired with tools like Narrator, Read Aloud, or Immersive Reader, information becomes truly multimodal—visual, auditory, simplified, translated, and interactive. This allows people to adjust text spacing, highlight important lines, hear content read aloud, or translate it into another language. In practice, it means information can flex around a person’s needs, strengths, and preferences. This is disability-centered design in action: technology supporting multiple pathways to understanding.

Even with these tools, human review remains nonnegotiable. Plain language specialists, disability advocates, and content experts must look closely at AI drafts, especially when the material involves rights, benefits, legal issues, sensitive personal information, or disability culture and terminology.

Humans must decide what is accurate, respectful, and appropriate. The final responsibility always lies with people, not machines.

Responsible use also means building skills and confidence rather than dependence. Training disabled communities in how to design effective prompts, check AI’s work, and integrate accessibility features empowers people. It allows them to ask questions privately, move through information at their own pace, personalize explanations, revisit concepts without stigma, and explore complex topics independently.

The opportunity before us is the chance to develop disability-led AI literacy. The disability community has always been a leader in accessible communication. The movement for plain language began here. Easy Read was pioneered here. Screen readers, captioning, text-to-speech, and alt text standards were adopted early and championed by disabled innovators long before mainstream technology caught up.

Now, a new frontier is emerging: disabled people shaping how AI is used, how AI is trained, and how AI understands disability. This includes teaching communities to use AI intentionally, ensuring disabled voices influence how AI systems are evaluated and improved, shaping how AI models interpret disability language, and building expectations for accuracy, respect, and equity into every stage of development. When disabled people engage directly and critically with AI — not as passive users but as designers, question-askers, testers, and standard-setters — we do more than adopt a new tool. We define what responsible use looks like for everyone.

AI is neither a threat nor a solution on its own. It is a powerful tool. And when disabled people lead its responsible use, AI becomes a tool for expanding access, understanding, autonomy, and civil rights.

Layering AI With Digital Accessibility Tools

AI does not replace digital accessibility — it enhances it.

When AI-generated plain language is combined with built-in accessibility tools, information becomes multimodal and more inclusive.

  • Narrator reads AI-simplified text aloud for people who benefit from auditory processing.
  • Text-to-Speech / Read Aloud converts simplified or easy-read content into audio instantly.
  • Immersive Reader adds adjustable fonts, spacing, a focus mode, a picture dictionary, and translation into dozens of languages.

The Result:

Text + simplified text + visuals + audio + translation = unblocked access to information.

This layering is the true power of AI combined with accessibility tools.

Toward a World Where All Information Is Understandable

Historically, producing high-quality plain language or easy-read materials required significant time, specialized training, adequate funding, thoughtful design, thorough human review, extensive testing, and multiple rounds of iterative edits.

These practices remain important—human involvement is still essential. However, with AI, generating a first draft—or even fifty drafts—is no longer limited by cost or time constraints. AI makes it possible to scale up production, provide instant customization, translate content in real time, personalize the level of detail, rapidly iterate, and continuously update materials.

For the first time, we can realistically envision a world where every policy document is accompanied by a plain-language version, every school sends out easy-read notices, every voter guide is available in simplified and audio formats, every hospital provides easy-to-understand discharge instructions, every workplace offers materials at multiple reading levels, and every court form comes with interactive assistance. This future is not far off—it is possible right now.

An Unprecedented Opportunity

We have an opportunity to achieve something unprecedented: a world where everyone can understand the information they need to live, work, vote, learn, and make decisions. Laws on plain language provide the framework, accessibility tools offer the necessary mechanisms, and AI enables us to scale these efforts. This moment is not about replacing human expertise, but about expanding its reach into every home, document, story, and decision. The question is not whether we should use AI in plain language, but rather how we can use AI to empower disabled people by strengthening their power, autonomy, and access. I hope we can continue this important conversation together.

Editor’s Note: This is an easy-read version of the original article, generated by the author using artificial intelligence.

Easy Read: Using AI to Make Writing Clear for Everyone

Who is this for?
  • People who want simple and clear writing.
  • People who like short sentences.
  • Disabled people, including people with intellectual and other developmental disabilities.
  • People who are learning English or who do not read big words.
Key messages
  • Everyone has the right to understand information.
  • AI can help make writing easier to read.
  • AI should not replace real people.
  • Laws say writing must be clear, but we can do even better.
Why this matters

When words are too hard, people may miss out on things they need. This can include:

  • Services
  • Money or benefits
  • Jobs
  • Voting
  • Health care

Clear information helps people make choices and have control in their lives.

Plain language laws

United States: A law says the government must write in clear and useful ways.

European Union: A law says information must be easy to read, understand, and use.

These laws are the starting point. We should try to make writing even clearer for each person.

Plain language vs. AI help
  • A plain-language document is helpful, but it cannot answer new questions. It stays the same.
  • AI lets you ask questions in your own words. You can get answers that are simpler and fit your needs.
What AI can do

AI can help by:

  • Explaining big or hard words
  • Giving examples
  • Summarizing long writing
  • Translating into other languages
  • Showing steps one at a time
  • Making simple pictures or diagrams

Example: Understanding benefits

If a benefits guide uses a hard word like “means-tested,” you can ask AI:

  • “What does ‘means-tested’ mean?”
  • “Can you give me an example?”
  • “How does this relate to SSI?”

AI can answer each question in simple words.

You can keep asking until you feel sure you understand.

How to use Microsoft Copilot today

You can use Copilot in Microsoft 365, Edge, Teams, or on the web.

Here are some easy prompts you can give it:

  • “Rewrite this to a 6th-grade level with short sentences.”
  • “Summarize this paragraph. Then make the summary simple.”
  • “List hard words and explain what they mean in simple language.”
  • “Make an Easy Read version with very short sentences and clear headings.”

Tip: Save prompts you like so you can use them again.

Use AI with accessibility tools

AI works even better when used with tools like:

Narrator: reads text out loud

Read Aloud / Text-to-Speech: changes text into audio

Immersive Reader: changes font, spacing, colors, and languages

These tools help people by giving:

  • Text
  • Simple text
  • Audio
  • Visuals
  • Translation
Responsible use (keeping people in control)
  • People should review text made by AI, especially for rights, money, health, or legal topics.
  • Use kind and respectful words.
  • Watch out for bias.
  • Use clear headings and steps.
  • Test writing with disabled people when you can.
Words to know (glossary)

Plain language: writing that is easy to read and understand.

Easy Read: very simple writing with short sentences and clear layout.

Accessibility: making things usable for everyone.

Means-tested: a rule that checks your money to see if you qualify.

SSI: a U.S. benefit for people with low income or few resources.

Your next steps
  1. Pick something you want to understand.
  2. Ask Copilot to rewrite it using simple words.
  3. Use Immersive Reader or Read Aloud to listen to the text.
  4. Ask more questions until you feel sure you get it.
  5. Share feedback with the writer to make future versions better.
Power and choices

Plain language and Easy Read help people have power and make choices.

AI can help make writing clear for everyone.

With people reviewing the work and using accessibility tools, we can make every document easier to understand — starting now.

Author

Rylin Rodgers is director of disability policy at Microsoft in Washington, D.C. rylinrodgers@microsoft.com

Editor’s note: Impact generally uses person-first language, but acknowledges authors’ preference for identity-first language.