🔎 Accessible Usability Scale (AUS): Prioritizing Inclusive Usability

The Accessible Usability Scale (AUS) is a focused, accessibility-centered usability metric designed specifically for users of assistive technologies. Developed by Fable and launched in 2020, AUS builds on the foundation of SUS, adapting it to capture how inclusive and accessible digital products really are from the perspective of people with disabilities.

1️⃣ Collecting Feedback from Assistive Technology Users
Collect responses from users who rely on assistive tech like screen readers, screen magnifiers, voice control, or switch devices. The questionnaire includes 10 statements, each rated on a 5-point Likert scale from "Strongly Disagree" (1) to "Strongly Agree" (5). Statements focus on frustration, navigation, clarity, and task completion with assistive tools.

📚 For a deeper dive into the questionnaire, you can explore the official AUS resource page provided by Fable.

2️⃣ Calculation
Scoring AUS is nearly identical to SUS:
• For positive items (1, 3, 5, 7, 9): (response − 1) × 2.5
• For negative items (2, 4, 6, 8, 10): (5 − response) × 2.5
• Total score = sum of all 10 items → range: 0–100

🎯 Score meaning: a higher score means better perceived usability for assistive tech users.

3️⃣ Interpreting the Results
Across 2,100+ sessions, Fable's data puts the average AUS score at 65, with averages varying by assistive technology:
• Screen Magnifier Users → 72
• Alternative Navigation Users → 67
• Screen Reader Users → 56

🔎 Pros & Cons of Using AUS

✳️ Advantages:
• Accessibility-focused: designed specifically for AT users.
• Simple and familiar: based on SUS, so it is quick to implement and easy for participants to complete.
• Produces quantifiable scores and can be used alongside qualitative feedback for depth.
• Free and open for anyone to use. Licensed under Creative Commons Attribution 4.0 International (CC BY 4.0): it can be used, adapted, and shared, as long as you give appropriate credit to Fable.
• Complements SUS in broader usability testing.

❌ Disadvantages:
• Reflects feelings and perceived ease of use, not technical accessibility compliance.
• Most useful post-task (after completing flows or sessions); users must interact deeply with the product for results to be meaningful.
• Not as widely adopted or recognized as SUS (though it is gaining traction).

The illustration in the document is sourced from the official website.

💬 Have you tried using AUS or included AT users in your UX research? 👇 Share your thoughts below & check references in the comments.

#Accessibility #AUS #InclusiveDesign #UX #UXMetrics #AssistiveTechnology #UsabilityTesting #ProductDesign
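The AUS scoring rules described above can be sketched in a few lines of Python. This is a minimal illustration of the published formula; the function name and validation are my own additions, not part of Fable's materials:

```python
def aus_score(responses):
    """Compute an Accessible Usability Scale score from ten Likert
    responses (1 = Strongly Disagree ... 5 = Strongly Agree).

    Odd-numbered items (1, 3, 5, 7, 9) are positively worded and score
    (response - 1) * 2.5; even-numbered items are negatively worded and
    score (5 - response) * 2.5. The total ranges from 0 to 100.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses, each rated 1-5")
    total = 0.0
    for item, r in enumerate(responses, start=1):
        if item % 2 == 1:            # positive item: higher agreement is better
            total += (r - 1) * 2.5
        else:                        # negative item: reverse the scale first
            total += (5 - r) * 2.5
    return total

# A participant who strongly agrees with every positive statement and
# strongly disagrees with every negative one scores the maximum:
print(aus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A neutral participant (all 3s) lands at exactly 50, which is why raw AUS scores are read against the observed averages above rather than against the midpoint.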
User-Centered Accessibility Evaluation
Summary
User-centered accessibility evaluation is a process that puts real people with disabilities at the heart of assessing how accessible digital products and services truly are, going beyond compliance checklists to focus on practical usability. This approach relies on direct observation, feedback, and tailored persona profiles to ensure designs work in real-world contexts for those using assistive technologies.
- Observe real users: Watch how people with disabilities interact with your products in their everyday environments to uncover barriers and discover what works.
- Build accessibility personas: Create detailed user profiles that highlight the unique challenges faced by different groups, guiding designers and developers to make thoughtful, inclusive decisions from the start.
- Test with actual tools: Ensure your accessibility evaluation includes the specific assistive technology devices and software your users rely on, rather than just the ones you happen to support or prefer.
-
"Everything tested as accessible. Real users still couldn't access it."

Last week's accessibility meeting revealed a gap that exists in most of our product work. Our accessibility specialist told us his story: he'd created documents meticulously. Alt text on every image, color contrast verified, proper structure, validation tools all passing. Everything tested as 100% accessible.

🚧 Real users couldn't use them.

The issue? The assistive tech he tested with wasn't what users actually relied on. Different screen readers work differently. His "accessible" documents worked perfectly, for the wrong tools.

His response: he spent a week at a school for blind and disabled students. Not testing. Just observing. Learning which apps they used, why they chose them, what actually worked in their reality. This is what real user research looks like.

📍 MY REALIZATION
I create documents daily for every employee in our company. I follow best practices. But do I understand the consultant searching during a client call? The employee with dyslexia? The person using a screen reader? I'd been optimizing for compliance, not for humans.

This applies to everything we build:
- Features shipped because competitors have them
- Best practices followed without verifying they fit our context
- Testing in our environment, not where customers work
- Metrics that look good but don't reflect real success

📍 THE FRAMEWORK
Before starting any work, answer these:
- Who specifically? Name 3 real people who will use this. What's their context?
- With what tools? What do they actually use, not what you support?
- In what environment? Rushed office? Commuting? Client site?
- Trying to accomplish what? What's the job to be done?
- How will you observe? Where can you watch them naturally?
- What are you assuming? List assumptions you haven't validated.

The shift: replace "Does this pass tests?" with "Does this work for actual humans?"
📍 THE PRACTICAL DIFFERENCE
Before: "Alt text added ✓" After: "Observed Sarah's screen reader. Revised alt text to match her mental model."
Before: "Feature shipped ✓" After: "Watched 3 customers struggle. They need X first. Reprioritizing."
Before: "Best practice implemented ✓" After: "Tested with our users. Doesn't fit their workflow. Adapting."

📍 YOUR EXERCISE THIS WEEK
- Pick one thing you're building
- Identify who uses it
- Spend 1 hour observing them (video call works)
- Watch without intervening
- Note what surprised you

📍 THE BOTTOM LINE
Compliance ≠ Usability
Standards ≠ Solutions
Tests ≠ Understanding

You can pass every test and fail every user. User research isn't a phase. It's the foundation. Start with real people, real contexts, real tools. Everything else is assumptions.

What's one assumption you're making about your users right now? When's the last time you observed them in their actual environment?

#ProductManagement #UserResearch #Accessibility #UserExperience
-
👩🦰 Designing Accessibility Personas (https://lnkd.in/evVnB4hd). How to embed accessibility and test for it early in the design process ↓

We often assume that digital products are merely that: products. They either work or don't. They either help people meet their needs or fail them on the path to get there. But every product has its own embedded personality. It can be helpful or dull, fragile or reliable, supportive or misleading. When we design it, willingly or unwillingly, we embed our values, views and perspectives into it. Sometimes it's meticulously shaped and refined. And sometimes it's simply random; when that happens, users assign their own perception of the product's personality to it instead.

Products are rarely accessible by accident. There must be an intent that captures and drives accessibility efforts in a product. And the best way to achieve that is by involving people with temporary, situational and permanent disabilities in the design process. For that, we could recruit participants via tools like Access Works or UserTesting, ask admins of accessibility groups and channels to help, or drop an email to non-profits working in the accessibility space.

Another way is establishing accessibility personas for user journeys. Consider them user profiles that highlight common barriers faced by people with particular conditions and provide guidelines for designers and engineers on how to design and build for them. E.g. Simone, a dyslexic user, or Chris, a user with rheumatoid arthritis. For each, we document known challenges and notable considerations, design training tasks for designers and developers, and write instructions to simulate the experience through the lens of these personas. By no means does this replace proper accessibility testing, but it creates a shared understanding of what the experiences are like.

You can build on top of Gov.uk's profound research project (https://lnkd.in/evVnB4hd), which also explains how to set up devices and browsers so that each persona has its own browser profile. Once you do, you can always switch between them and simulate an experience without changing settings every single time.

All Accessibility Personas (+ Tasks, Research, Setup) https://lnkd.in/evVnB4hd

Accessibility doesn't have to be challenging if it's considered early. No digital product is neutral. Accessibility is a deliberate decision and a commitment. Not only does it help everyone; it also shows what a company believes in and values. And once you do have that commitment, it will be much easier to retain accessibility than to add it at the last minute as a crutch, when it's far too late to do it right and far too expensive to do it well.

[Useful pointers in the comments ↓]

#ux #accessibility
-
What is the EAA all about? The more I read the act, the simpler it becomes, and the more it becomes about customer experience design rather than compliance.

The first thing that is apparent is that WCAG is not the EAA and the EAA is not WCAG. Anyone who tells you that you need to be WCAG compliant to meet the EAA needs to go back and read the act. The use of WCAG is "voluntary", and the same goes for any other EN, ISO, BBC, A11yQuest or other standards, guidelines or guidance frameworks. They are all full of good and useful things, but they are only part of the answer, not the answer itself.

What the EAA does focus on is approaches and outcomes. For the approach, the POUR principles are key. Whatever you are designing, users need to be able to Perceive, Operate and Understand the product or service, regardless of any impairment or condition they may have. That calls for deliberately designed multi-modal experiences. The product or service should also be Robust, in that it works with all end-user equipment, including native and third-party assistive technologies and user preferences. People have different needs, and they have preferences that support coping strategies to meet those needs. If those preferences aren't supported, then their needs aren't met, CX barriers are designed into the product, and you will not be EAA compliant.

For outcomes, you need to demonstrate that the approach had a positive impact on the whole audience. The thing about POUR is that P, O and U can only be accurately measured as outcomes with user data. Only I can tell you whether, as a dyslexic and ADHD person and customer, I was able to Perceive, Operate and Understand your product. A checklist can only tell you if you took a particular approach; it can't tell you if the approach was successful. Even Robust can have a user-outcome lens, particularly when it comes to segmenting the outcomes data to find out where the issues are.
I would suggest another lens on the act that can also help with evaluation: find out how satisfied your disabled and neurodivergent customers are, and measure that against how satisfied your non-disabled customers are. This is where real inclusion and EAA compliance lie: in inclusive customer satisfaction data, not in a checklist. The next time someone tries to tell you that WCAG auditing is the answer to EAA compliance, ask them to do a little more homework first. #EAA #UXDesign #UXResearch #Compliance #Accessibility #Inclusion
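One way to put that satisfaction-comparison idea into practice is to segment ratings by audience and compare the means. The sketch below is a minimal illustration only; the function name, segment labels, and rating scale are my own assumptions, not anything the EAA prescribes:

```python
def satisfaction_gap(ratings_by_segment):
    """Compare mean satisfaction across audience segments.

    ratings_by_segment maps a segment label (e.g. "disabled",
    "non-disabled") to a list of satisfaction ratings on a consistent
    scale. Returns each segment's mean and the spread between the
    best- and worst-served segments; a large spread suggests barriers
    are designed in for part of the audience.
    """
    means = {seg: sum(r) / len(r) for seg, r in ratings_by_segment.items()}
    spread = max(means.values()) - min(means.values())
    return means, spread

# Hypothetical survey data on a 1-5 satisfaction scale:
means, spread = satisfaction_gap({
    "non-disabled": [4, 5, 4, 4],
    "disabled or neurodivergent": [3, 2, 4, 3],
})
```

With the hypothetical numbers above, the disabled and neurodivergent segment averages more than a point lower, which is exactly the kind of outcome data a checklist audit would never surface.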
-
🖥️ Screen Reader Testing ≠ Keyboard-Only Testing Think you’re “screen reader testing” a website by hitting TAB? You might actually be testing something else entirely. As someone who is blind and uses a screen reader every day, I can tell you—that’s not how screen reader users navigate web pages. TAB navigation simulates a keyboard-only user—someone navigating with a keyboard due to a motor/dexterity disability. It jumps between focusable elements like links, buttons, and form fields… and skips over important web content like: 📄 Paragraph text 📄 Headings 📄 Images without links 📄 Lists Screen reader users, on the other hand, move through all content—whether it’s clickable or not. If you’re testing a website with a screen reader, try: 🎯 JAWS/NVDA (Windows): Down arrow 🎯 VoiceOver (Mac): VO + Right Arrow 🎯 VoiceOver (iOS) / TalkBack (Android): One-finger swipe left → right These commands let you move line-by-line (or element-by-element) and hear the entire page—just like an actual screen reader user would. It’s not enough to test with a screen reader—you have to test like a screen reader user. This is why it’s essential to test with native screen reader users and other people with disabilities to get a complete, authentic perspective on accessibility. #Accessibility #A11y #ScreenReader #InclusiveDesign #AssistiveTech #WebAccessibility #DigitalInclusion
-
Did you know that following WCAG to the letter still might not create a truly accessible experience? I see this all the time. A team runs their automated scans, fixes every violation, and checks "accessibility" off their project list. But then a real user with a disability tries to complete a task on their site and hits barriers that no compliance tool flagged. Here's what I've learned about the difference between compliance and true accessibility: Compliance asks: "Does this pass the test?" True accessibility asks: "Can everyone actually use this?" That might mean: • A form that's technically compliant but confusing to navigate with a screen reader. • A video player that meets contrast requirements but is impossible to control with voice commands. • A checkout process that passes automated scans but takes someone with cognitive disabilities 20 minutes to complete. Don't get me wrong - WCAG compliance matters. It's your foundation. But it's not your finish line. The gap between "technically accessible" and "actually usable" is where the real work happens. That's where you test with people who actually use assistive technology. That's where you ask hard questions about user experience, not just code compliance. True accessibility isn't about avoiding lawsuits (though that's important too). It's about respecting the civil right of equal access to information and services. It's about ensuring that anyone, regardless of ability, can interact with your digital spaces with dignity. Every barrier we remove opens a door. Every improvement you make could be the change that makes it possible for someone to apply for a job, access medical care, or get the help they need. WCAG is your starting point, not your finish line. True accessibility is about people, not just rules. #Accessibility #Inclusion