#ScreenReader
Posts tagged #ScreenReader on Bluesky

That's an interesting question...
Anybody have a take on this one?
#a11y #ScreenReader

0 0 0 0
Original post on fed.interfree.ca

For a project I was looking for a #screenreader #accessible soundboard for Windows, where you assign hotkeys to sounds and they play without bringing another window into focus. I'm surprised nobody has made this, with all of the radio broadcasters in our community. Surely Google is just […]

1 0 1 0
Original post on dragonscave.space

I believe that the #accessibility of everyday tech for #screenReader users is on a slow but consistent decline. Operating systems, browsers, messaging apps, email clients, even command line tools.

These things are not being replaced with more #accessible alternatives, but nor does the […]

0 12 2 0

Tbh, it pains me how many people don’t give a shit about all those who use a #ScreenReader; even if "that dark place" is doing better than #BlueSky in #Accessibility 🫣

Is it really all about nothing but jizzing out your pics as fast as you can? Does #Inclusion matter to you at all?

2 0 0 0
Image of the BFW NVDA Project team with text "Spendenübergabe" (Donation handover) above and "BFW Berufsförderungswerk Würzburg gGmbH" (BFW Vocational Training Center Würzburg) below

Our latest In-Process blog post is out and it's a big one!

- NVDA 2025.3.3
- Restarting after updating
- Where do you get your info?
- #NVDA20
- Thank you Germany!
- NVDA 2026.1 Beta 4
- One week with NVDA

All that and more right here: www.nvaccess.org/post/in-proc...

#NVDA #NVDAsr #ScreenReader

1 0 0 0

I want to improve my accessibility beyond WCAG 2.2.

Screen reader users, any preferences when it comes to alt text?

Shorter?
Detailed?
Depends?

I want to hear from the community.

#A11y #ScreenReader #AltText #Blind #LowVision #InclusiveDesign #AccessibilityMatters #BlindTech #accessibility

0 0 0 0

#Himmelskäfer
With a #ScreenReader it is not possible, in notifications, to open the profile of the person who wrote the reply.
The sky is not barrier-free.

So participation is only partially possible.

9 2 0 0
Common Accessibility Problems When Using a Screen Reader - The Accessibility Guy Learn common screen reader accessibility issues and how to fix them using proper headings, links, alt text, tables, and lists.

If you are not testing with a screen reader, you are missing real user barriers.

Shawn Jordison highlights common accessibility failures and why they matter. Are you testing beyond automation?

buff.ly/bnrDk9R

#A11y #DigitalAccessibility #UX #WCAG #ScreenReader #AT #AssistiveTechnology

1 1 0 0

Some things are not so easy with a #ScreenReader. Some things don't work at all. The application is not accessible. There is still quite a bit that needs to change. But the will just isn't there.

1 2 0 0
NVDA logo and text in white on purple background with grey sunburst designs around

NV Access is pleased to announce NVDA 2025.3.3 is now available for download.

This is a patch release to fix a security issue.

Full details & download: www.nvaccess.org/post/nvda-20...

(For those on the 2026.1 Beta, a beta with this patch will be out soon).

#NVDA #NVDAsr #ScreenReader

2 0 0 0
Video thumbnail

Someone here has hit the jackpot. So many emojis, and all of them in the username… And we screen reader users have to have all of that read aloud before we get to the post text. #OhrenMüll #ScreenReader #ScreenReaderLesung

93 51 5 5

Alt text?!
Who, what, how...
Why, wherefore, what for: those who don't ask stay ignorant ‼️ We answer
Questions on the topics #blindleben #Teilhabe #ScreenReader #Inklusion

What interests you? What do you want to know?
Your questions also help others. #Solidarität #GemeinsamSindWirStark

14 7 1 0
You can see a yellow square, 800x800px.

At the bottom, in pink, it says: punx.social.
The rest of the text is in black.
At the top, it says: punkstondon.de.
On the left, it says: punk.photos.
On the right, it says punkstodon.de.

In the middle, in bold and huge letters, it says:
PLEASE USE ALT TEXT.
The word “please” is crossed out.

P̶L̶E̶A̶S̶E̶ USE ALT TEXT!

It's been a week since our Fedi-Punk workshop on di.day.
Since then, more users have joined both “our” instances and others.
Once again, welcome!

When you post images, p̶l̶e̶a̶s̶e̶ pay attention to alt text.

The […]

[Original post on punkstodon.de]

1 0 0 0
Original post on fed.interfree.ca

A bit of a long shot: Do I know any #blind #NVDA #screenreader users who use #eloquence, and also have unicode or accented characters in your username? If so, could you try this addon: share.interfree.ca/app/open/BgrEksRM198-cPJtG4zRyz4-FXRWw6MoQXJ-htrEFTubCy1?view=1

And let me know if it works […]

0 1 0 0
Post image

NVDA 2026.1 Beta 1 is now available for testing! Info & Download: www.nvaccess.org/post/nvda-20...

There are so many updates they won't all fit here, so please read the post at the link!

#NVDA #NVAccess #ScreenReader #Beta #FOSS

0 1 0 0
GitHub - louderpages/Apple-RHVoice: Run-time RHVoice Apps for iOS/iPadOS/macOS Run-time RHVoice Apps for iOS/iPadOS/macOS. Contribute to louderpages/Apple-RHVoice development by creating an account on GitHub.

The iOS and macOS versions of RHVoice are now open source! github.com/louderpages/...
#screenReader #openSource

2 0 0 0
GitHub - louderpages/Apple-RHVoice: Run-time RHVoice Apps for iOS/iPadOS/macOS Run-time RHVoice Apps for iOS/iPadOS/macOS. Contribute to louderpages/Apple-RHVoice development by creating an account on GitHub.

The iOS and macOS versions of RHVoice are now open source! https://github.com/louderpages/Apple-RHVoice
#screenReader #openSource

1 0 0 0
How an accessibility designer adds keyboard shortcuts to a web app Keyboard shortcuts occupy a strange area for web design.

#Design #Explorations
Keyboard shortcuts for web apps · What it takes to design accessible keyboard shortcuts ilo.im/16a6y9 by Eric Bailey

_____
#Shortcuts #Keyboard #Accessibility #ScreenReader #Browsers #UiDesign #WebDesign #Development #WebDev #Frontend

2 0 0 0
Original post on punkstodon.de

Today's ZDF mitreden is about #screenreader. And since there was a bigger discussion here yesterday about exactly that, I'm posting the link here.
Even though I can imagine that it's very exhausting to have it read aloud […]

0 0 0 0

Today's #ZDFMitreden survey from @ZDF is aimed at people who use a #Screenreader:
www.mitreden.zdf.de/c/al/5vdz4gYSiiqD5sRlgzL...

#Barrierefreiheit #Sehbehinderung #ÖRR

0 1 0 0
If you're not a screen reader user yourself, you might be surprised to learn that the text-to-speech technology used by most blind people hasn't changed in the last 30 years. While text to speech has taken the sighted world by storm, in everything from personal assistants to GPS to telephone systems, the voices used by blind folks have remained mostly static. This is largely intentional. The needs of a blind text-to-speech user are vastly different from those of a sighted user. While sighted users prefer voices that are natural, conversational, and as human-like as possible, blind users tend to prefer voices that are fast, clear, predictable, and efficient. This results in a preference among blind users for voices that sound somewhat robotic, but can be understood at high rates of speed, often upwards of 800 to 900 words per minute. For comparison, the speaking rate of an average person hovers around 200 to 250 words per minute.

Unfortunately, this difference in needs has resulted in blind people getting left out of the explosion of text-to-speech advancement, and has caused many problems. First, the voice preferred by the majority of Western English-speaking blind users, called Eloquence, was last updated in 2003. While it is so overwhelmingly popular that even Apple was eventually pressured to add the voice to iPhone, Mac, Apple TV, and Apple Watch, even they were forced to use an emulation layer. As Eloquence is a 32-bit voice last compiled in 2003, it cannot run in modern software without some sort of emulation or bridge. If the source code to Eloquence still exists, even large companies like Apple haven't managed to find or compile it. As the NVDA screen reader moves from being a 32-bit application to a 64-bit one, keeping Eloquence running with it has been a challenge that I and many other community members have spent a lot of time and effort solving.
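To put those speaking rates in perspective, here is a back-of-the-envelope sketch. The 250 and 850 words-per-minute figures are illustrative midpoints of the ranges quoted above, and the 5,000-word article is a made-up example:

```python
# Back-of-the-envelope comparison of listening time at a typical
# conversational speech rate vs. a fast screen reader rate.
# Rates are illustrative midpoints of the ranges quoted above.

def listening_minutes(word_count: int, words_per_minute: int) -> float:
    """Minutes needed to hear word_count words at a given speech rate."""
    return word_count / words_per_minute

ARTICLE_WORDS = 5000  # a hypothetical long article

typical = listening_minutes(ARTICLE_WORDS, 250)  # 20 minutes
fast = listening_minutes(ARTICLE_WORDS, 850)     # under 6 minutes

print(f"At 250 wpm: {typical:.1f} min; at 850 wpm: {fast:.1f} min")
```

The roughly threefold difference is why fast, predictable voices matter so much to power users: over a day of reading, the rate of the voice dominates everything else.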
The Eloquence libraries also have many known security issues, and anyone using them today is forced to understand and program around those issues, as Eloquence itself can never be updated or fixed. These stopgap solutions are entirely untenable, and are likely to take us only so far. A better solution is urgently needed.

The second problem this has caused is for those who speak languages other than English. As most modern text-to-speech voices are created by and for sighted users, blind users find that the voices available in less popular languages are inefficient, overly conversational, slow, and otherwise unsatisfactory. While espeak-ng is an open-source text-to-speech system that attempts to support hundreds of languages while meeting the needs of blind users, it brings a different set of problems to the table. First, many of the languages it supports were added based on pronunciation rules taken from Wikipedia articles, without involving speakers of the language. Second, espeak-ng is based directly on Speak, a text-to-speech system written by Jonathan Duddington in 1995 for RISC OS, meaning that espeak-ng users today continue to live with many of the design decisions made back in 1995 for an operating system that has long since faded away. Third, looking at the espeak-ng repository, it seems to have only one or two active maintainers. While this is obviously better than the zero active maintainers of Eloquence, it could still become a problem in the future.

These are the reasons I'm always interested in advancements in text to speech, and am actively keeping my ears open for something that takes advantage of modern technology while continuing to suit the needs of screen reader users like myself. Over the holiday break, I decided to take a look at two modern AI-based text-to-speech systems, and see if they could be added to NVDA.
I chose these two models because they advertised themselves as fast, responsive, and able to run without a GPU. The first was Supertonic, and the second was Kitten TTS. As both models require 64-bit Python, I wrote the addons for the 64-bit alpha of NVDA. However, other than making development easier, this had little effect on the results. Unfortunately, doing this work uncovered a number of issues that I believe are common to all modern AI-based text-to-speech systems, and that make them unsuitable for use in screen readers.

The first issue is dependency bloat. To bundle these systems as NVDA addons, developers are required to include a vast multitude of large and complex Python packages: around 103 for Kitten TTS, and just over 30 for Supertonic. As the standard building and packaging methods for NVDA addons do not support specifying and building requirements, these dependencies need to be manually copied over and included in any GitHub repositories, and they cannot be automatically updated. Loading all of these dependencies directly into NVDA also causes the screen reader to load more slowly and use more system resources, and opens NVDA users up to any security issue in any of these libraries. As a screen reader needs access to the entire system, this is far from ideal.

The second issue is accuracy. These modern systems are developed to sound human, natural, and conversational. Unfortunately, this seems to come at the expense of accuracy. In my testing, both models had a tendency to skip words, read numbers incorrectly, chop off short utterances, and ignore prosody hints from text punctuation. Kitten TTS is slightly better here, as it uses a deterministic phonemizer (the same one used by espeak-ng, actually) to determine the correct way to pronounce words, leaving only the generation of the speech itself up to AI. Nevertheless, Kitten TTS is still far from perfectly accurate.
When it comes to use in a screen reader, skipping words or reading numbers incorrectly is unacceptable.

The third issue is speed. Supertonic has the edge here, but even it is far too slow. Unlike older text-to-speech systems, Supertonic and Kitten TTS cannot begin generating speech until they have an entire chunk of text. Supertonic is slightly faster, as it can stream the resulting audio as it becomes available, whereas Kitten TTS cannot start speaking until all of the audio for the chunk is fully generated. But for use in a screen reader, a text-to-speech system needs to begin generating speech as quickly as possible, rather than waiting for an entire phrase or sentence. Screen reader users jump quickly through text and frequently interrupt the speech, and thus require the text-to-speech system to be able to quickly discard and restart speech.

The fourth and final issue is control. Older text-to-speech systems make it easy to change the pitch, speed, volume, breathiness, roughness, head size, and other parameters of the voice. This allows screen reader users to customize the voice to our exact needs, as well as offering the ability to change the characteristics of the voice in real time based on the formatting or other attributes of the text. AI text-to-speech models, being trained on data from a particular set of speakers, cannot offer this customization. Instead, they inherit the speaking speed, pitch, volume, and other characteristics that were present in the training data. Kitten TTS and Supertonic both offer basic speed control, but it is highly variable from voice to voice and utterance to utterance. This leads to a loss of functionality that many blind users depend on.

If you'd like to experience these issues for yourself, feel free to follow the links above to my GitHub repositories. They offer ready-to-install addons that can be used with the 64-bit NVDA alphas.
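The speed argument above can be captured in a toy latency model. This is a sketch of the timing reasoning, not real TTS code; the chunk count and per-chunk synthesis cost are hypothetical numbers chosen for illustration:

```python
# Toy model of time-to-first-audio: a batch engine must synthesize every
# chunk of an utterance before playback can begin, while a streaming
# engine can start playback as soon as the first chunk is ready.
# gen_time is a hypothetical per-chunk synthesis cost in seconds.

def first_audio_batch(n_chunks: int, gen_time: float) -> float:
    """Batch synthesis: nothing plays until every chunk is generated."""
    return n_chunks * gen_time

def first_audio_streaming(n_chunks: int, gen_time: float) -> float:
    """Streaming synthesis: playback begins after the first chunk."""
    return gen_time

# A ten-chunk sentence at a hypothetical 50 ms per chunk:
print(first_audio_batch(10, 0.05))      # 0.5 s of silence before speech
print(first_audio_streaming(10, 0.05))  # 0.05 s before speech starts
```

The gap grows with utterance length in the batch case but stays constant in the streaming case, and since a screen reader user triggers a fresh utterance on nearly every keystroke, it is the time to first audio, not total synthesis time, that they feel.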
I'm picking on Kitten TTS and Supertonic not because they're particularly bad on the above problems, but because they're the state of the art in AI text to speech right now when it comes to speed and size. Other models, like Kokoro, exhibit all of the same issues, but more so.

So what's the way forward for blind screen reader users? Sadly, I don't know. Modern text-to-speech research has little to no overlap with our requirements. Using Eloquence, the system that many blind people find best, is becoming increasingly untenable. espeak-ng uses an odd architecture originally designed for computers in 1995, and has few maintainers. Blastbay Studios has done some interesting work to create a text-to-speech voice using modern design and technology that meets the requirements of blind users, but it's a closed-source product with a single maintainer that also suffers from a lack of pronunciation accuracy.

In an ideal world, someone would re-implement Eloquence as a set of open-source libraries. However, doing so would require expertise in linguistics, digital signal processing, and audiology, as well as excellent programming abilities. My suspicion is that modernizing the text-to-speech stack preferred by blind power users is an effort that would require several million dollars of funding at minimum. Instead, we'll probably wind up having to settle for text-to-speech voices that are "good enough", while being nowhere near as fast and efficient as what we have currently. Personally, I intend to keep Eloquence limping along for as long as I can, until the layers of required emulation and bridges make real-time use impossible. Perhaps at that point AI will be good enough that it can be prompted to create a text-to-speech system that's up to our standards. Or, more hopefully, articles like this one may bring attention to the issues, and bring our community together to recognize the problems and find solutions.

"The State of Modern AI Text To Speech Systems for Screen Reader Users" by Samuel Proulx. Interesting read, not only for blind folks. Suprising outcome. stuff.interfree.ca/2026/01/05/ai-tts-for-sc... #blind #tts #screenreader

0 0 0 0
NV Access | In-Process 23rd January 2026

Our In-Process blog is back for 2026:

Switching from Jaws to NVDA
Running NVDA on a Mac
Creating a new NVDA Shortcut
Achievements in 2025, and we want to hear yours!
Using Clip with the command line!

All available now at: www.nvaccess.org/post/in-proc...

#NVDA #NVDAsr #ScreenReader #Accessibility

2 0 0 0
5 accessibility checks to run on every component - zeroheight Hidde de Vries explains how to test components for accessibility, from keyboard support to screen readers and zoom.

#Development #Techniques
Design system accessibility checks · Five key tests to run on every component ilo.im/169mxi by Hidde de Vries

_____
#Accessibility #WCAG #ScreenReader #Keyboard #Zoom #DesignSystem #DesignTokens #Development #WebDev #Frontend

2 0 0 0