[laptop-accessibility] How can the XO be made accessible to blind

Mike Gorse mgorse at mgorse.dhs.org
Mon Dec 31 11:38:18 EST 2007


Hi Hemant,

Thanks for the email.

On Sun, 30 Dec 2007, Hemant Goyal wrote:

> We have been working on a simple screen reader for the XO and have made some
> headway. We have ported and customized eSpeak for the XO. A text-to-speech
> server has been written and methods exposed through D-Bus. I have documented
> the work done so far at http://wiki.laptop.org/go/Screen_Reader. The D-Bus
> API may change in the future. However, we still need to do some extensive
> testing and refine the structure of the speech server.

Interesting.

Have you looked at Speech Dispatcher 
(http://www.freebsoft.org/projects/speechd)?  It is supported by Orca and 
seems similar to the speech server you described.  It doesn't currently 
interface with D-Bus, although it could presumably be made to do so.  If 
you ever have self-voicing activities running on a system that is also 
running Orca, then they should use the same speech server so that they are 
not both trying to talk at the same time.
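
The client side is small, too.  Untested, but a minimal sketch using the 
Python bindings (assuming python-speechd is installed and 
speech-dispatcher is running) would look something like:

    import speechd

    # Open a connection to speech-dispatcher; the argument is a client
    # name used for logging and per-client settings.
    client = speechd.SSIPClient('xo-activity')
    client.set_output_module('espeak')  # assumes the espeak module is set up
    client.speak('Hello from the XO.')
    client.close()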

Indices should also be supported (eSpeak and Speech Dispatcher both support 
them): if a user interrupts speech while reading a document, it is good to 
leave the cursor near the text that was being read when the speech was 
interrupted.
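
With the same Python bindings you can, if I remember the API correctly, 
embed SSML index marks and ask for a callback when each one is reached, so 
the activity knows roughly where speech stopped.  Treat the callback 
signature below as a sketch rather than gospel:

    import speechd

    client = speechd.SSIPClient('xo-reader')
    client.set_data_mode(speechd.DataMode.SSML)  # allow <mark/> elements

    last_mark = [None]

    def on_event(event_type, **kwargs):
        # For INDEX_MARK events the bindings also pass the mark's name,
        # which the activity can map back to a cursor position.
        if event_type == speechd.CallbackType.INDEX_MARK:
            last_mark[0] = kwargs.get('index_mark')

    client.speak('<speak>First sentence. <mark name="s2"/>'
                 ' Second sentence.</speak>',
                 callback=on_event,
                 event_types=(speechd.CallbackType.INDEX_MARK,
                              speechd.CallbackType.CANCEL))

espeak itself reports marks through its event callback, so the same idea 
should map onto your server as well.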

> We had initially planned to provide a simple highlight-and-speak option for
> the XO. We now think that we should scale up and structure the project to
> use eSpeak in a much more effective manner to provide accessibility to
> blind/low-vision students.
>
> I think it would be brilliant if activity developers could exploit the
> underlying speech server to write accessible activities. For example, an
> activity at present can connect to the speech service through D-Bus and send
> it strings of text to be spoken. We hope to prepare some guidelines for
> activity developers to write accessible activities that could use the speech
> server. What would be the best way to do this?
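
For reference, the activity side of what you describe could be as small as 
the sketch below.  The bus name, object path, and method name are guesses 
on my part; substitute whatever your wiki page actually documents:

    import dbus

    # Connect to the session bus and get a proxy for the speech service.
    # 'org.laptop.Speech' and 'SayText' are hypothetical names.
    bus = dbus.SessionBus()
    proxy = bus.get_object('org.laptop.Speech', '/org/laptop/Speech')
    speech = dbus.Interface(proxy, dbus_interface='org.laptop.Speech')

    # Send a string of text to be spoken.
    speech.SayText('Welcome to the activity.')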

Are you intending for this to complement a screen reader such as Orca, or 
are you intending for all activities to be self-voicing?  Or are you still 
in the process of deciding?  What an application should do depends on the 
API's intended purpose.  There are also situations where "auditory icons" 
get information across more quickly than spoken text (Emacspeak, for 
instance, uses a set of such icons to supplement the speech).
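
A self-voicing activity can mix the two approaches: play a short sound 
file for an event instead of speaking its name.  A sketch using the 
GStreamer Python bindings (assuming they are available on the build; the 
sound file path is made up):

    import gst  # GStreamer 0.10 Python bindings

    def play_auditory_icon(path):
        # Play a short cue, e.g. for "new message", instead of speaking
        # the event name.
        player = gst.element_factory_make('playbin', 'icon-player')
        player.set_property('uri', 'file://' + path)
        player.set_state(gst.STATE_PLAYING)

    play_auditory_icon('/usr/share/sounds/new-message.wav')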

I would also recommend looking at the Web Content Accessibility Guidelines 
if you haven't already (http://www.w3.org/TR/WCAG20/ for the latest 
draft).  They were written for the web, so not everything will apply to 
Sugar, but many of the general guidelines will, and they should give 
people a good idea of things to watch for.

Peace,
-Mike G-

