[laptop-accessibility] How can the XO be made accessible to blind
Mike Gorse
mgorse at mgorse.dhs.org
Thu Jan 3 19:16:53 EST 2008
Hi Hemant and others,
On Thu, 3 Jan 2008, Hemant Goyal wrote:
> One question that I have started wondering about now, is how "heavy" orca
> would be for the xo? The present speech server that we have written is in my
> assumption quite nice, and light weight with features specifically meant for
> the xo.
It's an interesting question and one that would need to be evaluated.
For a while I've been wanting to try to create an image with Orca on it to
evaluate performance and the amount of information that activities
currently expose via atk, but I haven't done this mostly due to lack of
time (much of my free time has been going towards adding atk support to
Abiword, and I want to get this code into trunk before starting any major
new projects). I'm curious if anyone has done this.
Orca has a bit over 2 MB of Python code. This includes code to listen for
various AT-SPI events and do what is needed (speak when the focus changes,
a dialog pops up, the user is navigating a menu, the user presses a key,
etc), talk to Braille displays via brltty, magnify text, provide an
interface to Gecko / Firefox 3, etc. It has several "speech factories" --
classes that interface with speech servers such as gnome-speech and
speech-dispatcher (I think it is going to become more common to use Orca
with speech-dispatcher than with gnome-speech). So you could write a
factory for your speech server. Another option, as I think I alluded to
in my last post, would be to adapt speech-dispatcher to use dbus and
understand your interface. Orca also has a "self-voicing" script that
self-voicing applications can use to make Orca get out of the way.
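To give a very rough idea of what a factory for your server might look
like, here is a sketch in Python. The bus name, object path, interface
and method names are all placeholders for whatever your speech server
actually exports over dbus, and speak()/stop() are illustrative rather
than Orca's real factory interface -- the existing gnome-speech and
speech-dispatcher factories in Orca's source tree are the thing to copy
from:

    # Illustrative only: the names below are placeholders, not Orca's
    # real speech factory interface or your server's real dbus API.
    import dbus

    class XOSpeechServer:
        """Talks to a dbus speech service on behalf of the screen reader."""

        def __init__(self):
            bus = dbus.SessionBus()
            proxy = bus.get_object('org.laptop.Speech', '/org/laptop/Speech')
            self._speech = dbus.Interface(proxy, 'org.laptop.Speech')

        def speak(self, text):
            self._speech.SayText(text)

        def stop(self):
            self._speech.Stop()

    def getSpeechServer():
        # The factory function a screen reader would call to obtain a
        # server instance for this backend.
        return XOSpeechServer()

The point is mainly that the factory is a thin adapter: Orca keeps all
of its event handling, and only the last hop to the synthesizer changes.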
In my opinion, ensuring that activities expose appropriate information via
atk would
be useful in the long term: aside from allowing Orca to provide
speech/Braille, it would allow activities that involve typing to become
usable by children with motor impairments (via gok), and packages such as
dogtail could be used to facilitate automated GUI testing. Gail will
generally expose information for ordinary widgets if it is loaded, but
custom code can be needed when things are done out of the ordinary or to
tell a screen reader that widgets are related to each other (labels for
edit fields in a dialog, for instance). For example, Abiword draws the document
in a GtkDrawingArea, so none of the information relating to the document
text gets exposed by gail.
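To make the "custom code" point concrete, here is a simplified PyGTK
(GTK 2) fragment; the widget names are made up, but it shows the two
usual ways of labelling a control so that a screen reader can announce
it:

    import gtk

    window = gtk.Window()
    window.connect("destroy", gtk.main_quit)
    box = gtk.HBox(spacing=6)
    window.add(box)

    label = gtk.Label("_Nickname:")
    label.set_use_underline(True)
    entry = gtk.Entry()
    box.pack_start(label, False)
    box.pack_start(entry, True)

    # Associating the label with the entry lets gail generate the
    # LABEL_FOR / LABELLED_BY relations, so a screen reader can say
    # "Nickname" when the entry gets focus.
    label.set_mnemonic_widget(entry)

    # For a widget with no visible label, an accessible name can be
    # set directly on its AtkObject instead.
    entry.get_accessible().set_name("Nickname")

    window.show_all()
    gtk.main()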
Dependencies are another issue, however. As stated on the wiki
Accessibility page, AT-SPI (the accessibility infrastructure that Orca and
other a11y tools use to talk to applications) currently depends on ORBit
and Bonobo. At the last GNOME accessibility summit, there was talk of
moving AT-SPI to dbus, but no firm decision has been made so far (see
http://live.gnome.org/Boston2007/AccessibilitySummit/Summary). Some
investigation has since been done; see
http://live.gnome.org/GAP/AtSpiDbusInvestigation
>> Indices should also be supported (espeak and Speech Dispatcher support
>
> Okay, thanks for the idea. I'll keep this in mind.
I didn't think to mention this in my last post, but you also need a way to
interrupt the speech. Screen readers typically do this, and the server
should be able to handle having speech started and stopped in rapid
succession. For instance, when arrowing around in a document, it is
typical for a screen reader to start reading the new line of text when the
down arrow is pressed. If the user presses the down arrow a second time, a
screen reader will typically stop reading the line it was reading and start
reading the new line. Typing a key usually interrupts speech as well, both
with Orca and with screen readers for other operating systems.
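On the server side that mostly means a stop (or flush) call that takes
effect immediately, and a speak call that implicitly cuts off whatever
is still playing. A bare-bones dbus-python sketch, again with made-up
bus and interface names and with the actual synthesizer calls left out:

    # Sketch only: bus name, object path, interface and method names
    # are placeholders; the synthesizer calls are left as comments.
    import dbus
    import dbus.service
    import dbus.mainloop.glib
    import gobject

    class SpeechService(dbus.service.Object):
        def __init__(self, bus):
            name = dbus.service.BusName('org.laptop.Speech', bus)
            dbus.service.Object.__init__(self, name, '/org/laptop/Speech')
            self._speaking = False

        @dbus.service.method('org.laptop.Speech', in_signature='s')
        def SayText(self, text):
            # Callers will send SayText and Stop in rapid succession,
            # so always interrupt before starting new speech.
            self.Stop()
            self._speaking = True
            # ... hand text to the synthesizer and start playback here ...

        @dbus.service.method('org.laptop.Speech')
        def Stop(self):
            if self._speaking:
                # ... tell the synthesizer to flush its audio queue ...
                self._speaking = False

    if __name__ == '__main__':
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        SpeechService(dbus.SessionBus())
        gobject.MainLoop().run()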
Best,
-Mike G-