running speech-dispatcher as non-root using setuid on XO and accompanying security issues

Hemant Goyal goyal.hemant at gmail.com
Sun Jul 20 09:11:55 EDT 2008


Hi James,

> The point I was trying to make was that the Sugar API itself could have
> removed the burden of setting these options from the developer.

Yes, that is indeed what is happening :). And there is no API call
involved whatsoever in the present design :). Perhaps once the code is
released you'll get an idea of what exactly is happening.


> An Activity could also, using the API, find out what the default values are
> without having to look at speechd.conf.

I think it would be useful for the Activity developer in certain
instances to get the default speech synthesis settings. I'll look into
this requirement and see if we can provide some methods through Sugar
for this purpose.
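
As a rough illustration of what such a helper could do, here is a sketch
that pulls the defaults straight out of speechd.conf. The option names
below match the stock configuration file, but get_speech_defaults and
the path are placeholders, not an existing Sugar or speech-dispatcher
API:

    # Hypothetical helper -- not part of any existing Sugar API.
    def get_speech_defaults(conf_path="/etc/speech-dispatcher/speechd.conf"):
        """Parse the default synthesis settings out of speechd.conf."""
        wanted = ("DefaultRate", "DefaultPitch", "DefaultVolume",
                  "DefaultVoiceType", "DefaultLanguage")
        defaults = {}
        for line in open(conf_path):
            parts = line.strip().split(None, 1)
            # Commented-out options start with "#" and are skipped,
            # so only explicitly set defaults are reported.
            if len(parts) == 2 and parts[0] in wanted:
                defaults[parts[0]] = parts[1].strip('"')
        return defaults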


> Or the Sugar Activity base class could automatically start up a speech
> client and give the Activity developer simple methods to use it in his
> application, or even automatically speech enable any Activity so that for
> instance the contents of the control with the focus could be spoken, or
> whatever it is that screen readers for the blind do.

Well, the main reason why Sugar does not/should not handle requests for
speech synthesis from all Activities is that we would need to rewrite a
lot of code to handle priorities, serialize speech synthesis requests
from multiple Activities, and maintain Activity-specific settings (these
were issues with the initial D-Bus API that we created, and the main
reason for shifting to speech-dispatcher). All of these features are
already available in speech-dispatcher, and hence communicating directly
with the speech-dispatcher server instead of through Sugar seems more
optimal.
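
For illustration, a minimal sketch of that direct route using the
python-speechd client library; the client name "my-activity" is just a
placeholder, and it assumes a speech-dispatcher server is already
running:

    import speechd

    # Each Activity opens its own connection; speech-dispatcher takes
    # care of priorities and per-client settings on the server side.
    client = speechd.SSIPClient("my-activity")

    # These settings apply only to this client's connection and do not
    # disturb other Activities talking to the same server.
    client.set_rate(20)
    client.set_priority(speechd.Priority.TEXT)

    client.speak("Hello from an Activity!")
    client.close()  # close the connection when the Activity exits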

Also, if Activities had to connect to a client that was started by
Sugar, then Sugar would have to relay the callbacks that
speech-dispatcher at present returns directly to the Activity.

That's why it's best that Activities connect to speech-dispatcher
themselves and do not communicate through Sugar.
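
To make the callback point concrete, python-speechd can deliver the
server's events straight into a handler owned by the Activity, with no
Sugar code in between; a sketch (placeholder client name again):

    import speechd

    def on_speech_event(event_type):
        # Invoked from the client library's listener thread, not via Sugar.
        if event_type == speechd.CallbackType.BEGIN:
            print("speech started")
        elif event_type == speechd.CallbackType.END:
            print("speech finished")

    client = speechd.SSIPClient("my-activity")
    client.speak("Testing callbacks",
                 callback=on_speech_event,
                 event_types=(speechd.CallbackType.BEGIN,
                              speechd.CallbackType.END))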

Automatically speech-enabling an Activity is something I will be
exploring; however, from initial analysis it's not that straightforward.
How Sugar would "hack" into Activities and pull out the data relevant
for speech synthesis is what needs to be analyzed for this purpose.
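
To make the idea concrete, one conceivable starting point (a rough
sketch only, using pygtk as Activities do at present; a real screen
reader would instead go through the ATK/AT-SPI accessibility layer) is
to hook the window's focus signal and speak whatever text the focused
widget exposes:

    import gtk
    import speechd

    client = speechd.SSIPClient("focus-reader")  # placeholder name

    def on_set_focus(window, widget):
        # Speak whatever text the newly focused widget exposes.
        if isinstance(widget, gtk.Entry):
            text = widget.get_text()
        elif isinstance(widget, gtk.Button):
            text = widget.get_label() or ""
        else:
            return
        if text:
            client.speak(text)

    window = gtk.Window()
    box = gtk.VBox()
    box.pack_start(gtk.Entry())
    box.pack_start(gtk.Button("Quit"))
    window.add(box)
    window.connect("set-focus", on_set_focus)
    window.connect("destroy", gtk.main_quit)
    window.show_all()
    gtk.main()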

> I understood that the Sugar developers wanted to provide text to speech
> support to all Activities, even those written before TTS was available.
> To do that you would have to change the Sugar base classes, etc. anyway.

Are you suggesting that, just as the Activity toolbars etc. are provided
to Sugar Activities out of the box, speech synthesis should be provided
out of the box when each Activity starts?

Cheers!
Hemant