Making speech-dispatcher python API support asynchronous socket communication

Hemant Goyal goyal.hemant at gmail.com
Sat Jun 21 08:14:55 EDT 2008


Hi,

Thank you all for your ideas.

@Tomeu: I looked into gobject.idle_add, and luckily Tomas has an example
for me too :). I will be trying that out next.

@Tomas:

> I was facing a similar problem there and I solved it by passing all
> callbacks to the gidle thread.  All SD communication is performed from the
> GUI thread, but the callbacks don't perform any actions themselves -- they
> only schedule these actions to be performed within the idle thread.  It
> works surprisingly well.


Hmmm, would this mean that if I get a WORD_SPOKEN event notification, the
callback for that event is executed immediately? If so, the solution would
be pretty nice. Or could such event notifications become queued and only get
executed well after the word has been spoken? (Developers in the OLPC
community would be very interested in providing a karaoke-style highlight
effect during speech synthesis in their GTK activities.)
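
Just to check that I have understood the pattern, I imagine the callback
side looks roughly like this (a rough sketch only; the callback signature
and highlight_next_word are placeholders rather than the actual speechd
API):

    import gobject

    # Runs on Speech Dispatcher's listener thread whenever an event
    # notification (e.g. a word/index-mark event) arrives.  It must not
    # touch any GTK widget itself; it only schedules the real work.
    def sd_callback(event_type, *args):
        # gobject.idle_add() queues the call into the glib main loop,
        # so highlight_next_word() later runs in the GUI thread.
        gobject.idle_add(highlight_next_word, event_type)

    def highlight_next_word(event_type):
        # Safe to update the text buffer / apply the highlight here,
        # because this runs inside the GTK main loop.
        return False   # run once; do not reschedule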

@Hynek:


> This really seems like the former problem with socket functions
> behaving in a strange way in threads with pygtk unless the
> gtk.gdk.threads_init() is called. The current sample implementation
> you've sent in the attachment still partly uses threads (through speechd)
> but doesn't call this function, which now looks likely to cause the
> problems.


Precisely, it is, and the OLPC community would be interested in solving the
problem at the API layer instead (rather than having each activity call
gtk.gdk.threads_init() itself; a sketch of that fix follows the list below),
for two reasons:

   1. Simplifying the process for the activity developer
   2. Avoiding the use of threads
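
(For reference, my understanding of the thread-based fix you describe is
roughly the following; it is only a sketch, and not something we would want
every activity to carry:)

    import gtk

    gtk.gdk.threads_init()    # enable GDK/GTK thread locking before the main loop

    # ... build the activity UI and create the speechd client here;
    #     the client starts its own listener thread for callbacks ...

    gtk.gdk.threads_enter()
    gtk.main()
    gtk.gdk.threads_leave()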



> Now, what are the reasons why you want to avoid using threads?
>

I had a discussion on the OLPC IRC channel, and the reason given for
avoiding threads was that the laptop uses a suspend-and-resume feature for
power saving; keeping threads running inside a GTK activity would prevent
the laptop from suspending.



> This should be considered carefully as not to create problems later
> when for instance we want to switch from a text socket protocol
> to something else. Because if I understand correctly, the proposed
> solution would necessarily be based on exporting the socket file
> descriptors in use to the client program.


The present diff that I have provided does *not* expose the socket to the
client program. It simply uses the speechd thread for the handshake and
shifts to asynchronous communication after that... (That it does not work
yet is a different matter.)
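
To make the intended direction a little more concrete, the async part I
have in mind is roughly the following (a very rough sketch, not the actual
patch; conn and dispatch_ssip_events are placeholder names):

    import gobject

    # 'conn' is the socket the client library opened during the normal
    # (blocking) handshake; after that, the main loop watches it for us.
    def on_sd_data(source, condition):
        data = conn.recv(4096)        # whatever SSIP lines have arrived
        if not data:
            return False              # connection closed; drop the watch
        dispatch_ssip_events(data)    # placeholder: parse replies/events
        return True                   # keep the watch installed

    gobject.io_add_watch(conn.fileno(), gobject.IO_IN, on_sd_data)

This is also where your concern applies: the watch is tied to having a file
descriptor for the connection, so this part would need revisiting if the
protocol moved away from a plain socket.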

I will try using the gobject.idle_add approach next and see if I make any
headway in that direction now.

Thanks,
Hemant