[sugar] New activity: Speak
Joshua Minor
j at lux.vu
Thu Jan 10 15:05:54 EST 2008
On Jan 10, 2008, at 11:23 AM, Edward Cherlin wrote:
> On Jan 10, 2008 1:27 AM, Joshua Minor <j at lux.vu> wrote:
>> Hi everyone,
>> I made a new activity called Speak....
>>
>> http://wiki.laptop.org/go/Speak
>
> This is wonderful, because it will allow children to experiment with
> language, not just type in normal text.
:)
>
> In espeak, phoneme sets and orthographies can be added for any
> language. Do you support this?
Speak calls the espeak command-line tool to query the available
languages as well as to generate the audio, so any new or changed
voices in espeak will show up in Speak automatically. It does filter
out the Mbrola voices, since they don't actually produce any sound.
I plan to experiment with calling espeak via its API, but I will make
sure to avoid any limitation on the set of languages.
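
For anyone curious, the voice query boils down to parsing the table
that "espeak --voices" prints. Here is a rough sketch of that idea
(not Speak's actual code; the column positions and the "mb" prefix I
use to spot Mbrola voices are assumptions about espeak's output):

import subprocess

def list_voices():
    # `espeak --voices` prints a table with columns roughly like:
    #   Pty Language Age/Gender VoiceName File ...
    output = subprocess.Popen(['espeak', '--voices'],
                              stdout=subprocess.PIPE).communicate()[0]
    voices = []
    for line in output.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) < 5:
            continue
        language, name, filename = fields[1], fields[3], fields[4]
        # assumption: Mbrola voices show up with a file name starting "mb"
        if filename.startswith('mb'):
            continue
        voices.append((language, name))
    return voices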
> Can this or the Screen Reader project be adapted to reading content,
> such as the children's picturebooks provided in the Library? (We would
> presumably need a text file to go with each document.)
>
> I think that it would be a great boost for both child and adult
> literacy if little children could sit on their parents' or
> grandparents' laps and have the XO read them both a story.
XO is the new Teddy Ruxpin :)
I was thinking of adding a toolbar tab to allow for some sort of
game, story, or lesson modes. It would be cool if someone could write
a plugin/extension for a guessing game, story reader, spelling game
(a la TalknType) or something like that. I have also considered
wrapping Speak into a reusable component so other activities could
add a talking face easily. I'm not sure of the best way to do that
yet.
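
Something like this hypothetical pygtk widget is the shape I have in
mind; the names and methods are made up, it's just a sketch of the
idea:

import gtk

class Face(gtk.DrawingArea):
    """Reusable talking-face widget another activity could embed."""

    def __init__(self):
        gtk.DrawingArea.__init__(self)
        self.connect('expose-event', self._expose)
        self._mouth_open = False

    def say(self, text):
        # hand the text to espeak and animate the mouth while the
        # audio plays (left out here)
        pass

    def _expose(self, widget, event):
        # draw the eyes and mouth, using self._mouth_open
        return True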
> In that same vein, would anybody be interested in creating a karaoke
> activity? Same-language captioning of Bollywood musicals is claimed to
> be the most effective literacy measure in India.
That would be awesome!
>> Also, if anyone has experience or ideas on how to get access to
>> espeak's per-phoneme timing data from python, please let me know.
>>
>> -josh
>
> Do you want to do that while running, or would a precomputed table
> meet your needs?
I would like to get callbacks for each phoneme while the voice is
playing, so that I can shape the mouth correctly for each one. If
done well, this could be a nice visual cue to help in understanding
the voice.
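
One route I might try is binding to espeak's C API (speak_lib.h) with
ctypes, since espeak_SetSynthCallback delivers events during
synthesis, including phoneme events if they are enabled at
initialization. This is only an untested outline of that idea; the
constants and struct layout are from my reading of speak_lib.h and
would need to be checked against the header that ships with our
espeak:

import ctypes

espeak = ctypes.cdll.LoadLibrary('libespeak.so.1')

AUDIO_OUTPUT_PLAYBACK = 0            # espeak plays the audio itself
espeakINITIALIZE_PHONEME_EVENTS = 1  # request phoneme events
espeakCHARS_AUTO = 0
POS_CHARACTER = 1
EVENT_LIST_TERMINATED = 0
EVENT_PHONEME = 7

class EventId(ctypes.Union):
    _fields_ = [('number', ctypes.c_int),
                ('name', ctypes.c_char_p)]

class Event(ctypes.Structure):
    # mirrors espeak_EVENT; verify against speak_lib.h before relying on it
    _fields_ = [('type', ctypes.c_int),
                ('unique_identifier', ctypes.c_uint),
                ('text_position', ctypes.c_int),
                ('length', ctypes.c_int),
                ('audio_position', ctypes.c_int),   # ms into the audio
                ('sample', ctypes.c_int),
                ('user_data', ctypes.c_void_p),
                ('id', EventId)]

# int SynthCallback(short *wav, int numsamples, espeak_EVENT *events)
SynthCallback = ctypes.CFUNCTYPE(ctypes.c_int,
                                 ctypes.POINTER(ctypes.c_short),
                                 ctypes.c_int,
                                 ctypes.POINTER(Event))

def on_synth(wav, numsamples, events):
    i = 0
    while events[i].type != EVENT_LIST_TERMINATED:
        if events[i].type == EVENT_PHONEME:
            # here Speak could schedule a mouth shape at
            # events[i].audio_position; events[i].id holds the phoneme
            pass
        i += 1
    return 0  # 0 = keep going, 1 = abort synthesis

callback = SynthCallback(on_synth)   # keep a reference alive

espeak.espeak_Initialize(AUDIO_OUTPUT_PLAYBACK, 0, None,
                         espeakINITIALIZE_PHONEME_EVENTS)
espeak.espeak_SetSynthCallback(callback)

text = 'Hello from Speak'
espeak.espeak_Synth(text, len(text) + 1, 0, POS_CHARACTER, 0,
                    espeakCHARS_AUTO, None, None)
espeak.espeak_Synchronize()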
I would also have to rework how espeak is wired up to gstreamer.
Right now I have espeak write out a wav file and then I play that
back via the gst module. I wasn't able to get them piped together in
a reliable way. Specifically, when I run espeak --stdout and then
attach that to a gst pipeline that starts with an fdsrc, it only
works once; I was not able to restart the pipeline, or build a new
one, to speak another sentence.
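
For reference, the wav-file approach is roughly the following (a
simplified sketch, not the actual activity code, assuming the pygst
0.10 bindings and a writable temp path):

import subprocess
import gst

def say(text, voice='default', wav_path='/tmp/speak.wav'):
    # 1. have the espeak command line render the sentence to a wav file
    subprocess.call(['espeak', '-v', voice, '-w', wav_path, text])

    # 2. play it back with a fresh playbin, torn down after each sentence
    player = gst.element_factory_make('playbin', 'speak-player')
    player.set_property('uri', 'file://' + wav_path)
    player.set_state(gst.STATE_PLAYING)

    # block until playback finishes or fails, then release the pipeline
    # (blocking like this would freeze a GTK UI; it keeps the sketch short)
    bus = player.get_bus()
    bus.poll(gst.MESSAGE_EOS | gst.MESSAGE_ERROR, gst.CLOCK_TIME_NONE)
    player.set_state(gst.STATE_NULL)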
-josh