<div dir="ltr">I gave a quick talk on OLPC and offline wikireaders, and had feedback from a few groups @ Wikimania:<br><br>* Bassem, from the Ar-language Moulin version (see <a href="http://moulinwiki.org">moulinwiki.org</a>)<br>
* Tim Starling, who maintains <a href="http://static.wikipedia.org">static.wikipedia.org</a><br> * Manuel Schneider (user:80686) of Wikimedia Switzerland, working on a zenoreader project (see <a href="http://wiki.directmedia.de/ZenoReader">http://wiki.directmedia.de/ZenoReader</a> for background) <br>
<br>Bassem wants to help develop Arabic versions of other dumps, and to get
localizers together to ensure that RTL versions of offline readers are
excellent. <br>Tim offered to generate dumps of article subsets for offline use (and encourages us to think about using HTML, which may be optimizable to take up a similar amount of space to the XML dumps when compressed)<br>
Manuel is working on a toolchain to produce a zeno file (basically a list of article and file URLs, each stored with its own compression method, which can differ per article or file) and an open source reader for them, handling rendering and searching. He says these libraries have recently been adopted by Kiwix -- Emmanuel, perhaps you can comment... <br>
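To make the per-entry compression idea concrete, here is a minimal sketch (not the actual zeno format or its libraries, just an illustration of the structure Manuel described): each entry is keyed by URL and carries its own compression method, so already-compressed images can be stored as-is while HTML articles are compressed.<br>

```python
import zlib

# Hypothetical registry of compression methods: name -> (compress, decompress).
# A real zeno file would record the method as a code in each entry's header.
COMPRESSORS = {
    "none": (lambda b: b, lambda b: b),
    "zlib": (zlib.compress, zlib.decompress),
}

def add_entry(archive, url, data, method="zlib"):
    """Store data under url, compressed with the chosen per-entry method."""
    compress, _ = COMPRESSORS[method]
    archive[url] = (method, compress(data))

def get_entry(archive, url):
    """Look up an entry by URL and decompress it with its recorded method."""
    method, blob = archive[url]
    _, decompress = COMPRESSORS[method]
    return decompress(blob)

archive = {}
add_entry(archive, "A/Foo.html", b"<p>hello</p>", "zlib")
add_entry(archive, "I/logo.png", b"\x89PNG...", "none")  # images: store raw
```

The point of the per-entry method field is exactly the mixed case above: text compresses well, while PNG/JPEG payloads gain nothing from recompression.<br>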
<br>There was a good deal of interest from attendees in unifying the goals and roadmaps for the different reader projects. I think we can separate out questions of<br> * HTML vs. XML<br> * zeno file vs. zip archive<br> * all images vs. some images vs. no images<br>
* full articles vs. article headers only<br><br>and make each of these a configurable option in a full toolchain. <br><br>Copying Liam (and Andrew), who have been asking about doing a podcast on the subject.<br><br>SJ<br></div>