The reason we see icons flashing here and there in the mesh view, i.e. the "xmas tree effect"
giannisgalanis at gmail.com
Fri Dec 14 10:38:58 EST 2007
The test showed that the effect is not the result of a network failure.
It occurs naturally, every time a new host arrives while, at the same time,
another host appears dead.
"Dead" can also mean a host that was simply disconnected from the channel by the user.
The best and simplest way to recreate the effect in any environment (noisy or
not) is to:
1. Connect 3 XOs successfully to the same mesh.
2. Move XO1 and XO2 to another channel, and verify that they show as
"failed" when running "avahi-browse" on XO3.
3. Reconnect XO1 and XO2 to the initial channel at the same time.
4. While the XOs are trying to connect (~30 sec), check that they still show
as "failed" when running "avahi-browse" on XO3.
5. Observe the screen on XO3: the icons of XO1 and XO2 will jump back in
almost at the same time.
To the best of my understanding, the effect:
- is not related to a noisy environment,
- does not require a large number of laptops,
- can be recreated 100% of the times you try the above.
I believe that if the emulator you operate uses the proper timeouts, you
will see the effect.
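The failure mode I describe can be sketched as a toy simulation (all names are hypothetical; this is not the real Avahi code, just the behaviour as observed):

```python
# Toy model of the reported bug (hypothetical, NOT the real Avahi
# implementation): the service cache holds live and "dead" entries,
# and a new-host announcement wipes the whole cache instead of just
# adding the new entry -- so every dead icon vanishes at once.

cache = {"XO1": "dead", "XO2": "dead", "XO3": "alive"}

def on_new_host(cache, name):
    """Buggy handler: clears the cache on every new announcement."""
    cache.clear()              # <- the suspected bug
    cache[name] = "alive"

visible_before = sorted(cache)
on_new_host(cache, "XO4")
visible_after = sorted(cache)

print(visible_before)  # ['XO1', 'XO2', 'XO3']
print(visible_after)   # ['XO4']  (the dead entries disappeared together)
```

With correct behaviour, XO4 would simply be added and XO1/XO2 would stay listed until their entries expire individually.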
On Dec 14, 2007 4:31 AM, Sjoerd Simons <sjoerd at luon.net> wrote:
> On Thu, Dec 13, 2007 at 11:18:01PM -0500, Giannis Galanis wrote:
> > THE TEST:
> > 6 XOs connected to channel 11, with forwarding tables bound only to
> > themselves, so no other element in the mesh could interfere.
> > The cache list was scanned continuously on all XOs using a script.
> > If all XOs remained idle, they all showed reliably in each other's mesh
> > view.
> > Every 5-10 mins an XO showed as dead in some other XOs' scans, but this
> > shortly recovered, and there was no visual effect in the mesh view.
> Could you provide a packet trace of one of these XOs in this test? Install
> tcpdump and run ``tcpdump -i msh0 -n -s 1500 -w <some filename>''.
> I'm surprised that with only 6 laptops you hit this case so often.
> Of course the
> RF environment at OLPC is quite crowded, which could trigger this.
> Can you also run: http://people.collabora.co.uk/~sjoerd/mc-test.py
> Run it as ``python mc-test.py server'' on one machine and just ``python
> mc-test.py'' on the others. This should give you an indication of the
> amount of
> multicast packet loss, which can help me to recreate a comparable setting
> here by using netem.
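[Ed.: I don't have the internals of mc-test.py at hand, but a loss figure of this kind is typically derived from the sequence numbers of the packets that actually arrived. A minimal sketch under that assumption, not the actual script:]

```python
def loss_fraction(received_seqs):
    """Estimate packet loss from the sequence numbers of received
    packets, assuming the sender numbered them consecutively."""
    if not received_seqs:
        return 1.0
    expected = max(received_seqs) - min(received_seqs) + 1
    return 1.0 - len(set(received_seqs)) / expected

# Packets 3 and 7 out of 0..9 were lost:
print(round(loss_fraction([0, 1, 2, 4, 5, 6, 8, 9]), 3))  # 0.2
```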
> > If you switched an XO manually to another channel, again it showed
> > "dead" in all the others. If you reconnected it to channel 11, there was
> > again no effect on the mesh view.
> > If you never reconnected, in about 10-15 minutes the entry was deleted,
> > and the corresponding XO icon disappeared from the view.
> > Therefore, it is common and expected for XOs to show as "dead" in the
> > cache for some time.
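[Ed.: The expiry behaviour described above, an entry lingering as "dead" and then being deleted after roughly 10-15 minutes, can be sketched as a minimal TTL cache. The 900-second figure is taken from the observation above; the class and names are hypothetical, not Avahi's actual record handling:]

```python
EXPIRY_SECS = 900  # ~15 min, per the observation above (assumption)

class MeshCache:
    """Minimal sketch: dead entries linger, then are deleted once
    they have been dead for at least EXPIRY_SECS."""
    def __init__(self):
        self.entries = set()
        self.dead_since = {}   # name -> timestamp it was marked dead

    def add(self, name):
        self.entries.add(name)
        self.dead_since.pop(name, None)

    def mark_dead(self, name, now):
        if name in self.entries:
            self.dead_since[name] = now

    def expire(self, now):
        for name, t in list(self.dead_since.items()):
            if now - t >= EXPIRY_SECS:
                self.entries.discard(name)
                del self.dead_since[name]

cache = MeshCache()
cache.add("XO1")
cache.mark_dead("XO1", now=0)
cache.expire(now=600)          # 10 min: still listed (as dead)
print("XO1" in cache.entries)  # True
cache.expire(now=900)          # 15 min: entry deleted
print("XO1" in cache.entries)  # False
```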
> > THE BUG:
> > IF a new XO appears (a message is received through Avahi),
> > WHILE there are 1 or more XOs in the cache that are reported as "dead"
> > THEN Avahi "crashes" temporarily and the cache CLEARS.
> > At this point ALL XOs that are listed as dead instantly disappear from
> > the mesh view.
> Interesting. Could you file a Trac bug with this info, with me cc'd?
> Everything should be made as simple as possible, but not simpler.
> -- Albert Einstein
> Devel mailing list
> Devel at lists.laptop.org