The reason we see icons flashing here and there in the mesh view, i.e. the "xmas tree effect"

Giannis Galanis giannisgalanis at gmail.com
Fri Dec 14 10:38:58 EST 2007


The test showed that the effect is not the result of a network failure.
It occurs naturally, every time a new host arrives while, at the same time,
another host appears dead.
"Dead" can also mean a host that simply disconnected from the channel by user
intervention.

The best and simplest way to recreate the effect in any environment (noisy
or not) is to:
1. Connect 3 XOs successfully to the same mesh.
2. Move XO1 and XO2 to another channel, and verify that they show as
"failed" when running "avahi-browse" on XO3.
3. Reconnect XO1 and XO2 to the initial channel at the same time.
4. While the XOs are trying to connect (~30 sec), check that they still
show as "failed" when running "avahi-browse" on XO3.
5. Observe the screen of XO3: the icons of XO1 and XO2 will jump at almost
the same time.
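
To make step 4 less tedious, the cache on XO3 can be watched with a small
polling script. This is only a rough sketch: it assumes avahi-utils is
installed (for ``avahi-browse -p'') and simply diffs the parseable output.

#!/usr/bin/env python
# Rough sketch: poll avahi-browse on XO3 and timestamp cache changes.
import subprocess
import time

def snapshot():
    # -a: all service types, -t: exit after dumping, -p: parseable output
    out = subprocess.check_output(
        ["avahi-browse", "-a", "-t", "-p"]).decode("utf-8", "replace")
    # Field 4 of each "+" (new/cached entry) line is the service name
    return set(line.split(";")[3] for line in out.splitlines()
               if line.startswith("+"))

seen = snapshot()
while True:
    time.sleep(1)
    now = snapshot()
    for name in now - seen:
        print("%s appeared: %s" % (time.strftime("%H:%M:%S"), name))
    for name in seen - now:
        print("%s vanished: %s" % (time.strftime("%H:%M:%S"), name))
    seen = now

In step 5, the entries for XO1 and XO2 should print as "vanished" within
the same second.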

To the best of my understanding:
It is not related to a noisy environment.
It does not require a large number of laptops.
It can be recreated 100% of the time using the steps above.
I believe that if the emulator you operate uses the proper timeouts, you
will see the effect.
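
A more direct way to catch the cache clear, instead of polling, would be
to listen for Avahi's ItemNew/ItemRemove signals over D-Bus. The following
is an untested sketch: it assumes python-dbus and python-avahi are
installed, and "_presence._tcp" is only my guess at the service type to
browse for.

#!/usr/bin/env python
# Untested sketch: watch Avahi's browser signals over D-Bus. If the cache
# really clears when a new host shows up, a burst of REMOVE lines should
# print immediately after a NEW line.
import time
import avahi
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def stamp(event, name):
    print("%s %-6s %s" % (time.strftime("%H:%M:%S"), event, name))

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
server = dbus.Interface(
    bus.get_object(avahi.DBUS_NAME, avahi.DBUS_PATH_SERVER),
    avahi.DBUS_INTERFACE_SERVER)
# "_presence._tcp" is an assumption; use whatever type avahi-browse shows.
path = server.ServiceBrowserNew(avahi.IF_UNSPEC, avahi.PROTO_UNSPEC,
                                "_presence._tcp", "local", dbus.UInt32(0))
browser = dbus.Interface(bus.get_object(avahi.DBUS_NAME, path),
                         avahi.DBUS_INTERFACE_SERVICE_BROWSER)
browser.connect_to_signal(
    "ItemNew", lambda i, p, name, t, d, f: stamp("NEW", name))
browser.connect_to_signal(
    "ItemRemove", lambda i, p, name, t, d, f: stamp("REMOVE", name))
GLib.MainLoop().run()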

yani

On Dec 14, 2007 4:31 AM, Sjoerd Simons <sjoerd at luon.net> wrote:

> On Thu, Dec 13, 2007 at 11:18:01PM -0500, Giannis Galanis wrote:
> > THE TEST:
> > 6 XOs connected to channel 11, with forwarding tables bound only to
> > themselves, so no other element in the mesh could interfere.
> >
> > The cache list was scanned continuously on all XOs using a script
> >
> > If all XOs remained idle, they all showed up reliably in each other's
> > mesh view.
> > Every 5-10 mins an XO showed as dead in some other XO's scans, but this
> > recovered shortly, and there was no visual effect in the mesh view.
>
> Could you provide a packet trace of one of these XOs in this test?
> (Install tcpdump and run ``tcpdump -i msh0 -n -s 1500 -w <some
> filename>''.)
>
> I'm surprised that with only 6 laptops you hit this case so often. Of
> course the RF environment in the OLPC is quite crowded, which could
> trigger this.
>
> Can you also run: http://people.collabora.co.uk/~sjoerd/mc-test.py
> Run it as ``python mc-test.py server'' on one machine and just ``python
> mc-test.py'' on the others. This should give you an indication of the
> amount of multicast packet loss, which can help me recreate a comparable
> setting here using netem.
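>
> If fetching the script is a problem, any sequence-numbered multicast
> sender/receiver pair measures the same thing. Here is a rough sketch of
> the idea -- not the actual mc-test.py; group and port are arbitrary:
>
> import socket
> import struct
> import sys
> import time
>
> GROUP, PORT = "239.255.42.42", 4242  # arbitrary ad-hoc multicast group
>
> def server():
>     # Send one sequence-numbered datagram every 100 ms
>     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>     sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
>     seq = 0
>     while True:
>         sock.sendto(struct.pack("!I", seq), (GROUP, PORT))
>         seq += 1
>         time.sleep(0.1)
>
> def client():
>     # Join the group and count gaps in the sequence numbers
>     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>     sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
>     sock.bind(("", PORT))
>     mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
>     sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
>     got, lost, last = 0, 0, None
>     while True:
>         seq = struct.unpack("!I", sock.recv(4))[0]
>         got += 1
>         if last is not None and seq > last + 1:
>             lost += seq - last - 1
>         last = seq
>         print("received %d, lost %d (%.1f%%)"
>               % (got, lost, 100.0 * lost / (got + lost)))
>
> if __name__ == "__main__":
>     server() if sys.argv[1:] == ["server"] else client()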
>
> > If you switched an XO manually to another channel, again it showed
> > "dead" in all the others. If you reconnected it to channel 11, there
> > was again no effect in the mesh view.
> > If you never reconnected, in about 10-15 minutes the entry was deleted,
> > and the corresponding XO icon disappeared from the view.
> >
> > Therefore, it is common and expected for XOs to show as "dead" in the
> > Avahi cache for some time.
> >
> > THE BUG:
> > IF a new XO appears (a message is received through Avahi),
> > WHILE there are 1 or more XOs in the cache that are reported as "dead",
> > THEN Avahi "crashes" temporarily and the cache CLEARS.
> >
> > At this point, ALL XOs that are listed as dead instantly disappear from
> > the mesh view.
>
> Interesting. Could you file a trac bug with this info, with me cc'd?
>
>  Sjoerd
> --
> Everything should be made as simple as possible, but not simpler.
>                -- Albert Einstein

