Salut/avahi/meshview issues

Michail Bletsas mbletsas at
Wed Jan 30 11:12:46 EST 2008

Sjoerd Simons <sjoerd at> wrote on 01/30/2008 10:46:33 AM:

> I did some research into mesh routing protocols before starting the
> salut muc work. From the research papers I've seen, proper multicast
> routing seems entirely viable. Traffic and memory overhead depend on the
> exact tradeoffs you make and the protocols used. So I see no reason why
> this can't be done on olpc's mesh network.

Given ample time, resources, many good programmers and a Turing machine, 
everything is possible.
We have something more than a Turing machine, but we are facing serious 
shortages on all the other fronts.
The distance from research papers to an actual implementation is a great one.

> > We all understand how difficult what we are trying to achieve is. The
> > firmware hasn't changed much since you started working on this
> > project. So, let's drop the finger pointing and try to come up with
> > realistic and implementable solutions.
> As said, from my point of view, proper multicast routing is an entirely
> realistic thing.
No, it is not, given the constraints at hand.

> Note that nobody is claiming MDNS is particularly suited for mesh
> networking. Because it's not. The reason why we used it is that it was
> already used on the olpc mesh even before salut came along, and we just
> didn't have the resources to do both a new presence protocol and a MUC
> protocol. Also note that our protocol uses multicast; the rationale for
> that was outlined when we proposed telepathy.
> Now the exact rationale doesn't matter much. The point is that we've
> been quite clear about the fact that we're heavily using multicast. And
> nobody ever claimed that this was a bad/unrealistic thing (at some point
> there were even interns at OLPC experimenting with reliable multicast on
> the mesh, so it seems that even inside olpc multicast was regarded as a
> good thing). So we always (maybe naively) assumed the mesh did/could do
> proper multicasting.
> When we discovered the mesh did not do proper multicasting, we did tell
> various people that this was going to be a bad thing. But apparently
> nobody ever seemed to think this was a big deal until recently.

We had found that out way before you did, hence the need to be able to 
transition from a p2p mDNS approach to the unicast, server-based one.
(What we are still missing is the intermediate step, i.e. having XOs 
become presence servers, aka "supernodes", on demand.)
The fact that some people were shocked when they realized that you cannot 
cram 500 XOs under one roof and still expect to be passing traffic around 
when you rely heavily on basic-rate multicast over the mesh is not a 
reason to radically rethink everything from scratch.
We had discussed very early on how important it would be to be able to 
control the flood, hence the requirement for per-application mesh TTL 
settings (so that we can even disable multicast flooding by setting the 
TTL to 1 for scenarios like the one in Mongolia). We can always decrease 
the contention window if we increase the multicast rate.

For completely serverless environments, what we have is invaluable. The 
fact that it doesn't scale to large numbers of nodes doesn't make it 
useless.

> > Yianni does testing; he doesn't care where specifically the problem
> > lies, all that he wants is to see something that works.
> Well, for good testing he should at least have an idea of where the
> problems are and what the issues involved are :) The scalability problem
> lies in the current combination of the mesh implementation and the mdns
> traffic; how exactly we're going to solve that is still up for
> discussion.
I don't think that the issues that Yanni pointed out are directly related 
to the transport's multicast scalability issues.
We have serious problems making Avahi and even the Jabber server do their 
thing with small numbers of nodes, so let's not blame the transport for 
everything.

More information about the Devel mailing list