[OLPC Networking] Re: Ability to set the 802.11s TTL for outgoing packets

Lennart Poettering mzbycp at 0pointer.de
Mon Oct 9 16:00:22 EDT 2006


On Mon, 09.10.06 11:48, Javier Cardona (javier at cozybit.com) wrote:

> Hi Lennart,

Hi!

> The data frames in the current implementation of the libertas mesh
> routing protocol do not include the newer Mesh Forwarding Control
> field (where the TTL counter lives).  Instead, they are in the
> standard 4-address WDS format, which is the format supported by the
> on-chip MAC controller.

So OLPC isn't really implementing the protocol that is to become
802.11s?

> However, a TTL counter is used in mesh management frames.  Management
> frames are used to discover the routes used to forward unicast frames.
> The TTL counter in mgmt frames limits the number of hops that a route
> request will travel.  Routing loops are possible in routes manually
> created by users (and this would cause infinite retransmissions!), but
> not in routes discovered by the mesh routing protocol.  This TTL is
> now a compile-time variable, but could be made available to the driver
> if needed.

I guess this mgmt TTL would not be usable for my work, since it would
globally limit the range for all outgoing packets, from all
applications, not just for those generated by Avahi.

> We are currently working with Marvell in implementing support for
> in-mesh broadcast, for which we'll have to find a way to implement TTL
> in data frames.  But until that is complete, broadcast requests are
> not forwarded, and that could be just what you need:  in the current
> released firmware, a broadcast frame will only reach its direct
> neighbors.

Broadcasts that are not forwarded at all might be an interim solution
for my problem. However, they will not suffice in the long term.

The protocol design I am currently working on (and which already
works quite well in a simulation) is based on having each node
cooperate with nodes found nearby. When a node boots up, it sends out
a single welcome packet with TTL=1 (a broadcast). If no one responds,
it raises the TTL to 2 and retries, and so on. It stops once it has
managed to reach (i.e. get a response from) roughly 10 other nodes.
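
In rough Python pseudo-code the discovery loop looks like this;
send_welcome() and collect_responses() are only placeholders for the
real packet I/O, and all the constants are made-up example values:

TARGET_PEERS = 10   # stop once this many nodes have answered
MAX_TTL = 10        # upper bound; I never intend to go beyond this

def send_welcome(ttl):
    # Placeholder for the real transmit path: broadcast a single
    # welcome packet limited to 'ttl' mesh hops.
    print("sending welcome, TTL=%d" % ttl)

def collect_responses():
    # Placeholder for the real receive path: return the set of node
    # IDs that answered within some timeout.  It returns nothing
    # here, so the loop below simply walks all the way up to MAX_TTL.
    return set()

def discover_neighbours():
    peers = set()
    for ttl in range(1, MAX_TTL + 1):
        send_welcome(ttl)
        peers |= collect_responses()
        if len(peers) >= TARGET_PEERS:
            break   # enough cooperating nodes found nearby
    return peers

discover_neighbours()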

Simply because not every node will necessarily run Avahi (think of
the case where the CPU is powered off but the WLAN chip still
forwards traffic), I need to do those broadcasts/multicasts with a
TTL != 1. If every single node ran Avahi, those TTL=1 broadcasts
would be sufficient.

I guess for a start I can simply use those broadcasts for my work, as
you suggest, and assume that Avahi runs on every single node. In the
long run, however, I'd really like to make use of different TTL
values.

> In summary:
> 
> The TTL in data frames would determine how many hops a data packet
> would travel.  This is not implemented yet.

Do you have any idea when this will be available?

> The TTL in management frames determines how far a node will look for
> nodes it has traffic for.  This is a compile-time variable that could
> easily be exported to userland.

While I am not going to use this in Avahi, it might nonetheless make
sense to export it to userspace.

> In the current release, broadcast frames only reach a node's direct 
> neighbors.
> 
> Please let me know what would be the requirements to support your mDNS
> responder.

All I need is a way to reach just the hosts "nearby". Because the
CPUs of nearby nodes might not be powered on or running Avahi, I need
the ability to broadcast with a TTL > 1.

It's not really important whether this is actually implemented with
multicasting or broadcasting. I assume that most of the OLPC machines
*will* run Avahi, hence in effect there is not much of a difference
whether I send my packets as broadcast or as multicast.
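
Just to make explicit what kind of control I am after: at the IP
layer this is nothing more than the standard multicast TTL socket
option, as in this trivial Python snippet (224.0.0.251/5353 is the
usual mDNS group and port). What is missing is the equivalent
per-packet knob one layer down, in the mesh:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Limit this one packet to two IP hops.  The mesh layer currently has
# no comparable per-packet setting, which is what this thread is
# about.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.sendto(b"dummy mDNS payload", ("224.0.0.251", 5353))
sock.close()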

Please note that I will never use large TTLs, i.e. anything > 10.
Hence, a dirty solution could be to define a few magic MAC addresses
like FF:FF:FF:FF:FF:FE which behave like a broadcast with TTL=1, and
so on. If you did this, there would be no need to change the frame
format.
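
Just to illustrate the idea (the concrete encoding would of course be
entirely up to you and the firmware; the mapping below is only an
example of mine), the receiving side could derive the TTL from the
last octet like this:

def magic_mesh_ttl(dest_mac):
    # Example decoding only: FF:FF:FF:FF:FF:FE -> TTL 1,
    # FF:FF:FF:FF:FF:FD -> TTL 2, and so on.  The ordinary broadcast
    # address FF:FF:FF:FF:FF:FF is left untouched.
    octets = [int(part, 16) for part in dest_mac.split(":")]
    if octets[:5] != [0xFF] * 5 or octets[5] == 0xFF:
        return None         # not one of the magic addresses
    return 0xFF - octets[5]

assert magic_mesh_ttl("FF:FF:FF:FF:FF:FE") == 1
assert magic_mesh_ttl("FF:FF:FF:FF:FF:FD") == 2
assert magic_mesh_ttl("FF:FF:FF:FF:FF:FF") is None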

Something completely unrelated:

I am currently doing lots of simulations for my new protocol. For
that I generate random mesh networks. However, I am not quite sure
what parameters to choose for these networks. My job is to get Avahi
to scale to up to 10,000 hosts, so that's one parameter. I read
somewhere that the diameter of the mesh graph is expected to be low,
in the range of 4-5. Is this true? What about the degree of the
vertices (in the sense of graph theory, i.e. the number of other
vertices a vertex is connected to)? What is the maximum vertex degree
expected in real-world networks? What are the parameters the Marvell
hardware has been designed for?
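
As a minimal sketch of what I mean by "random mesh networks", and
assuming a random geometric graph is an acceptable stand-in for the
radio topology, something like this (Python, using networkx) would
generate the graphs and report diameter and degree. Node count and
radio range are arbitrary example values, which is exactly why I am
asking about realistic parameters:

import networkx as nx

NODES = 1000        # scaled-down example; the target is ~10,000 hosts
RADIO_RANGE = 0.06  # two nodes are linked if they are closer than this

g = nx.random_geometric_graph(NODES, RADIO_RANGE)

degrees = [g.degree(n) for n in g.nodes()]
print("max degree:", max(degrees))
print("mean degree:", sum(degrees) / float(len(degrees)))
if nx.is_connected(g):
    print("diameter:", nx.diameter(g))
else:
    print("graph not connected; increase RADIO_RANGE")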

One last thing: is cross-posting to the libertas and olpc-networking
mailing lists a good idea? I started doing this because I didn't know
which one to use, but perhaps we should move this thread to just one
of them (probably libertas-dev?).

Thank you very much for your response!

Lennart

-- 
Lennart Poettering; lennart [at] poettering [dot] net
ICQ# 11060553; GPG 0x1A015CC4; http://0pointer.net/lennart/

