Collaboration Requirements
Ricardo Carrano
carrano at laptop.org
Thu Jul 31 00:02:30 EDT 2008
Dear Pol, Greg and Michael,
There is so much going on here that it's difficult to know where to start.
I mostly second Michael's comments. Though Greg obviously took a lot
of his time to put these goals together, I think we are missing the
target. It's good to have goals such as "40 XOs being able to chat in
a quiet environment", but right now they seem just too arbitrary. I'm
not saying that the requirements are not legitimate, or do not come
from legitimate sources (deployments), I'm just saying that we're
approaching the scalability problem in an unrealistic way. I'll try to
explain why and then I'll propose an alternative approach (i.e. where
to put our efforts).
Characterization is hard
=================
Spectrum conditions are *very* difficult to characterize. Having a
spectrum analyzer will give you some idea of the conditions at a point
in space (where the spectrum analyzer is) and over a certain time
interval (the consolidation interval), which may be completely
irrelevant or misleading. Many times it's like predicting the growth
of a specific plant in Alice's garden based on the average annual
temperature of the whole country.
My point is that 10 nodes may be able to chat until someone opens the
door or an elevator stops on the floor. What I mean is that the only
quantitative measurement that's of any value is the theoretical
maximum limit. What we need to know is how many nodes will be able to
chat under ideal (and unreal) conditions, and then clarify to everyone
involved that there is no way to achieve that, ever (with the current
implementation, or "build"). All we can do is wait for the
elevator to go away, for the microwave oven to be turned off, or for
the neighbor to stop downloading an mpeg via his access point. How we
get there is the next topic.
Analytical models are necessary
========================
Each application has its own demands, and expectations must be set
according to these demands. Activity requirements in terms of
traffic (ideally bandwidth, delay and jitter, but minimally bandwidth)
should be known. This is how you determine whether a given link can or
cannot support, say, a VoIP conversation. We must be able to model the
traffic demands of our collaboration software likewise. What is the
traffic generated by a chat between 10 XOs if each participant types
one message of 20 bytes every ten seconds?
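Just to make this concrete, here is a back-of-envelope calculation in
Python (it uses only the numbers from the example above - the "one
20-byte message every ten seconds" figure is illustrative, not a
measurement):

  # Offered application-layer load for the example chat above
  participants = 10
  message_bytes = 20
  interval_s = 10.0
  offered_bytes_per_s = participants * message_bytes / interval_s
  print(offered_bytes_per_s)  # 20.0 bytes/s of chat payload, before any
                              # protocol overhead or per-recipient duplication

That number is tiny by itself; what matters is what the transport and
the mesh turn it into.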
Once you have that number you should take the transport into
consideration. For example, by determining that each of the chat
messages will be encoded in a UDP frame of 460 bytes, which will be
transmitted at 2 Mbps and will take about 2 ms to transmit. On top
of that you should consider how this frame will flood the mesh, if
that's the case, i.e. compute the number of retransmissions.
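Continuing the same example, a rough airtime estimate could look like
the sketch below. The 460-byte frame, the 2 Mbps rate, and the idea
that each forwarding node retransmits a flooded frame once come from
the paragraph above; the flood size is a made-up parameter that you
would have to plug in for your own mesh:

  # Rough per-message airtime for the example above
  frame_bytes = 460         # UDP frame carrying one chat message
  phy_rate_bps = 2e6        # 2 Mbps
  tx_time_s = frame_bytes * 8 / phy_rate_bps  # ~1.84 ms, roughly the 2 ms above

  # If the frame is flooded, every forwarding node retransmits it once
  forwarding_nodes = 10     # illustrative flood size, not a measurement
  airtime_per_message_s = tx_time_s * forwarding_nodes

  # 10 participants, one message every 10 s -> 1 message/s on the channel
  messages_per_s = 10 / 10.0
  channel_fraction = airtime_per_message_s * messages_per_s  # ~0.018 of the airtime

The exact numbers matter less than the fact that the model gives you a
ceiling to compare the testbed against.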
You do that and it will give you a number. You validate this on a
testbed that tries to emulate the most favorable environment. If you
get anywhere near your analytical model, you're good to go. If not,
understand why and try to determine whether your model is flawed
(monitoring the testbed will tell you) or your testbed is just too far
from optimal (some experience is required to say so, but basically you
change the environment and repeat the measurements to see whether
things improve).
Improvements are mandatory
=====================
In parallel you do your improvements in the stack. You try to write
more efficient applications, middleware and protocols to achieve the
same result. You trim out unnecessary overhead, you compact, you
aggregate, you wait before transmitting so maybe you don't need to.
There is a lot we already know on that front that we really need to
implement (I agree with Pol on that). We can send beacon and probe
requests less frequently, we can raise the route expiration time, just
to mention two things that do not imply any change in code. But we
also need to change code, to substitute one protocol for another, etc.
I don't want to start discussing this now. I am just basically trying
to say that efforts to improve scalability should happen in parallel
to the modeling and analysis and should be a *permanent* effort in the
development of the whole stack.
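Just as one illustration of the "aggregate, wait before transmitting"
idea, here is a minimal sketch reusing the 460-byte frame from the
example above (the batching factor and the helper name are made up for
illustration; this is not existing OLPC code):

  # Illustrative only: amortizing per-frame overhead by batching chat messages
  FRAME_OVERHEAD_BYTES = 440   # a 460-byte frame carrying a 20-byte message
  MESSAGE_BYTES = 20

  def bytes_on_air(messages, per_frame):
      # total bytes transmitted if messages are packed per_frame at a time
      frames = -(-messages // per_frame)   # ceiling division
      return frames * FRAME_OVERHEAD_BYTES + messages * MESSAGE_BYTES

  print(bytes_on_air(10, 1))   # 4600 bytes: one frame per message
  print(bytes_on_air(10, 5))   # 1080 bytes: five messages per frame

The trade-off is the extra latency while you wait for a batch to fill,
which is exactly why this kind of change belongs in the modeling
effort as well.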
In short: we have a limited resource - the shared spectrum. The only
effective things to do are:
* design/implement a less spectrum-demanding collaboration
* build analytical models of this collaboration and try to extract
realistic expectations from them.
Cheers!
Ricardo
On Wed, Jul 30, 2008 at 11:32 PM, Polychronis Ypodimatopoulos
<ypod at mit.edu> wrote:
> Dear Greg and Michael,
>
> It seems to me that we spend more time discussing things, instead of
> implementing them. The issue of scalability in large ad-hoc networks has
> been around for more than a decade and some pretty decent research
> results have been out there for several years now. Even if you pick one
> randomly you are guaranteed to scale a whole order of magnitude
> better than OLPC's current implementation. Just pick one and implement
> it. I'm afraid that it is no exaggeration to say that, from a network
> engineering standpoint, the current collaboration mechanism is literally
> the worst one possible, scaling quadratically with the number of nodes
> whether or not an access point is used. I do not mean to sound
> condescending, but rather note that it is very easy to improve on our
> current situation.
>
> I would rather see us spending our time iterating through implementation
> of a viable solution, large-scale testing (anyone testing collaboration
> with _scale_ in mind using 2-3 XOs should just be fired) and thinking
> about how to build and use feedback mechanisms (that do not involve
> humans) from actual deployments in schools in the US (where an internet
> connection is dependable) wrt our collaboration technology.
>
>
> Pol
>
> --
> Polychronis Ypodimatopoulos
> Graduate student
> Viral Communications
> MIT Media Lab
> Tel: +1 (617) 459-6058
> http://www.mit.edu/~ypod/
>