Monday's Testing

Greg Smith (gregmsmi) gregmsmi at cisco.com
Fri Mar 28 08:33:05 EDT 2008


Hi Wad, et al,

I think some use cases and requirements definition would be helpful
here. I don't think the field clearly understands what Mesh and
collaboration support, and that can lead to misalignment of design and
usage.

Here is an example of possible use case definitions, hopefully covering
the relevant variables for SW design:

1 - School has server and Active Antenna (AA) (1,2, or 3) (example?
maybe Uruguay)

2 - School has wireless AP and server (Nepal)

3 - School has wireless AP and no server (Cambodia)

4 - School has no wireless AP or server (aka no internet, a la some
schools in Peru)

5 - School has n (50, 100, 150, 200) XOs on simultaneously

6 - XO closest to AP or AA providing intermediate hop to internet for
XOs (1, 2, 3 max? hops away)

7 - n (50, 100, 150) XOs in class and m laptops providing intermediate
hops to internet (I can also see a whole bunch of "tree" cases and
combinations of roots and branches getting to the internet).

Looking at the collaboration "layer" (may need Mesh, ad hoc and ejabberd
versions, may also need AP, AA and no-net versions, not sure)

8 - n (1, 2, 3 max?) classes of students of m each (up to 50?) using XOs
simultaneously. Classes are x feet apart and XOs 2 feet apart within a
class (does physical granularity matter at this level?). No need to Mesh
(L2) or collaborate (L5?) between classes. (different subnets?)

9 - One class in the school yard with kids running around and two others
in class. Means XOs turned on and off and moving closer and farther away
rapidly (my twins outran me at 3 years old; I hope 50 3rd graders
can't outrun the dynamic mesh :-)

10 - Two students sharing a book or activity with each other *25 (max?)
for all pairs of students in class.

11 - Teacher sharing an activity with all 50 students *n classes within
a school (1,2,3,4, n) (also list activities if relevant - e.g. web
browsing vs. others?)

12 - Two students watching a video (don't know if it's supported, but
find high BW and low BW examples) *25 for class

13 - Teacher sharing video with all students (same note as above)

14 - 1/2 class sharing high BW, half sharing low BW activities

15 - Students form groups of 3 - 5 who all share (low and high BW)
activities *n groups per class. Groups forming and dividing rapidly at
start then settling down.

16 - Everyone turns on their XO at the same time *n (50, 100, 150, etc.).
Class starts with 2 - 3 XOs firing up every minute for 10 minutes.
Another class in range has all 50 XOs on already.

Etc.

I hope I didn't munge my mesh and collaboration layers too much.

My point is that at the end of this testing you need to have some clear,
user understandable supported setups. Nail one or two, bound them well,
and say they are supported. 

Also, define what "supported" means. For example:

A - Works at the same speed as a solo XO, or works x% slower

B - Accessing the internet takes up to 50 seconds for the first packet
out, with latency of 0.1 second after that per XO hop away from the AP

C - Mouse move on one XO has y latency to appear on a second XO and y + z
latency to appear on n (>1) XOs.

Etc.
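To make a "supported" definition like B above testable, it helps to write
the latency budget down explicitly. This is just a sketch using the
illustrative numbers from the example (50 s first-packet budget, 0.1 s per
hop); the function name and threshold are mine, not a spec:

```python
# Illustrative latency budget for definition B above:
# up to 50 s for the first packet out, then 0.1 s of added
# latency per XO mesh hop away from the AP.
FIRST_PACKET_MAX_S = 50.0
PER_HOP_LATENCY_S = 0.1

def steady_state_latency(hops_from_ap: int) -> float:
    """Latency budget after the first packet, per XO hop away from the AP."""
    return PER_HOP_LATENCY_S * hops_from_ap

for hops in (1, 2, 3):
    print(f"{hops} hop(s): {steady_state_latency(hops):.1f} s budget")
```

A test run would then pass or fail a given topology against these numbers
instead of a vague "it feels slow".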

Nail a few supported uses and we can drive everyone to start with those.
Even better, give general guidelines and list unsupported uses.

That's a quick brainstorm on my part but I haven't actually used XO to
collaborate. Ask the schools and educators how they use it or want to
use it. It takes a long time to develop a meaningful dialog but find
some representative users who get back to you quickly for starters.

If all of these use cases are supported, that's great as long as they
all work. You should still say what is supported at the user level as
people will have other ideas that we never even thought of...

That's way too much work to do before Monday, but think of one or two
cases you know work after this testing. Then ask educators if they fit.
Then we tell all customers to start with those!

HTHs.

Thanks,

Greg S

***********
Wad -

Some people will say that reactive protocols are bursty and route
acquisition time is long. I don't disagree.
But I believe we have room for improvement without any radical (and
costly) change. The key is to adapt.
We are very focused now on dense clouds (for good reasons), but our
parameters are sub-optimal for this scenario.

In a dense scenario, we should:
1 - Eliminate probe responses
2 - Increase contention window
3 - Increase route expiration time
4 - Increase multicast transmission rate

My suggestion for the Cambridge testbed is:
1 - Validate the probe response driver patch submitted by Marvell and
implement it
2 - Increase contention window from (7, 31) to (31, 1023)
3 - Increase route expiration time from 10 to 20 seconds
4 - Increase mcast rate from 2 to 11 Mbps.

All of the above are trade-offs and should be considered in dense mesh
scenarios only. Based on what I see in my own testbed, they will reduce
the duration of bursts and also make you more resilient to them.
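The sparse-vs-dense trade-off above can be captured as two parameter
profiles. A sketch, with the values taken from the suggestions above; the
key names (`cw_min`, `dense_threshold`, etc.) are my own shorthand, not
actual driver knobs, and the threshold of 30 nodes is a guess:

```python
# Current (sparse-cloud) defaults, per the message above.
SPARSE = {
    "probe_responses": True,
    "cw_min": 7, "cw_max": 31,      # contention window
    "route_expiration_s": 10,       # reactive-route lifetime
    "mcast_rate_mbps": 2,           # multicast transmission rate
}

# Proposed dense-cloud values.
DENSE = {
    "probe_responses": False,       # 1 - eliminate probe responses
    "cw_min": 31, "cw_max": 1023,   # 2 - widen contention window
    "route_expiration_s": 20,       # 3 - keep routes twice as long
    "mcast_rate_mbps": 11,          # 4 - raise multicast rate
}

def params_for(node_count: int, dense_threshold: int = 30) -> dict:
    """Pick a profile by cloud size; the threshold is a guess, not measured."""
    return DENSE if node_count >= dense_threshold else SPARSE
```

The point of writing it this way is that "dense" becomes an explicit,
tunable condition rather than a default baked into the driver.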

***************
It is safe to change these values on the fly.
Marvell was discussing doing it automatically.
A simple heuristic for detecting congestion is to check the
retransmission counters, so this is a relatively simple adaptive
behavior to implement.
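The retransmission-counter heuristic might look something like this
sketch. Purely illustrative: the sampling interval and the 50/s threshold
are numbers I picked, and reading the actual counter would depend on
whatever statistic the driver exposes:

```python
# Sketch of the proposed adaptive behavior: sample the retransmission
# counter periodically and treat a fast-growing delta as congestion.
def is_congested(prev_retx: int, curr_retx: int,
                 interval_s: float, threshold_per_s: float = 50.0) -> bool:
    """True if retransmissions grew faster than the (guessed) threshold."""
    return (curr_retx - prev_retx) / interval_s > threshold_per_s

# Example: 600 new retransmissions over a 10 s window reads as congestion;
# switching to the dense-mesh parameters would follow from a True here.
```

Since the message notes the values are safe to change on the fly, a loop
like this could flip between parameter sets at runtime, which is roughly
what Marvell was discussing automating.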

M 


