headtracker / mag lens / macro lens (linaccess at yokoy.de)

linaccess at yokoy.de
Wed Sep 5 18:03:17 EDT 2007


Hello Luke,

I will consider the paraboloid thing, thanks for the code.
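
If I understand the regression idea from your earlier mail correctly, for a fixed 3x3 neighborhood the least-squares paraboloid fit reduces to a handful of weighted sums, so no general regression code should be needed. A rough, untested sketch of what I would try (function and variable names are my own; the layout of z[] matches your y1d[] below):

#include <math.h>

/* Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*x*y
 * to the 3x3 neighborhood of the brightest pixel, returning the
 * sub-pixel position of the extremum relative to that pixel.
 * z[] holds the 9 magnitudes, row by row, for (x,y) in {-1,0,+1}. */
static void paraboloid_peak(const double z[9], double *dx, double *dy)
{
    double sz = 0.0, sxz = 0.0, syz = 0.0;
    double sxxz = 0.0, syyz = 0.0, sxyz = 0.0;
    int i;

    for (i = 0; i < 9; i++) {
        int x = (i % 3) - 1;                 /* -1, 0, +1 */
        int y = (i / 3) - 1;
        sz   += z[i];
        sxz  += x * z[i];
        syz  += y * z[i];
        sxxz += x * x * z[i];
        syyz += y * y * z[i];
        sxyz += x * y * z[i];
    }

    /* Closed-form least-squares coefficients on the 3x3 grid */
    double c1 = sxz / 6.0;
    double c2 = syz / 6.0;
    double c3 = sxxz / 2.0 - sz / 3.0;
    double c4 = syyz / 2.0 - sz / 3.0;
    double c5 = sxyz / 4.0;

    /* Extremum of the paraboloid: solve the 2x2 system grad = 0 */
    double det = 4.0 * c3 * c4 - c5 * c5;
    if (fabs(det) < 1e-12) {                 /* nearly flat: keep integer peak */
        *dx = 0.0;
        *dy = 0.0;
        return;
    }
    *dx = (c5 * c2 - 2.0 * c4 * c1) / det;
    *dy = (c5 * c1 - 2.0 * c3 * c2) / det;
}

The result would then be added to the integer position of the brightest pixel, like off_x/off_y in your snippet.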

I am not tracking only one point. I am tracking a bundle of points - say 5x3 to 20x12 pixels - and its size and aspect ratio change all the time. From those pixels (and maybe some averaging over time) I have to build an AVG pixel with a defined XY value. The AVG pixel != the brightest pixel. For that AVG pixel I could use subpixel accuracy, too. I have to use subpixel accuracy because I have to map the low camera resolution to 1200x900 pixels if I want to use absolute coordinates.
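
To make clearer what I mean by the AVG pixel, here is a rough, untested sketch (the function name, the threshold and the 1200x900 mapping are only placeholders):

/* Intensity-weighted centroid ("AVG pixel") of all bright pixels in a
 * grayscale frame, mapped from camera resolution to screen resolution.
 * Returns 0 if no pixel reaches the threshold. */
static int avg_pixel(const unsigned char *frame, int width, int height,
                     unsigned char threshold,
                     double *screen_x, double *screen_y)
{
    double sum_w = 0.0, sum_x = 0.0, sum_y = 0.0;
    int x, y;

    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++) {
            unsigned char v = frame[y * width + x];
            if (v >= threshold) {            /* part of the tracked blob */
                sum_w += v;
                sum_x += v * x;
                sum_y += v * y;
            }
        }
    }
    if (sum_w <= 0.0)
        return 0;                            /* no blob found */

    /* Sub-pixel centroid in camera coordinates */
    double cx = sum_x / sum_w;
    double cy = sum_y / sum_w;

    /* Map camera coordinates to the 1200x900 screen (absolute positioning) */
    *screen_x = cx * 1200.0 / width;
    *screen_y = cy * 900.0 / height;
    return 1;
}

Without absolute coordinates, the same centroid could simply be differenced between frames to get relative mouse movement.
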
Where do I get the size 90x30 from? It is not a pixel-exact value but an approximation; it varies.
I tried to move only my head, not my eyes, looking at the four corners and the center of the XO display. The display is very small, so I did not move my head very much. One infrared-filtered result, merged from 5 snapshots (four corners and the center), is here:
http://www.olpcaustria.org/mediawiki/upload/7/79/Headtracker_area_small.jpg
It is downsized from 640x480 to 320x240 px, but the proportions are the same.

Maybe I have a knot in my brain. I would really like to get the headtracker working well without an additional lens.
Again the link to the project site:
http://www.olpcaustria.org/mediawiki/index.php/Headtracker

greetings,
yokoy




On Wed, 5 Sep 2007 12:57:54 -0400
"Luke Hutchison" <luke.hutch at gmail.com> wrote:

> PS there's a "cheap" way you can accomplish subpixel accuracy, as follows.
> You basically take a bunch of 1D samples through the brightest pixel,
> looking at 2 neighbors in each direction, and then take a weighted sum of
> the results.  This calls the code I pasted in the last email.  It's not
> going to be as good as paraboloid regression, but it should allow you to
> test feasibility.
> 
> 
> // Do a 2D parabolic fit to give sub-pixel accuracy for translational
> // registration, by performing four 1D parabolic fits through the closest
> // integer correlation offset: horizontally, vertically, and on the leading
> // and trailing diagonals.  Take the weighted centroid of all four to get
> // the true correlation offset.
> 
> // off_x and off_y are the one-pixel-accurate correlation offsets recovered
> // by correlation.
> 
> // x1d and y1d are x and y values for the 1D quadratic fit function
> double y1d[9] = {     // Get magnitudes at (off_x,off_y) and 8 neighbors
>   dft_mag(dft, off_x - 1, off_y - 1),
>   dft_mag(dft, off_x    , off_y - 1),
>   dft_mag(dft, off_x + 1, off_y - 1),
>   dft_mag(dft, off_x - 1, off_y    ),
>   dft_mag(dft, off_x    , off_y    ),
>   dft_mag(dft, off_x + 1, off_y    ),
>   dft_mag(dft, off_x - 1, off_y + 1),
>   dft_mag(dft, off_x    , off_y + 1),
>   dft_mag(dft, off_x + 1, off_y + 1)
> };
> 
> // Sum contributions to centroid of each quadratic fit
> double x1d_tot = 0.0, y1d_tot = 0.0, x1d;
> 
> // Parabolic fit in horiz direction through correlation maximum
> x1d = parabolic_fit(-1, y1d[3], 0, y1d[4], 1, y1d[5]);
> x1d_tot += x1d;
> 
> // Parabolic fit in vert direction through correlation maximum
> x1d = parabolic_fit(-1, y1d[1], 0, y1d[4], 1, y1d[7]);
> y1d_tot += x1d;   // [x1d is x in parabola space, but y in correlation space]
> 
> // Weight contributions of diagonal by the inverse of their distance
> #define RT2_OV_2  0.7071067811865475244   // sqrt(2)/2  (= 1/sqrt(2))
> 
> // Parabolic fit in leading diagonal direction through correlation maximum
> x1d = parabolic_fit(-1, y1d[0], 0, y1d[4], 1, y1d[8]);
> x1d_tot += x1d * RT2_OV_2;
> y1d_tot += x1d * RT2_OV_2;
> 
> // Parabolic fit in trailing diagonal direction through correlation maximum
> x1d = parabolic_fit(-1, y1d[2], 0, y1d[4], 1, y1d[6]);
> x1d_tot -= x1d * RT2_OV_2;
> y1d_tot += x1d * RT2_OV_2;
> 
> // Take centroid of all parabolic fits, weighting diagonals by RT2_OV_2;
> // make relative to correlation coords by adding off_x, off_y
> double subpix_off_x = off_x + x1d_tot / (2.0 + 2.0 * RT2_OV_2);
> double subpix_off_y = off_y + y1d_tot / (2.0 + 2.0 * RT2_OV_2);
> 
> 
> 
> On 9/5/07, Luke Hutchison <luke.hutch at gmail.com> wrote:
> >
> > Where do you get the size 90x30 from though?  Are you saying you can't get
> > at the full-sized frame through the API currently?
> >
> > You really should consider fitting a paraboloid over the dot to get
> > sub-pixel resolution.  Note that if the dot is bigger (more than a few
> > pixels), you probably want to just use the weighted centroid, but if it's
> > small, a paraboloid is the right approach.  You really will get at least a
> > 10x increase in accuracy in both x and y, bringing your effective resolution
> > to something like 900x300 for the example you gave.  You may not even need a
> > lens.  I have used this before with success for an image processing project.
> >
> >
> > Here's the code for the 1D version:
> >
> > // Fit a parabola to three points, and return the x coord of the turning
> > // point (point 2 is the central point, points 1 and 3 are its neighbors)
> > double parabolic_fit(double x1, double y1,
> >                      double x2, double y2,
> >                      double x3, double y3) {
> >
> >   double a = (y3 - y2) / ((x3 - x2) * (x3 - x1)) -
> >              (y1 - y2) / ((x1 - x2) * (x3 - x1));
> >
> >   double b = (y1 - y2 + a * (x2 * x2 - x1 * x1)) / (x1 - x2);
> >
> >   double xmin = x2;       // Just use central point if parabola is flat
> >   if (fabs(a) > EPSILON)
> >     xmin = -b / (2 * a);  // [avoids division by zero]
> >
> >   // Use the following to calculate the y-value at the turning point
> >   // of the parabola:
> >   //
> >   //   double c = y1 - a * x1 * x1 - b * x1;
> >   //   double ymin = a * xmin * xmin + b * xmin + c;
> >
> >   return xmin;
> > }
> >
> > I don't have code for the 2D version unfortunately.
> >
> > The 2D version (fitting a paraboloid to a neighborhood of more than four
> > points total) is overdetermined, as is the 1D version if fitting a parabola
> > to more than three points (e.g. using two neighbors on either side of the
> > brightest pixel).  Thus you need to do some sort of regression to find the
> > best fit.  I'm sure there is code out there to accomplish this.
> >
> > Luke
> >
> >
> > On 9/5/07, linaccess at yokoy.de <linaccess at yokoy.de> wrote:
> > >
> > > Hello Luke,
> > >
> > > On Tue, 4 Sep 2007 16:11:34 -0700
> > > "Luke Hutchison" < luke.hutch at gmail.com> wrote:
> > >
> > > > Is the processing time for 640x480 the reason you're only using 90x30?
> > > >
> > >
> > > No, I am not at the point of thinking about optimizing the processing
> > > time.
> > > You don't want to dance in front of the camera to control the mouse
> > > pointer. You just want to move your head a few degrees, as if you were
> > > looking at the edges of the display without moving your eyes. The area
> > > the camera then sees moving is very small. It is like moving the mouse
> > > only one or two millimeters to move the mouse pointer across the whole
> > > desktop. To get more 'delta pixels' I think I need a magnification lens.
> > >
> > > regards,
> > >
> > > yokoy
> > >
> > >
> > >
> > > > You can actually dramatically increase the precision to which you can
> > > > read back the bright point's location by fitting a paraboloid to the
> > > > intensity values in the neighborhood of the brightest pixel, then
> > > > reading off the location of the extremum of the paraboloid.  You will
> > > > get at least one order of magnitude more accuracy that way than looking
> > > > at the integer coords of the brightest pixel (perhaps as much as two
> > > > orders of magnitude).
> > > >
> > > > Luke
> > > >
> > > >
> > > > On 9/4/07, linaccess at yokoy.de <linaccess at yokoy.de> wrote:
> > > > >
> > > > > Hello Mary Lou,
> > > > >
> > > > > On Tue, 04 Sep 2007 12:13:34 -0400
> > > > > Mary Lou Jepsen <mljatolpc at gmail.com> wrote:
> > > > >
> > > > > > lenses are cheap.  it depends on what exactly you are doing with
> > > the
> > > > > > software.
> > > > >
> > > > > Tracking a little shiny point on the head and transforming it into
> > > > > mouse-pointer movements. Here is the description:
> > > > > http://www.olpcaustria.org/mediawiki/index.php/Headtracker
> > > > >
> > > > > With the XO camera we typically use only 90x30 pixels out of the
> > > > > 640x480 pixels. So I want to magnify the operative area with a lens.
> > > > > Here is a picture of the area:
> > > > >
> > > > > http://www.olpcaustria.org/mediawiki/index.php/Headtracker#magnification_lens
> > > > >
> > > > >
> > > > >
> > > > > > American Science and Surplus is a good way to experiment:
> > > > > > http://sciplus.com/category.cfm?subsection=21
> > > > > >
> > > > >
> > > > > Thank you for that link. A plastic lens is what I am looking for.
> > > > >
> > > > >
> > > > > > then to china for mass production at very low price point.
> > > > > >
> > > > > >
> > > > > > - Mary Lou
> > > > > >
> > > > > >
> > > > >
> > > > > regards,
> > > > >
> > > > > yokoy
> > > > > --
> > > > >
> > > > > _______________________________________________
> > > > > Devel mailing list
> > > > > Devel at lists.laptop.org
> > > > > http://lists.laptop.org/listinfo/devel
> > > > >
> > > >
> > >
> > >
> > > --
> > >
> > >
> >
> 


-- 
 


