OLPC and GLX
bernie at laptop.org
Sun Jan 20 21:11:06 EST 2008
(please forgive minor mistakes as I was in a hurry and
I had no time to double-check some of the facts reported
below. Feel free to point out corrections, of course).
Bryan Duff wrote:
> After compiling in mesa source to get GLX working on the FC7-based
> builds. Running `glxgears -fullscreen` I get ~25 fps. Compared to
> Ubuntu which gets 65 fps, this is rather poor.
> I think the Ubuntu performance shows that 3D (albeit simple 3D) is very
> possible and worthwhile.
Nobody ever thought 3D was impossible or unfeasible on a
full-blown x86 CPU at 466 MHz, when it very clearly was
possible even on the 8-bit, 1 MHz C64 with no hardware
floating-point support.
My question is: why are you trying so hard to achieve
*unaccelerated*, *indirect*, *GLX* support when it would
be much easier and faster to just use OpenGL in-process
and then XPutImage the resulting buffer?
> I've tried the amd and fbdev drivers - they make no performance
> difference. Neither does i386 versus i586 (w/ mmx and 3dnow! included
> in cflags).
Of course it makes no difference, because the server-side
software renderer gets used in both cases. And this
renderer happens to be exactly the same code as Mesa,
only built in a slightly different way.
There's no need to run benchmarks to find this out.
A simple inspection of the source code would reveal that
there are no hardware-accelerated paths in Mesa for the
Geode. That said, some primitive form of GL acceleration
could certainly be implemented for very trivial operations.
> I haven't tried DRI. I don't understand how it would help if I
> understand DRI correctly, but perhaps I'm missing something. I have no
> idea if processes that are re-niced automatically, or have some sort of
> resource limitation.
Uh? DRI is the in-kernel support for DMA, command-queue
completion interrupts, and now memory management through
TTM, for *direct* rendering clients.
On the other hand, the indirect (i.e. server side)
rendering path you're using historically happened without
DRI's help because it was unaccelerated anyway.
The X server still had some interaction with DRI in
order to coordinate window management operations with
direct rendering clients.
Recently, Kristian Høgsberg made a major architectural
change to add 3D acceleration to the server-side Mesa
renderer. This is AIGLX.
While AIGLX is somewhat slower than direct rendering due
to the overhead of the GLX wire protocol, and generally
not recommended for 3D intensive games, it is no doubt
the way to go to achieve 3D desktop effects, and to
redirect 3D clients to off-screen buffers so that their
output can be further processed.
At this time, the AIGLX effort is halfway finished:
the basic features work beautifully and efficiently,
but XVideo does not yet play by the rules, and mouse
input keeps ignoring the 3D transformations.
Another big showstopper for a fully 3D Linux desktop
is that it requires aggressive use of graphics card
memory in a multitasking environment, and therefore a
sophisticated memory manager with a smart swap-in
policy to keep things fast on a crowded desktop.
Although an implementation already exists, the
underlying design is still being debated and could
well undergo further changes before it finds its
way into the kernel.
|___| Bernardo Innocenti - http://www.codewiz.org/
\___\ One Laptop Per Child - http://www.laptop.org/