>> The kernel init improvements will certainly save another 15 seconds.
>> Parallelising sysvinit might save some time as well, say 5 seconds
>> (a low-end estimate).
<pre wrap=""><!---->
Parallelization will not help at all if you are using JFFS2. The low
level NAND driver that JFFS2 uses busy waits for I/O, and then JFFS2 is
CPU-bound on the decompression step, preventing any useful concurrency.
The busy-wait could be changed to an interrupt - if only someone had
time to do the work and test it extensively. The decompression is going
to be CPU bound no matter what you do, so the only option is to arrange
for the important files not to be compressed (thus increasing the NAND
footprint).
</pre>
</blockquote>

Hi Guylhem!

From what I have been told, the busy waiting happens because there is no
scatter-gather support in the NAND driver, so the interrupt rate is high
and busy waiting ends up being faster than a context switch. It would
probably help to use interrupts for large I/O and busy-wait for small
I/O, but that needs testing.
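
To make that idea concrete, here is a rough sketch of the hybrid wait in
kernel-style C. It is only an illustration of the strategy, not the real
OLPC NAND driver: struct nand_dev, nand_status_ready(), nand_enable_irq()
and the 2 KiB threshold are all invented for this sketch, and the
threshold would have to be tuned by measurement. Only cpu_relax(),
completions and wait_for_completion_timeout() are standard kernel
primitives.

    /* Hypothetical hybrid wait: spin for small transfers, sleep on the
     * controller's completion interrupt for large ones.  All nand_*
     * names and the threshold are made up for this sketch. */
    #include <linux/completion.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    #define NAND_IRQ_THRESHOLD 2048       /* bytes; tune on real hardware */

    struct nand_dev {
            struct completion done;       /* completed by the IRQ handler */
    };

    static bool nand_status_ready(struct nand_dev *dev); /* poll status bit */
    static void nand_enable_irq(struct nand_dev *dev);   /* unmask the IRQ */

    static int nand_wait_ready_hybrid(struct nand_dev *dev, size_t xfer_len)
    {
            if (xfer_len < NAND_IRQ_THRESHOLD) {
                    /* Small I/O: a context switch costs more than the
                     * transfer itself, so just spin on the status bit. */
                    while (!nand_status_ready(dev))
                            cpu_relax();
                    return 0;
            }

            /* Large I/O: sleep until the ISR calls complete(&dev->done),
             * freeing the CPU for other work in the meantime. */
            reinit_completion(&dev->done);
            nand_enable_irq(dev);
            if (!wait_for_completion_timeout(&dev->done, HZ))
                    return -ETIMEDOUT;
            return 0;
    }

The point is simply that the cost of a context switch is fixed, so there
is some transfer size below which spinning wins.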
I promise you that if you make the required effort to speed up booting,
then I will finish my fixed LZO decompressor code. It would actually make
reading compressed files faster; I am just not a Linux kernel developer,
so integrating it with Linux would be your job.
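
To give a feel for the CPU cost involved, here is a small userspace toy
against stock liblzo2 (the ordinary library, not my fixed decompressor).
It compresses a 1 MiB buffer and then times the decompression, which is
pure CPU work with no I/O at all:

    /* Toy benchmark against stock liblzo2 (build: cc lzobench.c -llzo2).
     * Compresses a 1 MiB block, then times lzo1x_decompress_safe(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <lzo/lzo1x.h>

    int main(void)
    {
            enum { IN_LEN = 1 << 20 };             /* 1 MiB of input */
            lzo_bytep in   = malloc(IN_LEN);
            lzo_bytep comp = malloc(IN_LEN + IN_LEN / 16 + 64 + 3);
            lzo_bytep out  = malloc(IN_LEN);
            lzo_voidp wrk  = malloc(LZO1X_1_MEM_COMPRESS);
            lzo_uint comp_len, out_len = IN_LEN;

            if (lzo_init() != LZO_E_OK || !in || !comp || !out || !wrk)
                    return 1;

            memset(in, 'x', IN_LEN);               /* trivially compressible */
            lzo1x_1_compress(in, IN_LEN, comp, &comp_len, wrk);

            clock_t t0 = clock();
            if (lzo1x_decompress_safe(comp, comp_len, out, &out_len, NULL)
                != LZO_E_OK)
                    return 1;
            clock_t t1 = clock();

            printf("%lu -> %lu bytes, %.3f ms of pure CPU\n",
                   (unsigned long)comp_len, (unsigned long)out_len,
                   (t1 - t0) * 1000.0 / CLOCKS_PER_SEC);
            return 0;
    }

On a desktop the number will be tiny; on a slow CPU the same work is
exactly what the boot ends up waiting on.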

BTW, why can't the doctors just close the lid and open it when needed?