JFFS2 file sizes

Jörn Engel joern at logfs.org
Thu Jul 26 08:48:34 EDT 2007


On Thu, 26 July 2007 08:37:26 +0100, David Woodhouse wrote:
> On Wed, 2007-07-25 at 20:04 -0400, Jim Gettys wrote:
> > Jffs2's compression is OK, but since it compresses data in blocks much
> > smaller than a whole gzipped archive, for large objects it's less
> > efficient than gzip.

The same is true even more dramatically for bzip2.  .bz2 images are
usually smaller than .gz images.  But when compressing only 4k chunks,
bzip2 almost always fares worse than gzip.
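
An easy way to see that on your own data is a quick sketch like the one
below (plain Python, standard zlib and bz2 modules only): compress a
file once as a whole and once in independent 4 KiB chunks, with both
algorithms, and compare the totals.

#!/usr/bin/env python3
# Compare whole-file vs. per-4KiB-chunk compression for zlib (what gzip
# and JFFS2 use) and bz2.  The chunked totals illustrate the point
# above: bzip2 tends to win on whole files and lose on small chunks.
import bz2
import sys
import zlib

CHUNK = 4096

def totals(data):
    whole = (len(zlib.compress(data)), len(bz2.compress(data)))
    chunked_zlib = chunked_bz2 = 0
    for off in range(0, len(data), CHUNK):
        piece = data[off:off + CHUNK]
        chunked_zlib += len(zlib.compress(piece))
        chunked_bz2 += len(bz2.compress(piece))
    return whole, (chunked_zlib, chunked_bz2)

if __name__ == '__main__':
    data = open(sys.argv[1], 'rb').read()
    (wz, wb), (cz, cb) = totals(data)
    print('whole file   : zlib %d, bz2 %d' % (wz, wb))
    print('4 KiB chunks : zlib %d, bz2 %d' % (cz, cb))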

> > Dave Woodhouse may be able to give typical numbers (he wrote jffs2, and
> > we're fortunate to have him working on OLPC).  And individually gzipped
> > small files may not do much better than jffs2.
> 
> You could use 'mkfs.jffs2' to spit out a JFFS2 image matching any given
> directory, which should give a fairly good estimate of size. As
> discussed on IRC last night, it's something like 68 bytes for every 4KiB
> page, plus the zlib-compressed size of that page.

...aligned to a 4-byte boundary.
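
For a rough back-of-the-envelope number without running mkfs.jffs2,
something like the sketch below applies exactly that rule of thumb: 68
bytes of node header per 4 KiB page, plus the zlib-compressed payload,
with each node rounded up to a 4-byte boundary.  It only counts file
data (no dirents or other metadata), so it will undershoot a little;
mkfs.jffs2 remains the authoritative answer.

#!/usr/bin/env python3
# Rough JFFS2 image size estimate, using the rule of thumb from this
# thread: ~68 bytes of node header per 4 KiB page, plus the
# zlib-compressed payload, rounded up to a 4-byte boundary.
import os
import sys
import zlib

PAGE_SIZE = 4096      # JFFS2 compresses data in 4 KiB chunks
NODE_OVERHEAD = 68    # approximate per-node header cost quoted above

def align4(n):
    # nodes are padded out to a 4-byte boundary
    return (n + 3) & ~3

def estimate_file(path):
    total = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(PAGE_SIZE)
            if not chunk:
                break
            compressed = zlib.compress(chunk)
            # JFFS2 falls back to storing the page uncompressed when
            # zlib doesn't actually shrink it, so cap at the raw size.
            payload = min(len(compressed), len(chunk))
            total += align4(NODE_OVERHEAD + payload)
    return total

def estimate_tree(root):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += estimate_file(os.path.join(dirpath, name))
    return total

if __name__ == '__main__':
    print(estimate_tree(sys.argv[1]))

Point it at the root of the tree you care about and compare against
what mkfs.jffs2 actually produces for the same directory.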

> One of the improvements we want to make to JFFS2 is switching to 16KiB
> 'pages'. It means a bit of mucking around with the Linux page cache,
> since we're no longer keeping data in chunks of precisely the same size
> it'll be wanted in. But it should give us better compression and also
> speed up mounting and take a lot less RAM for metadata (since we'll have
> ¼ of the nodes to keep track of).

Compression should improve by 2-5%, if my memory can be trusted.  It
has been a while since I did those tests.
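
If anyone wants to reproduce that measurement on their own data, a
minimal sketch along these lines should do (plain zlib again; it leaves
out the per-node header, which per David's numbers would also shrink to
roughly a quarter with 16 KiB pages):

#!/usr/bin/env python3
# Total zlib-compressed size when a file is split into independent
# chunks of 4 KiB vs. 16 KiB.  Larger chunks give zlib more context,
# so the 16 KiB total is usually a few per cent smaller.
import sys
import zlib

def chunked_size(data, chunk):
    return sum(len(zlib.compress(data[off:off + chunk]))
               for off in range(0, len(data), chunk))

if __name__ == '__main__':
    data = open(sys.argv[1], 'rb').read()
    for chunk in (4096, 16384):
        print('%5d-byte chunks: %d bytes' % (chunk, chunked_size(data, chunk)))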

Jörn

-- 
Public Domain  - Free as in Beer
General Public - Free as in Speech
BSD License    - Free as in Enterprise
Shared Source  - Free as in "Work will make you..."


