JFFS2 file sizes

David Woodhouse dwmw2 at infradead.org
Thu Jul 26 03:37:26 EDT 2007


On Wed, 2007-07-25 at 20:04 -0400, Jim Gettys wrote:
> Jffs2's compression is OK, but since it compresses in blocks that are
> much smaller than a whole gzipped archive, it's less efficient than
> gzip for large objects.
> 
> Dave Woodhouse may be able to give typical numbers (he wrote jffs2, and
> we're fortunate to have him working on OLPC)..  And individually gzipped
> small files may not do much better than jffs2.

You could use 'mkfs.jffs2' to spit out a JFFS2 image matching any given
directory, which should give a fairly good estimate of size. As
discussed on IRC last night, it's something like 68 bytes for every 4KiB
page, plus the zlib-compressed size of that page.
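
To make the arithmetic concrete, here's a rough sketch of that estimate in Python. This is only an approximation of the figure above (68 bytes of node overhead plus the zlib-compressed size of each 4KiB page); mkfs.jffs2 is the authoritative tool, and the min() clause reflects the assumption that a page is stored uncompressed when compression doesn't help.

```python
import zlib

NODE_OVERHEAD = 68   # per-node header bytes, per the figure above
PAGE_SIZE = 4096     # JFFS2 stores file data in page-sized nodes

def estimate_jffs2_size(data: bytes) -> int:
    """Rough estimate of JFFS2 on-flash size for one file's data:
    per 4KiB page, 68 bytes of node header plus the smaller of the
    zlib-compressed and raw page size."""
    total = 0
    for off in range(0, len(data), PAGE_SIZE):
        page = data[off:off + PAGE_SIZE]
        compressed = zlib.compress(page)
        # assume a page is stored raw if compression doesn't shrink it
        total += NODE_OVERHEAD + min(len(compressed), len(page))
    return total
```

For example, an 8KiB file of zeros costs two nodes' overhead plus two tiny compressed pages, far less than the 8KiB of raw data.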

One of the improvements we want to make to JFFS2 is switching to 16KiB
'pages'. It means a bit of mucking around with the Linux page cache,
since we're no longer keeping data in chunks of precisely the same size
it'll be wanted in. But it should give us better compression and also
speed up mounting and take a lot less RAM for metadata (since we'll have
1/4 the nodes to keep track of.)
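
The compression win from larger pages can be sketched with plain zlib: compressing one 16KiB chunk usually beats compressing the same data as four independent 4KiB chunks, because each chunk pays its own header and starts with an empty dictionary. This toy comparison (sample data and chunk sizes are illustrative, not JFFS2's actual code path) shows the effect:

```python
import zlib

def compressed_total(data: bytes, chunk: int) -> int:
    """Total size when data is zlib-compressed in independent chunks,
    the way JFFS2 compresses each data node separately."""
    return sum(len(zlib.compress(data[i:i + chunk]))
               for i in range(0, len(data), chunk))

# 64KiB of repetitive sample text (illustrative only)
sample = (b"the quick brown fox jumps over the lazy dog. " * 2000)[:64 * 1024]

four_kib_total = compressed_total(sample, 4 * 1024)
sixteen_kib_total = compressed_total(sample, 16 * 1024)
```

On compressible data, `sixteen_kib_total` comes out smaller than `four_kib_total`, and on top of that each 16KiB node replaces four nodes' worth of header overhead and in-RAM metadata.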

-- 
dwmw2
