[linux-mm-cc] [OFFTOPIC] SD wear-levelling

John Kielkopf john.kielkopf at gmail.com
Sun Sep 5 11:40:08 EDT 2010


On Thu, Sep 2, 2010 at 3:23 AM, Stefan Monnier <monnier at iro.umontreal.ca> wrote:

> Ah, now that makes sense.
> So yes, it would be good to be able to tweak Linux's swap handling so it
> tries to write in chunks that are erase-block sized and aligned, as far
> as possible.  Tho maybe trying to just maximize the chunk size (without
> paying particular attention to a particular size or alignment) would be
> good enough.
>

Rather than starting from scratch, could compcache be a useful starting
point?  The compression compcache provides should help greatly with the
general bandwidth issues of any NAND-based device by reducing the amount of
data transferred.  I'm assuming compcache already needs to re-map compressed
pages -- logic that could hopefully be extended to pack them into neatly
aligned, erase-block-sized buffers before finally committing them to a block
storage device.
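
As a thought experiment only (this is not how compcache is actually
structured, and ERASE_BLOCK_SIZE, commit_erase_block, etc. are names I
made up), a minimal user-space sketch of that buffering step might look
like this: compressed pages are packed into an aligned, erase-block-sized
buffer, and only whole erase blocks are handed down to the device.

/*
 * Minimal sketch (not compcache code): accumulate compressed pages
 * into an erase-block-sized buffer and only hand full erase blocks
 * to the block device.  All names here are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ERASE_BLOCK_SIZE (128 * 1024)   /* e.g. a 128 KiB SD erase block */

struct eb_buffer {
    unsigned char data[ERASE_BLOCK_SIZE];
    size_t used;                        /* bytes filled so far */
};

/* Stand-in for the real block-device write; a real implementation
 * would issue one aligned, erase-block-sized write here. */
static void commit_erase_block(const struct eb_buffer *eb)
{
    printf("committing %zu bytes as one aligned erase block\n", eb->used);
}

/* Append one compressed page; flush when the next page would not fit,
 * so every write sent downstream is at most one erase block. */
static void queue_compressed_page(struct eb_buffer *eb,
                                  const void *cdata, size_t clen)
{
    if (clen > ERASE_BLOCK_SIZE)
        return;                         /* defensive; never happens for 4 KiB pages */

    if (eb->used + clen > ERASE_BLOCK_SIZE) {
        commit_erase_block(eb);
        eb->used = 0;
    }
    memcpy(eb->data + eb->used, cdata, clen);
    eb->used += clen;
}

int main(void)
{
    struct eb_buffer eb = { .used = 0 };
    unsigned char page[4096] = {0};

    /* Simulate a stream of compressed pages of varying size. */
    for (int i = 0; i < 100; i++)
        queue_compressed_page(&eb, page, 1000 + (i * 37) % 3000);

    if (eb.used)
        commit_erase_block(&eb);        /* flush the partial tail block */
    return 0;
}

The hoped-for win is that every write the device sees is a single aligned
erase block, so the controller never has to read-modify-write a partially
filled block.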

To recap my thoughts on swapping to NAND:

While we've established that write performance can be increased by paying
close attention to erase blocks, writes should still be avoided as much as
possible.  I can think of two ways to reduce the amount of data written:

1) Compress pages -- just like compcache does today, but for different
reasons.

2) Keep a unique hash* of each swapped page in memory. Before swapping out a
page, check whether it has been swapped before; if it has, just map to the
already swapped entry**.  (I suspect that, on memory-constrained systems,
the same data is swapped in and out very often.)  A rough sketch of this
idea follows the footnotes below.

* While MD5, SHA-1 and others have been used in storage deduplication
systems, I'm sure there will be concerns about hash collisions causing
silent data corruption no matter what algorithm is used.  A less expensive
algorithm could be used, along with a read verification of the page on
systems where hash collisions must be prevented at all costs.  Provided the
hash does not collide often, and knowing that reads are usually much less
expensive than writes, byte-for-byte verification of pages with duplicate
hashes should still improve performance -- especially if the pages are
compressed before being hashed.

** Pages would need to be left on block storage as long as possible, even
after being swapped back in.  Since NAND memory is relatively inexpensive,
using more space as a page cache shouldn't be a problem.
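
Purely to illustrate the combination of 2), * and ** above, here is a rough
user-space sketch.  Every name in it is invented, and 64-bit FNV-1a simply
stands in for "a less expensive algorithm": a hash hit costs one read plus a
byte-for-byte compare, and only a miss or a collision costs a write.

/*
 * Rough sketch of the "swap de-duplication" idea.  A hash hit is
 * confirmed with a byte-for-byte compare against the stored page
 * (one read) before the new page is simply mapped to the existing
 * slot (no write).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   4096
#define TABLE_SLOTS 1024                /* toy table; a real one would grow */

struct dedup_entry {
    uint64_t hash;
    int      slot;                      /* swap slot already holding the page */
    int      valid;
};

static struct dedup_entry table[TABLE_SLOTS];

/* Toy in-memory backing store standing in for the flash device. */
static unsigned char store[TABLE_SLOTS][PAGE_SIZE];
static int next_slot;

/* 64-bit FNV-1a: cheap, not collision-proof -- hence the verify step. */
static uint64_t fnv1a(const unsigned char *p, size_t len)
{
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

static void read_swap_slot(int slot, unsigned char *out)
{
    memcpy(out, store[slot], PAGE_SIZE);        /* reads are cheap on NAND */
}

static int write_new_swap_slot(const unsigned char *page)
{
    memcpy(store[next_slot], page, PAGE_SIZE);  /* writes are what we avoid */
    return next_slot++;
}

/* Return a swap slot for this page, writing it out only when no
 * verified duplicate already exists on the device. */
static int swap_out_page(const unsigned char *page)
{
    uint64_t h = fnv1a(page, PAGE_SIZE);
    struct dedup_entry *e = &table[h % TABLE_SLOTS];

    if (e->valid && e->hash == h) {
        unsigned char stored[PAGE_SIZE];

        read_swap_slot(e->slot, stored);
        if (memcmp(stored, page, PAGE_SIZE) == 0)
            return e->slot;                     /* duplicate: no write at all */
    }

    e->slot  = write_new_swap_slot(page);       /* miss or hash collision */
    e->hash  = h;
    e->valid = 1;
    return e->slot;
}

int main(void)
{
    unsigned char page[PAGE_SIZE];

    memset(page, 0xAB, sizeof(page));
    printf("first swap-out  -> slot %d\n", swap_out_page(page));
    printf("second swap-out -> slot %d (deduplicated)\n", swap_out_page(page));
    return 0;
}

In the toy main(), the second swap-out of an identical page is satisfied
entirely from the table and one verifying read, so nothing new is written
to the backing store -- which is exactly the behaviour I'd hope for when
the same data is swapped in and out repeatedly.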

I've searched and have not been able to find a working implementation of
either.  The only compressed flash swapping system I found (see:
http://www.celinux.org/elc08_presentations/belyakov_elc2008_compressed_swap_final_doc.pdf)
was discontinued when Numonyx was acquired by Micron, and the source code
was not, and apparently will not be, released under the GPL.  I can find no
"swap de-duplication" projects.

There is room to improve the performance of swapping to NAND-based block
devices, but I'm left wondering why the idea seems to attract so little
interest.  It seems a natural progression for a project like compcache.
