[linux-mm-cc] First experience of compressed cache

Nitin Gupta nitingupta910 at gmail.com
Wed Apr 2 05:38:35 EDT 2008


On Wed, Apr 2, 2008 at 2:38 PM, John McCabe-Dansted <gmatht at gmail.com> wrote:
> On Wed, Apr 2, 2008 at 4:38 PM, Nitin Gupta <nitingupta910 at gmail.com> wrote:
>  >  >  CurrentPages:       7919
>  >  >  CurrentMem:        18345 kB
>  >  >  PeakMem:           18345 kB
>  >  >  _K_Mem:            26043 kB
>  >  >
>  >  >  The _K_Mem is the memory use reported by ksize, assuming that we
>  >  >  allocate using kmalloc, calculated according to this function.
>  >  >
>  >  >  static size_t kmalloc_size(size_t klen)
>  >  >  {
>  >  >         void *m;
>  >  >         size_t ks;
>  >  >
>  >  >         m = kmalloc(klen, GFP_KERNEL);
>  >  >         if (!m)
>  >  >                 return 0;
>  >  >         ks = ksize(m);
>  >  >         kfree(m);
>  >  >         return ks;
>  >  >  }
>  >  >
>  >  >  This shows that kmalloc(klen,GFP_KERNEL) increases space required by
>  >  >  ~42%. This gives a good reason not to use kmalloc(klen,GFP_KERNEL).
>  >  >  Would you like me to investigate alternatives to GFP_KERNEL? I suspect
>  >  >  that we would at least want slices of sizes
>  >  >  4096,3276,2730,2340,2048,1820,1638,1489 (which can be produced from
>  >  >  16k slabs), and possibly a few slices that can only be produced from
>  >  >  32k slabs.
>  >
>  >  Your patch almost serves the purpose. But it will be much more convincing if we
>  >  can show TLSF vs Kmalloc perf over a period of time instead of just
>
>  Perhaps, but to me 42% seems *too* convincing. The counter-argument
>  would have to be that comparing against GFP_KERNEL is a strawperson as
>  GFP_KERNEL is clearly not suitable for these purposes, and leaves open
>  the possibility that we might get good results from a reasonable
>  choice of slice sizes.
>
>  BTW, IMHO, they should make GFP_KERNEL more efficient by including
>  some slices of sizes (2^n)/3 and (2^n)/5, since GFP_KERNEL is rather
>  inefficient even for random use.
>

The GFP_KERNEL flag has nothing to do with the slab sizes available. It
simply means that the allocation may trigger I/O (swapping, flushing to a
filesystem) if that's required to satisfy the request.

>
>  >  the difference
>  >  in peak usage. For example:
>
>  As mentioned else where CurrentMem actually means "over all
>  allocations, including ones that are freed".
>

Yes, but that's not what I wanted :) I forgot to decrement CurrentMem when
we do free() during a write. I will correct this.

>
>  >  http://code.google.com/p/compcache/wiki/TLSFAllocator
>  >  which compares variants of TLSF after every write operation.
>
>
>
> --
>  John C. McCabe-Dansted
>  PhD Student
>  University of Western Australia
>
