[linux-mm-cc] First experience of compressed cache

Nai Xia nai.xia at gmail.com
Wed Apr 2 04:20:41 EDT 2008


On Wed, Apr 2, 2008 at 4:19 PM, Nai Xia <nai.xia at gmail.com> wrote:
> On Wed, Apr 2, 2008 at 3:45 PM, John McCabe-Dansted <gmatht at gmail.com> wrote:
>  > On Wed, Apr 2, 2008 at 2:46 PM, Nitin Gupta <nitingupta910 at gmail.com> wrote:
>  >  >  I don't yet have a testing infrastructure for the kernel
>  >  >  allocator. I tried systemtap some time back to instrument the
>  >  >  kmalloc code, but that didn't work as expected. I am now
>  >  >  planning to use swap replay, with some additional helper mods,
>  >  >  to get these numbers. Swap replay will also allow us to
>  >  >  reproduce the same test results easily.
>  >
>  >  Well, you could do that. Or you could just use these
>  >  rough-and-ready numbers from starting Firefox on an ubuntu-7.10
>  >  liveCD in a 220MB VM:
>  >
>  >  CurrentPages:       7919
>  >  CurrentMem:        18345 kB
>
>  I do not think this size has taken the TLSF roundup into
>  consideration, while ksize would. In "compcache_make_request"
>  the accounting is:
>
>  stat_set(&stats.curr_mem, stats.curr_mem + clen);
>
>  I think "clen" may become "fat" inside tlsf_malloc (the request
>  is rounded up to the next free-list slot), but tlsf_malloc does
>  not return the actual allocated size, so curr_mem undercounts
>  the real memory use.
>
>  >  PeakMem:           18345 kB
>  >  _K_Mem:            26043 kB
>  >
>  >  The _K_Mem is the memory use reported by ksize, assuming that
>  >  we allocate using kmalloc, calculated with this function:
>  >
>  >  static size_t kmalloc_size(size_t klen)
>  >  {
>  >         void *m;
>  >         size_t ks;
>  >
>  >         m = kmalloc(klen, GFP_KERNEL);
>  >         if (!m)
>  >                 return 0;
>  >         ks = ksize(m);
>  >         kfree(m);
>  >         return ks;
>  >  }
>  >
>  >  This shows that kmalloc(klen, GFP_KERNEL) increases the space
>  >  required by ~42%, which is a good reason not to store compressed
>  >  pages with plain kmalloc. Would you like me to investigate
>  >  alternatives? I suspect that we would at least want slices of
>  >  sizes 4096, 3276, 2730, 2340, 2048, 1820, 1638 and 1489 (which
>  >  can be produced from 16k slabs), and possibly a few slices that
>  >  can only be produced from 32k slabs.
>  >
>  >  Of course, once we start creating new "caches" (slice sizes),
>  >  we create a new form of fragmentation. The kernel avoids
>  >  reaping pages from caches with fewer than 10 pages free, so we
>  >  might expect an overhead of at least 40k per cache; just the 8
>  >  slice sizes above would therefore involve an overhead of over
>  >  8*40k = 320k when in use. I am not sure how to measure this.
>  >  Perhaps recreate kmalloc in userspace with your swap replay.
>  >  Is swap replay included in compcache-0.3?
>  >
>  >  --
>  >  John C. McCabe-Dansted
>  >  PhD Student
>  >  University of Western Australia
>  >
>

