[linux-mm-cc] I guess you have been following ksm.

Peter Dolding oiaohm at gmail.com
Fri Apr 17 07:03:49 EDT 2009


On Fri, Apr 17, 2009 at 4:45 PM, Nitin Gupta <ngupta at vflare.org> wrote:
> On Fri, Apr 17, 2009 at 4:02 AM, Peter Dolding <oiaohm at gmail.com> wrote:
>> http://lkml.org/lkml/2009/4/16/300
>>
>> ksm uses the Linux copy-on-write system to merge pages.  compcache
>> currently takes zero pages into a virtual swap; that costs no space,
>> but it costs CPU time to get them back.  With blank pages in
>> particular it would be better not to.  Instead, exploit the
>> copy-on-write system and map all blank pages to the same memory
>> block, marked read-only.  When such a page needs accessing again it
>> will not have to be pulled back from swap, since it never really left.
>>
>
> I guess by blank pages you mean zero-filled pages. For zero pages,
> compcache does not allocate any memory. The only cost we incur for
> zero pages is the time for the block I/O request to reach this virtual
> block device. COW will surely have lower CPU overhead, but I don't
> think apps issue reads on such blank pages too often.

calloc()ed pages do get pushed out of memory.  Normally it is data
being written into the calloc()ed space that causes a page to be
pulled back.

It depends on the API in use by the application.  Some patterns, like
altering a value in place, read the calloc()ed space before storing
new data.
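To illustrate the zero-page special case being discussed, here is a toy
model (my own sketch, not compcache's actual code): a virtual swap
device tests whether an outgoing page is all zeroes and, if so, records
only a flag instead of storing or compressing anything.

```python
PAGE_SIZE = 4096
ZERO_PAGE = bytes(PAGE_SIZE)

def is_zero_page(page: bytes) -> bool:
    """A page of all zero bytes needs no backing storage at all."""
    return page == ZERO_PAGE

class ToySwap:
    """Toy swap store: zero pages cost one flag, others are kept as-is."""

    def __init__(self):
        self.slots = {}  # slot -> stored page data, or None for a zero page

    def swap_out(self, slot: int, page: bytes) -> None:
        self.slots[slot] = None if is_zero_page(page) else page

    def swap_in(self, slot: int) -> bytes:
        data = self.slots[slot]
        # A zero page is simply re-materialized; nothing was stored.
        return ZERO_PAGE if data is None else data
```

Even in this model, swapping a zero page back in still costs a request
round trip; the COW approach suggested above would avoid even that, by
never really unmapping the page.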
>
>> The copy on write system also appears to provide something else
>> interesting.  ksm and compcache are both after allocation.   The
>> interesting question is if Linux kernel should provide a calloc
>> function.   So that on commit its is automatically stacked.  This
>> would massively reduce the numbers of blank matching pages.  Linux
>> system already has something that deals with malloc allowing over
>> commits until accessed.
>>
>
> Not sure if I understand you here. You mean all new allocation should
> be zeroed to play better with KSM?

Not all new allocations, only calloc() allocations.  The current way
is to allocate and then zero.  That means a large allocation consumes
multiple physical blocks when it should consume little more than one
page plus some page-table information.
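As a rough back-of-the-envelope illustration (my own numbers, assuming
4 KiB pages and 8-byte page-table entries as on x86-64): a 1 GiB
calloc() whose pages all point at one shared zero page would cost about
2 MiB of page-table entries plus the single shared page, rather than
1 GiB of zeroed RAM.

```python
PAGE_SIZE = 4096   # 4 KiB pages
PTE_SIZE = 8       # bytes per page-table entry on x86-64

def zero_backed_cost(alloc_bytes: int) -> int:
    """Approximate physical cost of an allocation whose pages all map
    to one shared, read-only zero page: the page-table entries plus
    that single page.  (Higher page-table levels are ignored.)"""
    pages = alloc_bytes // PAGE_SIZE
    return pages * PTE_SIZE + PAGE_SIZE

one_gib = 1 << 30
print(zero_backed_cost(one_gib))   # -> 2101248, about 2 MiB
```

That is roughly a 500:1 saving over physically zeroing the whole
allocation up front.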
>
>> compcache is useful so far as ramzswap.  Support for a disk-based
>> zswap could also be a godsend, since disk transfer speeds are
>> limited.
>>
>
> compressed disk based swap is planned but has a lot of additional complexity.
> For now, my focus is to somehow make ramzswap memory swappable to
> disk based swap (if present). For simplicity, code currently in SVN
> decompresses individual objects in a page before writing out to backing
> swap device.
>
>> There are many ways to get more applications into memory.  Looking
>> at the other end, at where memory is allocated and where an
>> application is loaded into memory, could provide savings without
>> cost.
>>
>
> Sorry, I could not understand this.

The major advantage of COW is that more data appears to be in place
for the application, so there is no need to access swap many times
without need.
>
>> Let's take an executable with a stack of identical pages.  A
>> scanning tool run over it could locate the pages that can be
>> stacked, making the executable smaller, and the loader could
>> automatically join those pages up.  Remember, Linux does not send
>> application text to swap; instead it deallocates the page and reads
>> it back from the executable file.  Swap only gets
>> application-created data.
>>
>
> Yes, being a "swap disk", ramzswap has the limitation of being able
> to compress anonymous (swap-backed) pages only.
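The scanning-tool idea quoted above could be prototyped in user space
along these lines (a hypothetical sketch, not an existing tool): hash
every page-sized chunk of a file and report the groups of byte-identical
pages that a loader could map to a single frame.

```python
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096

def duplicate_pages(data: bytes):
    """Group page indexes by page content; any group larger than one
    is a set of identical pages a loader could share in one frame."""
    groups = defaultdict(list)
    for i in range(0, len(data), PAGE_SIZE):
        page = data[i:i + PAGE_SIZE]
        groups[hashlib.sha1(page).hexdigest()].append(i // PAGE_SIZE)
    return [idxs for idxs in groups.values() if len(idxs) > 1]

# Example: pages 0 and 2 are identical, page 1 differs.
blob = bytes(PAGE_SIZE) + b'\x07' * PAGE_SIZE + bytes(PAGE_SIZE)
print(duplicate_pages(blob))   # -> [[0, 2]]
```

A real tool would of course compare bytes on hash collision and record
the groups in the executable for the loader to act on.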
>
>> There are still a lot of areas where Linux can do better with
>> memory.  Targeting allocations at commit time saves CPU time.
>> Restoring duplicate pages using copy-on-write also saves CPU time.
>> Merging duplicate pages instead of sending them to swap saves CPU
>> time too.  More pages in memory means less swap-system access.
>> Just compressing everything sent to swap is not the best solution
>> for CPU time used, and sending RAM to disk-based swap is not the
>> best solution either.  The goal needs to be to use swap less and to
>> give the CPU more data in memory it can access.  There is no point
>> having zswap if executables are still being cleared from memory via
>> disk.
>>
>
> For duplicate page removal, COW has lower space/time overhead than
> compressing all individual copies with the ramzswap approach. But
> this virtual block device approach has big advantages - we need not
> patch the kernel, and it keeps things simple. Going forward, we can
> better take advantage of stacked block devices to provide compressed
> disk-based swapping, generic R/W compressed caches for arbitrary
> block devices (a compressed cache for tmpfs comes to mind), etc.

ramzswap already detects at least one form of duplicate page.  Working
with the ksm developer to provide a way to merge those pages on
restore would provide an extra way of taking advantage of ksm.
Currently ksm runs like a scanner, scanning memory all the time even
if there is no need.  Duplication really does not matter while you
have enough RAM.
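To make the space argument concrete, here is a toy comparison (my own
sketch; zlib stands in for ramzswap's compressor and the numbers are
illustrative only): storing many compressed copies of the same page
grows linearly with the copy count, while a COW-style merge costs one
page frame no matter how many duplicates exist.

```python
import zlib

PAGE_SIZE = 4096
page = b'linux-mm' * 512            # one 4 KiB page of repetitive data
n_copies = 1000                     # duplicates of the same page

# ramzswap-style: each duplicate is compressed and stored separately.
compressed_cost = n_copies * len(zlib.compress(page))

# KSM/COW-style: all duplicates share one read-only frame.
cow_cost = PAGE_SIZE

print(compressed_cost, cow_cost)
```

Even when each copy compresses very well, per-copy storage overtakes
the single shared frame once enough duplicates accumulate.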

Working in combination with ksm could provide advantages for both projects.

Maybe page deduplication should be placed as the first operation in the swap system.

I see an advantage in interlinking ksm and the swap system.

Peter Dolding
