[linux-mm-cc] [PATCH 1/4] compcache: xvmalloc memory allocator

Nitin Gupta ngupta at vflare.org
Mon Aug 24 17:16:43 EDT 2009


On 08/25/2009 01:13 AM, Pekka Enberg wrote:
> On Mon, Aug 24, 2009 at 10:36 PM, Nitin Gupta <ngupta at vflare.org> wrote:
>> On 08/24/2009 11:03 PM, Pekka Enberg wrote:
>>
>> <snip>
>>
>>> On Mon, Aug 24, 2009 at 7:37 AM, Nitin Gupta <ngupta at vflare.org> wrote:
>>>>
>>>> +/**
>>>> + * xv_malloc - Allocate block of given size from pool.
>>>> + * @pool: pool to allocate from
>>>> + * @size: size of block to allocate
>>>> + * @pagenum: page no. that holds the object
>>>> + * @offset: location of object within pagenum
>>>> + *
>>>> + * On success, <pagenum, offset> identifies the block allocated
>>>> + * and 0 is returned. On failure, <pagenum, offset> is set to
>>>> + * 0 and -ENOMEM is returned.
>>>> + *
>>>> + * Allocation requests with size > XV_MAX_ALLOC_SIZE will fail.
>>>> + */
>>>> +int xv_malloc(struct xv_pool *pool, u32 size, u32 *pagenum, u32 *offset,
>>>> +                                                       gfp_t flags)
>>
>> <snip>
>>
>>>
>>> What's the purpose of passing PFNs around? There's quite a lot of PFN
>>> to struct page conversion going on because of it. Wouldn't it make
>>> more sense to return (and pass) a pointer to struct page instead?
>>
>> PFNs are 32-bit on all archs, while for 'struct page *' we require 32 or
>> 64 bits depending on arch. ramzswap allocates a table entry <pagenum, offset>
>> corresponding to every swap slot, so the size of the table would unnecessarily
>> increase on 64-bit archs. The same argument applies to xvmalloc freelist sizes.
>>
>> Also, xvmalloc and ramzswap themselves do the PFN -> 'struct page *' conversion
>> only when freeing a page or to get a dereferenceable pointer.
>
> I still don't see why the APIs have work on PFNs. You can obviously do
> the conversion once for store and load. Look at what the code does,
> it's converting struct page to PFN just to do the reverse for kmap().
> I think that could be cleaned by passing struct page around.
>


* Allocator side:
Since the allocator stores PFNs in its internal freelists, all internal routines
naturally use PFNs instead of struct page (try changing them all to use struct
page to see the mess it creates). So kmap() will still end up doing the
PFN -> struct page conversion, since we only pass PFNs around.

What if we convert only the interfaces, xv_malloc() and xv_free(), to use
struct page?
  - xv_malloc(): we save no PFN -> struct page conversion; we simply move it
from the kmap wrapper further up into the alloc routine.
  - xv_free(): same as above; we now move the conversion down the function to
pass PFNs to the internal routines.


* ramzswap block driver side:
ramzswap also stores PFNs in its swap slot table. Thus, for the same reasons
as above, the number of conversions will not decrease.


Now, if the aim is code cleanup rather than reducing the number of conversions,
then I still think PFNs are preferable, due to the minor implementation details
mentioned above.

So, I think the interface should be left in its current state.

Thanks,
Nitin

