[linux-mm-cc] CCache problem

Nitin Gupta nitingupta.mail at gmail.com
Tue Jun 27 00:05:41 EDT 2006


Hi,

I have some questions about the ccache work.

In the work posted on the 'flux' branch of the git tree, I have simplified the locking
as described in the commit message (and in weekly report 4).

Below is pseudo-code for one of the lookup functions, where I try to get the
locking 'proper'. I have some questions about it. Once these are resolved, I will
(hopefully) be able to get it right in the other lookup functions too :)

The problem spots are marked in the code below, and the questions follow at the end.

----------------------------
RLQ = read_lock_irq(&mapping->tree_lock);
RUQ = read_unlock_irq(&mapping->tree_lock);
WLQ/WUQ = the corresponding write_lock_irq/write_unlock_irq
---------------------------------

struct page * find_get_page(struct address_space *mapping, unsigned long offset)
{
	struct page *page=NULL, *new_page, **slot;
	void *buffer;	/* buffer for decompression */
	struct chunk_head *ch;

	RLQ();
	slot = (struct page **)radix_tree_lookup_slot(&mapping->page_tree,
						offset);
	if (slot) page = *slot;
	if (page) {
		page_cache_get(page);
		if (PageCompressed(page)) {

			ch = (struct chunk_head *)(page);
			if (TestSetPageLocked(ch)) {
				/* It's locked; someone is already
				 * decompressing it. Wait till it's done. */

				RUQ();
				<---- Problem 1 --------->
				wait_on_chunk_head(ch);
				RLQ();

				/* slot now has decompressed page
				 * return slot (it may be NULL) */
				page = *slot;
				if (page) page_cache_get(page);

				/* free chunk_head if we are the only user */
				if (ch->count == 1) kfree(ch);
				RUQ();
				return page;
			}

			/* We have lock on chunk_head */
			/* now decompress it and set slot to resulting page */
			RUQ();
			
			<--- Problem 2 ------------>
			new_page = alloc_page();	/* single page */
			buffer = kmalloc(64KB or 5KB);	/* acc. to LZO or WKdm */
			decompress(ch, buffer, new_page);

			take_misc_ccache_locks();
			free_chunks();
			merge_free_chunks();	/* described on wiki */
			free_misc_ccache_locks();
			
			/* acc. to page type: anon or fs-backed */
			Set_flags_for_this_new_page();

			/* like mapping, index, private */
			Set_other_fields_in_struct_page();

			WLQ();
			*slot = new_page;
			page_cache_get(new_page);
			
			/* free chunk_head if we are the only user */
			if (ch->count == 1) kfree(ch);
			WUQ();
			return new_page;
		}
	}

	/* Page is not compressed, or is NULL */
	RUQ();
	return page;
}


Problem 1: Can we busy-wait here (spinlock vs. semaphore)? Decompression is
a quick operation, but I doubt it is quick 'enough' to spin on (especially with LZO).
Also, I am not sure whether find_get_page() is allowed to sleep. If it is, should I
use a semaphore there?

Problem 2: These allocations can sleep (especially under memory pressure).
If find_get_page() cannot sleep, then how can we do the decompression at all?

Please also have a look at the general locking above (I did not do it
carelessly; this is how I intend to do it in the lookup functions).
Do you see any problems with the code above?

----------
Also, I will now start extending the compress-test module
(as on CompressedCaching/Code) to implement the compression structure
(as on CompressedCaching). This should reveal implementation problems and give
some performance numbers :)  That work can then simply be copied into the git
branch, as was done for the compression algorithms.
-------------

Best Regards,
Nitin Gupta

