For review: NAND out of space patch.

Greg Smith gregsmitholpc at
Tue Jul 22 08:13:00 EDT 2008

Hi Chris et al,


Can you walk me through the exact steps the user would experience 
if this script were installed?

That is, assume they download a file and the disk becomes full. On the 
next reboot, what do they see and what happens?

In terms of which files to delete, I think the oldest (or maybe least 
recently used, as caches do it) would be better than the largest. Can we 
do that (e.g. delete the oldest file, then iterate until X MB is free)?

BTW, they already have a plan for a cron job and a script to pop up a dialog.

Deleting large, rarely used system files will not solve this problem. The 
space will just get used up again until there are no more large, rarely 
used files left. It can buy us a week or two but won't solve the problem 
longer term.

I think we have a strategy: the warning plus a recovery mechanism 
that deletes files. Let's get the exact workflow in place and run it by 
the Latu team for their OK.

Thanks a lot for the quick response on this one. We had just two days' 
warning before it blew up! If we can show the deployments good 
responsiveness to their concerns, it will help our relationship long term.


Greg S

> Date: Mon, 21 Jul 2008 22:21:33 -0400
> From: Chris Ball <cjb at>
> Subject: For review: NAND out of space patch.
> To: devel at
> Message-ID: <86vdyyacb6.fsf at>
> Content-Type: text/plain; charset=us-ascii
> Hi,
> Here's a small Python script that acts as a final fail-safe in the event
> that the datastore is full and we can't boot because of it, by deleting
> datastore files largest-first until we cross a threshold of how much
> free space is "enough".  It could be incorporated into the Python init
> process.  (See #7591 for more detail.)
> Caveats:
>    * Deleting a file from the datastore doesn't delete its entry in the
>      index.  Resuming a Journal entry with no corresponding file usually
>      produces a blank document in the activity being resumed.
>    * This doesn't try anything outside of the datastore, such as the
>      excellent suggestion of identifying unnecessary large files in the
>      build that could be deleted.  We should of course try that first.
> Please review.
> - Chris.
> #!/usr/bin/env python
> #
> # If the NAND doesn't have enough free space, delete datastore objects
> # until it does.  This doesn't modify the datastore's index.
> # Author:      Chris Ball <cjb at>
> import os, sys, statvfs, subprocess
> THRESHOLD = 1024 * 50 # 50MB
> PATH = "/home/olpc/.sugar/default/datastore/store/*-*"
> def main():
>     # First, check to see whether we have enough free space.
>     if find_freespace() < THRESHOLD:
>         print "Not enough disk space."
>         lines = os.popen("du -s %s" % PATH).readlines()
>         filesizes = [line.split('\t') for line in lines]
>         for file in filesizes:
>             file[0] = int(file[0])     # size
>             file[1] = file[1].rstrip() # path
>         filesizes.sort()
>         filelist = [file[1] for file in filesizes]
>         while find_freespace() < THRESHOLD and len(filelist) > 0:
>             delete_file(filelist.pop())
> def find_freespace():
>     # Determine free space on /.
>     stat = os.statvfs("/")
>     freebytes  = stat[statvfs.F_BSIZE] * stat[statvfs.F_BAVAIL]
>     freekbytes = freebytes / 1024
>     return freekbytes
> def delete_file(file):
>     # Delete a single file. 
>     print "Deleting " + file
>     try:
>         os.remove(file)
>     except os.error:
>         print "Couldn't delete " + file
> def reboot():
>     os.popen("reboot")
> main()
> reboot()
