Journal activity profiling
riccardo.lucchese at gmail.com
Wed Aug 13 07:11:04 EDT 2008
On Wed, 2008-08-13 at 12:13 +0200, Tomeu Vizoso wrote:
> On Tue, Aug 12, 2008 at 12:10 PM, riccardo <riccardo.lucchese at gmail.com> wrote:
> > Hi!
> > Testcase:
> > Fill the journal with hundreds of entries and automate the
> > GtkScrolledWindow in the journal to scroll one entry at a time on a timeout.
> > Build: joyride-2281
> > + Macro investigation of cpu usage
> > A picker graph taken during the test can be found at:
> > http://dev.laptop.org/~rlucchese/journal_scrolling/journal_scrolling.picker.svg
> > (http://dev.laptop.org/~rlucchese/journal_scrolling/journal_scrolling.picker)
> > The following table shows the CPU usage of the 4 processes that took the
> > most CPU time during the sampling period:
> > ($ grapher -c4 -n -r cpu -i journal_scrolling.picker)
> > tot% ps% cmdline
> > -----------------------
> > 68.2 68.2 python /usr/bin/sugar-activity journalactivity...
> > 88.6 20.4 /usr/bin/X :0 -fp built-ins -wr ...
> > 94.3 5.7 picker
> > 99.5 5.2 python /usr/bin/datastore-service
> > From the interactions with X, it seems the journal activity spends a
> > considerable amount of time redrawing its entries.
> Yeah, we should be much smarter about that. Right now, we cache three
> screens worth of DS entries so we reduce the frequency with which we
> call the DS. But we redraw the whole screen at every scroll.
> Do you have any idea for redrawing less often?
What do you mean?
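The three-screens-of-entries cache Tomeu describes could be sketched roughly like this (all names here are hypothetical; `find_entries` stands in for the real datastore query, and the actual journal code differs):

```python
class EntryCache:
    """Cache several screenfuls of datastore entries so that scrolling
    one row at a time rarely triggers a new query (sketch only).

    find_entries(offset, limit) is a hypothetical stand-in for the
    real datastore call; it must return `limit` entries starting at
    `offset`.
    """

    def __init__(self, find_entries, screen_size, screens=3):
        self._find = find_entries
        self._page = screen_size          # entries visible at once
        self._span = screen_size * screens  # entries kept cached
        self._offset = None               # offset of first cached entry
        self._entries = []

    def get(self, offset):
        """Return one screenful of entries starting at `offset`."""
        # Refill only when the requested window falls outside the cache.
        if (self._offset is None or offset < self._offset
                or offset + self._page > self._offset + len(self._entries)):
            # Center-ish the cached span around the requested offset.
            self._offset = max(0, offset - self._page)
            self._entries = self._find(self._offset, self._span)
        start = offset - self._offset
        return self._entries[start:start + self._page]
```

With a cache like this, scrolling through dozens of rows issues only a handful of queries instead of one per row; the redraw cost on the X side is a separate problem.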
> > The test runs slower when the journal has many entries than when it's
> > almost empty; why is that?
> Not sure, perhaps because there are more entries in the DS so the
> query takes more time? Can you check?
> > + cProfile statistics (KCG format) for the journal activity:
> > http://dev.laptop.org/~rlucchese/journal_scrolling/journal_scrolling.cProfile
> > functions ordered by self-time:
> > 17.2 gtk.main()
> > 16.5 cairo.Context.paint()
> > 4.0 find in gettext.py
> > 3.9 set._jobject of Journal.activity/collapsedentry.py::228
> > 2.9 built-in method do_size_allocate
> > 2.9 _expand_lang gettext.py
> > 2.8 join in posixpath.py
> > 2.6 posix.stat()
> > 2.1 _update_color of Journal.activity/collapsedentry.py
> > 1.7 normalize in locale.py
> > 1.6 exists in posixpath.py
> > 1.5 set._jobject of Journal.activity/collapsedentry.py::323
> > --------------------------------------------------------------
> > 59.8%
> > I find it interesting that 28.5% of the total time goes to
> > `get_date' of Journal.activity/misc.py because of the ngettext overhead.
> > Perhaps translations could be cached ?
> Totally, but caching should already happen inside python's gettext:
From the cProfile data one can see that every call to gettext.translation(..)
always calls gettext.find() and spends most of its time there.
Looking at the source, and if I'm not missing anything, the call to
`find' is unconditional, and the 'languages' parameter always defaults
to None, which triggers the logic that re-resolves the full language
list on every call.
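A small cache at the call site would avoid re-running gettext.find() on every call (a minimal sketch; the cache key, the function name, and the use of fallback=True are assumptions, not the actual sugar.util code):

```python
import gettext

# Module-level cache: gettext.find() then runs once per (domain, localedir)
# pair instead of once per translation lookup (sketch only).
_translations = {}

def cached_translation(domain, localedir=None):
    """Return a cached Translations object for `domain`."""
    key = (domain, localedir)
    if key not in _translations:
        # fallback=True yields a NullTranslations object when no
        # catalog is found, so callers never have to handle IOError.
        _translations[key] = gettext.translation(domain, localedir,
                                                 fallback=True)
    return _translations[key]
```

Repeated calls then return the same object, and the ngettext overhead in `get_date' drops to a dictionary lookup.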
> If that isn't working for any reason, we would need to cache it
> locally in sugar.util. Can you check it is slow in Spanish as well?
> You may have found a very nice low hanging fruit ;)
> > + Sysprof statistics:
> > http://dev.laptop.org/~rlucchese/journal_scrolling/journal_scrolling.sysprof
> > Not very interesting this time.
> Yeah, no C-level function takes an absurd amount of time. It's the Python
> code that does more work than is really needed.