[sugar] Release process

Michael Stone michael at laptop.org
Wed May 28 10:32:53 EDT 2008


On Wed, May 28, 2008 at 02:54:24PM +0200, Marco Pesenti Gritti wrote:
> On Wed, May 28, 2008 at 2:03 PM, Michael Stone <michael at laptop.org> wrote:
> > My experience over the last few months has been that a centralized
> > unstable build stream is worth less than it costs to maintain using the
> > tools we've built today because it tends to aggregate changes of widely
> > varying quality without recording which changes are good and which are
> > bad. I now think that we are better served either by relying wholly on
> > decentralized topic builds for unstable development,
> 
> We will try to find time at linuxtag to experiment with these for Sugar.

Awesome. Let me know if you get stuck as I can probably unstick you.

> > on unstable build
> > streams under the manual control of individuals and teams,
> 
> How is this different than joyride? Are these topic streams?

As Scott said, Joyride is somewhere between an unstable build stream and
a testing build stream. It's also under automated control insofar as it
automatically pulls in changes made by anyone with commit access to the
dist-olpc2 koji branch and to the joyride dropboxes on dev.
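
Concretely, the "automated control" amounts to a sweep like the
following (a sketch only; the names are invented and this is not the
actual joyride code):

    # Illustrative sketch, not the real joyride automation. The point:
    # anything landing in the watched koji branch or in a dropbox is
    # swept into the next build, with no quality gate in between.

    def collect_inputs(koji, dropboxes):
        # 'koji' and the dropbox objects are assumed interfaces.
        packages = list(koji.latest_builds("dist-olpc2"))
        for box in dropboxes:
            packages.extend(box.new_packages())
        return packages  # everything found goes into the next build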

I think of topic streams as being things like Dennis' F-9 branch, my
rainbow branch, Bernie's X branch, etc. (One could plausibly argue that
Joyride is a topic branch for the Sugar UI redesign - do you think this
is true?) 

> > or on
> > automated unstable streams, but only if they automatically
> > quarantine breakage.
> 
> Can you elaborate on how breakage would be quarantined? How difficult
> will it be to build infrastructure for it? Do we have time to do it
> for August?

The basic idea is to give your build system enough information to
automatically revert packages that look buggy. Debian does this by
teaching their build system to talk to their bug tracker and by teaching
their bug tracker how Debian packages work.
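
For concreteness, here is a rough sketch (Python, with an entirely
made-up bug-tracker and repository API; none of these names come from
Debian's actual tools) of the decision such a build system can make:

    # Hypothetical quarantine logic: skip any candidate version with
    # release-critical bugs filed against it and fall back to the
    # newest version that the tracker considers clean.

    def select_version(tracker, repo, package):
        # 'tracker' and 'repo' are assumed interfaces, not real API.
        candidate = repo.newest_version(package)
        if not tracker.open_bugs(package, candidate,
                                 severity="release-critical"):
            return candidate
        for version in repo.versions(package, newest_first=True):
            if not tracker.open_bugs(package, version,
                                     severity="release-critical"):
                return version  # quarantine: revert to last good
        return None  # nothing clean; drop the package entirely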

A second approach is based on using buildbot/tinderbox continuous
integration testing with automated test suites to qualify or disqualify
changes.
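
A minimal sketch of that gate, with the same caveat that every name
here is invented rather than real buildbot/tinderbox API:

    # Hypothetical CI gate: a change enters the shared build stream
    # only if a build containing it passes the automated test suite.

    def qualify(ci, stream, change):
        build = ci.build(stream.baseline, change)  # assumed helpers
        result = ci.run_tests(build)
        if result.passed:
            stream.promote(change)  # change joins the shared stream
        else:
            stream.quarantine(change, result.log)  # bounced to author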

Both approaches give your build system enough information to rapidly
revert packages that look buggy; however, both systems continue to
suffer from the "garbage in, garbage out" problem.

Consequently, I think that we are better served by manually improving
the quality of the build stream inputs by encouraging people to do
individual testing builds (or just to publish packages for review on top
of an existing build) before pushing their changes into a shared build
stream for wider review. (Think of this as the kernel maintainership
model where changes originate small and private, then manually 'bubble
up' into wider and wider testing.)

> >>
> >> Which build branch requires your approval and which doesn't?
> >
> > Only the ones that I'm maintaining.
> 
> And you maintain the stable builds for 8.1.1 and 8.2.0, correct?

Correct.

> >> I'm fine with personal negotiation, but we need to document how
> >> maintainers are supposed to negotiate inclusion in the builds and
> >> through which tools it concretely happens.
> >
> > Fair enough. I'll propose a first draft:
> >
> >  1. Contact me on devel@ or in the public IRC channels when you want to
> >     negotiate. I'll either tell you that I'm busy or I'll talk with
> >     you. You should be prepared to explain what your changes do and why
> >     you think they're good.
> >
> >  2. After we talk, we'll each have a better idea of how things will
> >     proceed, e.g.
> >
> >       * when you'll have packages ready for me to try out,
> 
> We need to tell people how they should build these packages and how to
> let you try them out (provide a custom build, get them in one of the
> unstable streams, just provide one or more rpms to install on the base
> of the current stable build...).

It's going to vary from task to task. For small things, I'm perfectly
happy receiving nothing more than a package to try out. For something
big like the Sugar UI redesign, I'm going to need something a bit more
systematic. Maybe we could make a list of example changes and the
packaging events through which they were qualified?

e.g.:

Keyboard       - reviewed patches, then Koji-built packages tested by
                 the submitter (Sayamindu), then test builds made by
                 Dennis

olpc-configure - Koji-built packages placed in joyride for
                 several weeks

Touchpad       - patches, then an installation script posted to devel@,
                 then packages for joyride by the release manager, then
                 inclusion in test builds along with a list of fixed
                 bugs

Wireless       - several revisions of the wireless firmware with manual
                 smoke testing by the submitter, then several kernel
                 patches to the stable kernel, a kernel build by the
                 release manager, inclusion in a testing build by
                 Dennis, and more serious independent network testing
                 this week

> >       * what bugs I should try to test carefully,
> >       * what areas I need to watch for regressions,
> 
> Do you still want test cases for each change? If so, I think it
> should be made clearer.

Test cases make me happy; however, you can make me happy to accept your
changes in other ways.

> Also, is executing the test cases and reporting the results to you
> the maintainer's responsibility (either personally or through
> volunteers)?

It will vary from contract to contract. As I've said before - we'll work
together to scrape up test resources. However, maintainers should
definitely expect me to ask them to take some responsibility for testing
their work.

Michael

P.S. - Surely Marco isn't the only one with questions about how this
thing is going to work!

