Trac: release management

Martin Dengler martin at martindengler.com
Thu Jun 5 18:57:26 EDT 2008


On Thu, Jun 05, 2008 at 04:25:53PM -0400, Garrett Goebel wrote:

> ... I'll write you a query which will give all the
> non-closed tickets which have never been changed by the owner.

Are you hoping this metric will give OLPC management more
justification for hiring more people?  Or to convince others that
OLPC is overworked?
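
For what it's worth, that report is cheap to produce.  A rough sketch
of the query, assuming Trac's standard 'ticket' and 'ticket_change'
tables (column names can vary a bit between Trac versions):

  -- Sketch only: open tickets whose owner has never recorded a
  -- change on them (adjust to the actual schema as needed).
  SELECT t.id, t.owner, t.summary
    FROM ticket t
   WHERE t.status <> 'closed'
     AND NOT EXISTS (SELECT 1
                       FROM ticket_change tc
                      WHERE tc.ticket = t.id
                        AND tc.author = t.owner);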

> Whatever you want to call it, you might find it useful to track the
> scope and complexity of the changes required to fix an issue. Priority
> doesn't get at that. It would allow you to collect historic data which
> could be used to project how much time tickets will take to be
> implemented and how many bug hours you'll get per change.

Do you know of any situations where this type of information is
usefully collected?  It sounds like trying to do a number of chained
correlation exercises (complexity/scope estimate, complexity/scope
actual, time-to-fix estimate, time-to-fix actual) based on partially
subjective, known-hard-to-observe/predict data and expecting to come
up with something useful.  More power to you if you succeed - you
will be able to make millions consulting / selling your software to
project-management-focused groups.  Have you ever done this analysis
before?

> >> How many Full Time Equivalent hours does a given developer represent?
> >
> > A guesstimate: about 25 hrs/wk of coding and 30 hrs/wk of talking for
> > social folks, maybe 30 hrs/wk of coding and 10 hrs/wk of talking for
> > contractors; and 5-8 full days off a month (including weekends).
> 
> Is there any list of developers and which slot each fits into?

Why?  What is the use of asking questions that are somewhat private (a
co-worker's opinion as to who's social or not) and unactionable by
you?  These are actually rhetorical questions, so let me get to the
point (below)...

> >> What components are the given developers capable of working on?
> >
> > I don't understand this question.
> 
> You've got folks who have particular areas of expertise. Or to put it
> the other way, developers who can work in certain areas but not
> others. If a Trac ticket is classified as belonging to a
> particular area, you can then project how many FTEs you've got on
> hand to work in that area.
> 
> I realize that this being an open source project leaves a lot open
> ended. But if you collect the data in a way that you can get at it
> effectively, you can use historic data to verify your assumptions and
> track and make projections against non-employee/non-contractor
> developers as well.

You could, if 1) it were feasible to collect; 2) its analysis were a
tractable problem; and 3) its analysis had (significantly) greater
benefit than cost.

1) feasibility: the data is possible to collect in this case (who
has worked on what), but not (I contend) for your other point
(predicting future development speed/progress).

2) tractability: highly unlikely, for both inherent reasons
(individual productivity over time has huge variance, high
periodicity, significant auto-correlation (positive and negative),
and other issues I can't list off the top of my head) and empirical
ones (enough people have had enough time and enough money at stake
that if it could've been done, it would've been done already).

3) benefit: you have just described a way of determining how many
FTEs are available for which areas that is expensive, onerous on the
measured, of highly questionable and undemonstrated feasibility, and
of highly questionable accuracy.  On the other hand, people have been
answering this question on much larger projects by just counting the
paid FTEs and making a back-of-the-envelope estimate of the unpaid
contributors, which has none of the disadvantages of, and many more
advantages than, your proposed method.

This meta-discussion takes valuable time - I don't think it's worth
the cost, given all of the above (yeah, I know that's hypocritical,
but this is the internet so I can do that :)).

> >> How long does the assigned developer think the specific ticket will
> >> take to complete? How long did it take?
> >
> > The limiting factors seem to me to be:
> >
> >  a) how long is the critical path of changes necessary to close the
> >     ticket?
> >  b) how overloaded are the required developers?
> >  c) how frequently are the required developers task-switching?
> 
> I was ambiguous. What I meant was a).

How likely is it that a) is knowable with a useful degree of
accuracy before the ticket is closed?  Pretty darn unlikely, I
contend.

> It'd be nice if there were a field in the ticket for the developer to
> note down how long they think they actually worked on a ticket.

If anybody actually did so, why would you expect people to be
1) accurate; or 2) using sufficiently similar definitions of "how
long I worked on this" for the figures to be comparable?

> If you combine this with the earlier mentioned field for
> scope/complexity (difficulty) then you can make some projections on
> how many FTE outstanding ticket hours you've got based on historic
> data. And you can assign each 'difficulty' level an average time to
> completion based on a running aggregate of the historic data.

You assume that this information, gathered subject to the
limitations above, is useful for planning, given the number of
*future* unknowns and changing circumstances in a project of
this/any significant size or complexity.  Even with perfect data
it'd just be an estimate, and so subject to outside forces whose
effects have huge (theoretically infinite) variance, which makes it
much harder to justify any high data-collection costs.
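
To be clear, my objection isn't the cost of the query: the "running
aggregate" you describe is roughly one SELECT against data Trac
already stores.  A sketch, assuming epoch-second time/changetime
columns and using 'priority' as a stand-in for your hypothetical
difficulty field:

  -- Sketch: average days from creation to last change for closed
  -- tickets, grouped by priority.
  SELECT t.priority,
         COUNT(*)                                AS closed_tickets,
         AVG((t.changetime - t.time) / 86400.0)  AS avg_days_open
    FROM ticket t
   WHERE t.status = 'closed'
   GROUP BY t.priority;

My objection is to the predictive value of the output.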

> You can also make some projections on the average number of defect
> hours you can expect will eventually be entered based on how the
> ticket is classified.
> 
> Put it together with allowing tickets to have a 'schedule' where
> schedule is YY-MM (ex: 08-07).  And you can hope to create a realistic
> schedule of what you hope to accomplish month by month.

I contend there is no plausible chance that the type of data you're
proposing to collect, about the type of process whose outcome you're
trying to predict, will actually help predict that process to a
usefully accurate and precise (monthly progress per ticket) extent.

> >> How long must a ticket sit dormant before it gets bumped and someone
> >> takes notice?
> >
> > It's not a matter of taking notice. It's a matter of being reminded at a
> > time when there's nothing more pressing on the priority queue.
> 
> I'm going to assume that we both agree that some action should
> initially take place.

This is called triaging, in case you want to search to see how often
it's discussed or considered. It's a lot.

> ... I think the person who filed it should be able to see within a
> few days whether or not the ticket is accepted or rejected, who it
> has been assigned to and how it has been prioritized.

They can see this.  Do you mean to add ", within a few days of
filing a ticket"?  The report you mentioned at the start of this
email could tell you that easily.

> 
> Define:
> o  who should be reminded
> o  'nothing more pressing'
> o  'priority queue'
> 
> And then we can see if it can be automated...
> 
> I'm probably not the only person who has ignored things in my queues
> because I'd rather work on the interesting problems or the low hanging
> fruit.

I'm not sure how this paragraph relates to the previous point.  Are
you implying that the triaging process's deficiencies are impacting
the unknowable platonic ideal Most Useful Ordering of work items?  Or
are you implying that the prioritisation process (which attempts to
achieve this MUO) is compromised by the limits of the triaging
process?  Then fix the triaging process.

As an aside, an ordering function based on "interesting", "low-hanging
fruit", and "items about which managers/useful people bug me nicely
but persistently" seems to be the best feasible, cost-effective way
of incentivising paid & unpaid contributors to approach this Most
Useful Ordering.

> > (Most developers seem to have actual work queues which are about 5
> > tickets long. In practice, their Trac ownership lists are often 30-100
> > tickets long. Go figure.)
> 
> I don't understand the difference between an 'actual work queue' and
> 'ownership'. Can you explain?

A ticket in Trac has an owner (an attribute of the ticket);
ownership is just that attribute, whereas a developer's 'actual work
queue' is what they're really working on at the moment.

> >> What is your rate of defects per change? How does that break down by
> >> severity and difficulty?
> >
> > Are you measuring by source commits, packages, test builds, candidate
> > builds, or releases?
> 
> Trac tickets.  Source commits might be better.

Why (are source commits better)?

> Some analysis of the composition of git changesets associated with a
> Trac ticket would be better.

What type of automated analysis do you propose?  If you mean manual
analysis of each commit for defects, that's code review, which is
already done.

> >> Are tickets reviewed before being closed?  By someone other than the
> >> implementer. Who?
> >
> > See #7014 for an example of the problem.
> 
> Looks like the big problems are easier to solve if you identify them
> as a bunch of little problems.

That's either entirely obvious or exactly what the ticket's done or
both :).

> The ticket should probably have been broken down into lots of
> smaller tickets and then updated to list them as its blockers.

It is broken down into smaller tickets.  They are 'listed'.  You
acknowledge both in your very next point...

> There's mention of other tickets all throughout... but you're not
> using the 'Blocked By' or 'Blocking' fields...

If you're happy that's the case, update the ticket.  If you think
Michael would've done that already if it were worth doing, and thus
conclude that it's not the right thing to do, then... this point
seems pointless.  Assuming the blockers list is known to the degree
one would want it recorded for all time (as opposed to for immediate
human consumption/triaging *right now* only), how are you envisioning
making use of it in the future?

> [how can] the OLPC process [be] so broken as to allow newly filed
> tickets to be completely ignored forever.

Are you claiming that your one ignored ticket, or a significant
number of ignored tickets (as you assume your report will
demonstrate), is an anomaly among viable, ongoing, value-delivering
software projects?  In my experience it's not, and given that other
prominent people have made the same complaint (google for "jwz
cadt"), I don't find that claim surprising, and thus would not assume
that a lot of resources should be expended to fix it (as it's not a
necessary - or, separate point, sufficient - precondition for
success).  Clearly fixing it is a Good Thing, but I think a periodic
report of untriaged Trac items and volunteers to triage them (google
for "Sugar BugSquad" or maybe "OLPC Support Gang") would be a much
better focus for motivated and able people (such as perhaps
yourself).
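
If somebody does want to pick that up, the untriaged-items report is
again nearly free to generate.  A sketch, with the same schema
assumptions as the query above:

  -- Sketch: newly filed tickets that nobody has touched since they
  -- were created.
  SELECT t.id, t.reporter, t.summary
    FROM ticket t
   WHERE t.status = 'new'
     AND NOT EXISTS (SELECT 1
                       FROM ticket_change tc
                      WHERE tc.ticket = t.id)
   ORDER BY t.time;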

> I wouldn't pretend to know what Scott should be doing with his time. I
> was only asking that the OLPC improve its process so it does not waste
> mine.

That's a very fair desire, IMHO. I personally thought your initial
email was not so neutrally worded and invited such comments as you
received (see point about Everybody Gets Angry But Then Calms Down
below).

> >> Besides, how can you hope to prioritize if you don't enumerate your
> >> resources,
> >
> >  http://wiki.laptop.org/go/Available_Labor-2008
> 
> Since (o) employee/contract is grouped... it might be useful to break
> them out. If you and I are both interested in my working on your issue
> tracking system to make it more useful for resource planning.

It's pretty trivial to break them out into another section in the
way you describe, and that would make the page easier to read for
the implied use case (who is available to work on a given area).
Just Do It.

> > Also notice how I'm splitting release prioritization from development
> > prioritization into separate management problems.
> 
> No I didn't notice. I don't see the words 'release' or 'development'
> mentioned on that URL.

This separation was clear to me.

> It would be interesting if there were a mapping from how you've
> categorized these people to how Trac tickets are categorized. I was
> assuming the 'component' field could be used for that purpose.
> 
> 
> >> constraints, and interdependencies? How can you balance
> >> work queues if you can't quantify them?
> >
> >> How can we as outsiders expect our interactions with the OLPC to be
> >> addressed in a timely manner?
> >
> > By bartering for the time of the people whose help you need.
> 
> I can't even contact the owner of ticket #6454. Apparently, I'm
> supposed to go on IRC, hang out and hope to catch him.

As mentioned, you can just amend the ticket, or email devel@ saying
'I'm trying to reach mtd, but who is that idiot?'.

> Barter with what? My overly polite and friendly disposition :-) ?

For a start, yes!  You put a smiley so I assume you're not being
serious, which is a shame, because yes, that's exactly what you
should use as part of your barter offering (and the fact - as I read
it - that you weren't being polite and friendly may have led to the
vehemence of some responses (Everybody Gets Angry But Then Calms
Down)... but I think we all realize that frustration at an
immediate/recent/repeated issue can be excused (which is why I'd
excuse both your and everyone else's vehemence on this, btw: you had
a neglected ticket and others had a crowd shouting 'pick me! pick
me!' (Shrek reference))) (previous parenthetical sponsored by SBCL).

In case you haven't seen it, or in case it can be of use to others
who may not have seen it, this subtopic is well treated here:
http://www.catb.org/~esr/faqs/smart-questions.html

> > OLPC has no one tasked to track down abandoned tickets. Who should they
> > assign?
> 
> Whoever assigns tickets when tickets need to be assigned. Would that be you?

See my point about triaging, above.

> Make a sql query that does the work for you. Give me that copy of the
> Trac database and I'll take a swing at doing it for you.

Cool!  Thanks (as I/we all benefit from you helping OLPC employees
have one less thing to worry about, etc.).

> cheers,
> 
> Garrett

Martin
