[Testing] [olpc-nz] Improving the reporting of test results

David Farning dfarning at activitycentral.com
Sun Jan 2 15:31:50 EST 2011


On Tue, Dec 21, 2010 at 10:51 PM, Sridhar Dhanapalan
<sridhar at laptop.org.au> wrote:
> We at OLPC Australia have been the recipient of some really good
> testing feedback (especially from NZ) on the lists. However, I also
> perceive a major weakness in the process. My central question is, how
> do we ensure that the feedback actually gets to the relevant people in
> a useful way?
>
> The executive summary is that all bugs/issues should be reported in an
> issues tracking system (redmine, trac, bugzilla, etc.), so that the
> developers can easily tend to and manage them. I'll explain...
>
> The basic develop-test-report cycle goes like this:
>
>  1. software/hardware is developed
>  2. testers give it a spin
>  3. testers note their results
>  4. developer easily sees the feedback
>  5. developer can easily manage the feedback
>  6. based on the feedback, GOTO 1
>
> What I see is that steps 1-3 are being handled quite well.
>
> There appears to be a big hole between steps 3 and 4. This is because
> the testing feedback is only posted as prose on the mailing lists.
> There's no guarantee that the relevant developers are seeing
> those messages. Given the sheer volume of list messages, they probably
> aren't.
>
> Even if the developer does see the feedback, how does (s)he manage it?
> This is what issues tracking systems are for. If an issue is properly
> reported, it can be properly managed along with the other tasks
> required for the project. Statuses, priorities and owners can be
> assigned. Now that it's in the system, it won't get lost. This is all
> neatly described at http://wiki.laptop.org/go/Reporting_bugs
>
> If step 4 occurs but not step 5 (i.e. feedback is received as prose,
> not entered into an issue tracking system), it is up to the developer
> to turn the prose into a bug report. This is time consuming and
> detracts from the act of development (step 6).
>
> In summary, I strongly urge testers to fully report their findings in
> the appropriate issues tracking system. In doing so, you make sure
> that the developers see your findings (thus making your testing
> worthwhile) and can easily act upon them.
>
> Examples of tracking systems to submit to:
>
>  OLPC: http://dev.laptop.org/
>  Sugar Labs: http://bugs.sugarlabs.org/
>  OLPC Australia: http://dev.laptop.org.au/
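
For testers who want to script their reports, below is a minimal
sketch of filing a ticket against a Trac instance such as
dev.laptop.org over XML-RPC. It assumes the tracker has the Trac
XmlRpcPlugin enabled and that you have an account there; the URL,
credentials, component, keywords and ticket text are placeholders,
not real values.

    # Hypothetical sketch only: filing a test result as a Trac ticket
    # over XML-RPC. Assumes the tracker runs the Trac XmlRpcPlugin and
    # that you have an account; every value below is a placeholder.
    import xmlrpc.client   # 'import xmlrpclib' on Python 2

    # Authenticated XML-RPC endpoint ('tester'/'secret' stand in for
    # your own credentials).
    tracker = xmlrpc.client.ServerProxy(
        "https://tester:secret@dev.laptop.org/login/xmlrpc")

    summary = "Record activity exits on resume from suspend (placeholder)"
    description = (
        "Steps to reproduce:\n"
        "1. Start the Record activity and take a photo.\n"
        "2. Close the lid to suspend, then resume.\n"
        "3. The activity exits and the photo is not in the Journal.\n\n"
        "Expected: the activity survives suspend/resume.\n"
        "Build and hardware details go here."
    )

    # ticket.create(summary, description, attributes, notify) -> ticket id
    ticket_id = tracker.ticket.create(
        summary,
        description,
        {"component": "distro", "keywords": "testing"},  # placeholder fields
        True,                                            # notify by email
    )
    print("Filed ticket #%d" % ticket_id)

The same report pasted into the tracker's web form works just as
well; the point is simply that it ends up in the tracker, where
statuses, priorities and owners can be assigned.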

This is an interesting and challenging question with no single right
answer. Different projects and ecosystems come up with their own
solutions. Stepping back to look at the challenge from a meta level:

QA is inherently difficult for volunteer projects.  There are three
factors to consider:

1. 'Value' of the issue to individual participants.
2. 'Expertise' of individual participants.
3. 'Incentive' for participants to work on issues.

The basic question is how much incentive an individual has to
identify and fix an issue. Most bugs go through a process of:
1. Feedback - The bug is reported.
2. Fix - The bug is fixed.
3. Finished product - A fixed version of the software is delivered to
testers/users.

As we can see from the above thread (and from experience), different
bugs take different paths through this workflow... which is OK. The
key is that each bug must be brought to the attention of someone who
will see that it makes it through the next step.

At one end of the spectrum there are two participants:
1. Reporter
2. Developer

Gary Martin exemplifies this workflow. Every week he follows the bug
tracker and mailing lists, fixes bugs in the activities he
maintains... and then uploads new versions to ASLO.

At the other end of the spectrum we have the various (usually complex)
systems developed by proprietary software development companies.

The Sugar/OLPC ecosystem is somewhere in the middle. It has four
basic participants:
1. Reporter.
2. Project Manager (missing).
3. Developers.
4. Image Builder.

My guess is that consumers, such as deployments, will need to band
together to hire a PM to ensure that test results are prioritized,
forwarded to developers (in a format that is useful to them), fixed,
and shipped as an update.

david
