[Testing] Post-meeting conversation: Test results reporting

Mel Chua mel at laptop.org
Fri Dec 19 02:57:34 EST 2008


This is from the conversation immediately after our test community 
meeting today, on test result reporting metrics. Logs are at 
http://meeting.laptop.org/olpc-meeting.log.20081218_1859.html and notes 
produced are below.

This may or may not be coherent, and is provided more or less for 
context. Tabitha, the summary is "keep making text summaries of Activity 
tests as you've been doing, if that's what you feel comfortable with for 
now, for the Welly testers - we'll make sure they get transmuted into 
the most developer-useful form."

Comments and clarifications welcome... I can't describe this much better
right now - my brain's shot and my battery's about to die. Please ask
questions if you have any, or if I'm being incoherent.

--Mel

Good Things So Far (and why they're good)
-----------------------------------------

Welly testers wrote up a spreadsheet for smoke testing <here>[1] (url 
needed).

We understand that it works really well for them because:

   1) It helps them write up more uniform results. This makes life
      easier for testers, developers, and other stakeholders looking at
      the results later.
   2) It helps new testers get started, by prompting them on what to
      look for. This in turn lowers the cost of getting a new tester
      started, making it more likely that experienced testers will
      spend time doing it.
   3) It makes it easier for them to see and aggregate results at the
      end (see the sketch after this list), since new testers tend to
      be more familiar with spreadsheets than Semantic MediaWiki.
   4) It helps new testers get started because they can just write down
      their thoughts without having to learn a complicated system to
      put their results into at first.
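
To make point 3 concrete, here's a minimal sketch of how results
exported from such a spreadsheet (as CSV) might be aggregated at the
end of a test session. The column names ("activity", "result") are
hypothetical, since the spreadsheet isn't linked yet; adjust them to
match the real headers.

# Minimal sketch: summarize smoke-test results exported from the
# spreadsheet as CSV.  Column names are hypothetical stand-ins for
# whatever the real Welly spreadsheet uses.
import csv
from collections import Counter, defaultdict

def summarize(path):
    per_activity = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            per_activity[row["activity"]][row["result"].strip().lower()] += 1
    return per_activity

if __name__ == "__main__":
    for activity, results in sorted(summarize("smoke_tests.csv").items()):
        tally = ", ".join("%s: %d" % (r, n) for r, n in sorted(results.items()))
        print("%s -- %s" % (activity, tally))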

Some thoughts:
--------------
   1) If these results were turned from plaintext into entries in a
      formal test results reporting system, it would be easier for
      other stakeholders (including release-mgmt people and other
      testers) to see how much testing has been done. It would also
      help other testers identify what testing hasn't yet been done,
      so they can better allocate their time (a rough sketch of that
      kind of coverage check follows this list).
         (some testers might also benefit, but I have trouble
         identifying them.)

   2) If information from these results were subsequently filed as
      well-written bugs, developers would be happy, because well-filed
      bug reports go to the right people without distracting the people
      who aren't interested. Also, it's convenient to be able to work
      with an e-mail workflow. Release management people would also be
      happy, because bugs are the preferred way to handle resource
      allocation and work tracking.

   3) Therefore, we want a method that maximizes the chance that
      someone will turn test reports into filed bugs.

   4) However, we don't want testers to be out of the loop after a bug
      is filed, either by them or on their behalf.  We want them to be
      as involved in the loop of diagnosis, programming, testing and
      deployment as they want to be, which implies that they should
      understand the process of going through that loop, and how they
      *can* be involved in each part.
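
Here's the minimal sketch of the coverage check mentioned in thought 1:
diff a planned test matrix against the reports actually received.
PLANNED and the report dicts are hypothetical stand-ins for whatever a
real reporting system would store.

# Minimal sketch: find what testing hasn't been done yet by diffing a
# planned test matrix against the reports received so far.  All data
# here is made up for illustration.
PLANNED = {("Browse", "8.2.0"), ("Record", "8.2.0"), ("Chat", "8.2.0")}

def untested(planned, reports):
    """Return the (activity, build) pairs with no test report yet."""
    covered = set((r["activity"], r["build"]) for r in reports)
    return planned - covered

reports = [{"activity": "Browse", "build": "8.2.0", "result": "pass"}]
for activity, build in sorted(untested(PLANNED, reports)):
    print("not yet tested: %s on build %s" % (activity, build))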

How Test Results Should Work (For Now; first pass):
---------------------------------------------------

* testers submit test reports, with little enforced structure
* the Community Testing group meets, looks them over, does triage, and
   works out what's a legitimate bug
* the Community Testing group files bugs as necessary, taking note of
   the individual tester's willingness to take part in communication,
   and CC'ing them on the bug if they want to be (a sketch of this
   hand-off follows the discussion below)

-- cjb: (Hm.  This model of having testers be hands-off after they
    write a report is pretty much in conflict with our rationale.
    What's up with that.)
-- mel: I'd phrase this as "There's a workflow within the test
    community," just as you have a workflow within the development
    community (as m_stone's trac tickets workflow wiki page so
    elegantly illustrates).  People can participate in as little or as
    much of that workflow as they choose; I could imagine an awesome
    test volunteer deciding that they just want to triage, for example.
    What we are talking about here is separating the "Actually run and
    test the software" part from the "and report it to developers in
    the most useful format/form for them" part.  Just like the person
    who makes the patch doesn't have to be the one that includes it in
    a build, etc.
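
To make that separation of roles concrete, here's a minimal sketch of
the report -> triage -> bug hand-off. Every name in it (TestReport,
looks_like_bug, file_bug) is made up for illustration; in practice
file_bug() would open a Trac ticket, and deciding what's a legitimate
bug is human judgment at the meeting, not a string match.

# Minimal sketch of the report -> triage -> bug hand-off described
# above.  All names are hypothetical; triage is really human judgment,
# and filing would really create a Trac ticket.
class TestReport(object):
    def __init__(self, tester, text, wants_cc):
        self.tester = tester        # who ran the test
        self.text = text            # free-form notes, little structure
        self.wants_cc = wants_cc    # willing to stay in the loop?

def triage(reports):
    """Community Testing group pass over the submitted reports."""
    for report in reports:
        if looks_like_bug(report):  # stands in for a meeting decision
            cc = [report.tester] if report.wants_cc else []
            file_bug(report, cc)

def looks_like_bug(report):
    return "crash" in report.text.lower()

def file_bug(report, cc):
    print("filing bug from %s's report, cc=%s" % (report.tester, cc))

triage([TestReport("tabitha", "Record crashes on resume", wants_cc=True)])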

cjb's developer idea of what a tester's utopia might be:
--------------------------------------------------------

Testers find a process that is unburdensome and helpful.  They know
that they can be as involved as they want to in the work done on their
bugs.  They're asked, if they want to be, things like "Do you think
this change would solve your problem?".  Their work is documented,
shared, appreciated, and credited.  Someone running an OLPC/Sugar
deployment can look up what tests have been performed on a component;
this information should even be part of the release notes for
software. ...

mchua's tester idea of what a developer's utopia might be:
----------------------------------------------------------

Developers find a process that is unburdensome and helpful - and
usually invisible in the background. By default, they're notified only
of bugs (via well-written, reproducible bug reports) on the components
that they care about, and this notification uses the communication
channel(s) they prefer (trac, email, IRC, etc.). The process by which
the bug report was created is transparent, and they can go back to the
testers who carried the bug through that process and open up a dialogue
with them, asking questions and going back and forth as they fix the
bug and then verify the fix. Their work is documented, shared,
appreciated, and credited, and they even get a follow-up from someone
(perhaps a tester) after their fix has later been pushed to a release
and deployed, saying "Thanks - and here's the difference you made to
the users you were doing this for!"
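
A small sketch of the notification half of that utopia, under the same
caveats as the earlier sketches: the preference table and the print()
dispatch are hypothetical stand-ins for Trac's CC machinery, a mail
gateway, or an IRC bot.

# Minimal sketch: notify developers only about components they care
# about, over the channel each prefers.  The table and dispatch are
# made-up placeholders.
PREFS = {
    "Record": [("cjb", "email")],
    "Browse": [("mstone", "trac"), ("cjb", "irc")],
}

def notify(component, summary, prefs=PREFS):
    for developer, channel in prefs.get(component, []):
        print("[%s] -> %s: new bug in %s: %s"
              % (channel, developer, component, summary))

notify("Browse", "pages fail to render after suspend/resume")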




Addendum 2: Criteria for success:
---------------------------------

mstone's proposal:
   We need to be careful to pay attention to results, not just efforts.
   For this reason, we should take care to judge collaboration
   protocols we invent according to how efficiently they inform and
   motivate the total set of people who need to work together to solve
   the Big Problem, not according to how well they solve any individual
   small problem.
mchua agrees, though she's not yet sure how to measure whether
something is more informing or motivating than something else, in this
context.


Addendum:  Rationale:
---------------------

Michael described how it's important to communicate much more than just
by filing a bug.  Important things that aren't Trac bugs include:

<m_stone> cjl: the other 3/4 include: communicating the results to
       deployers, e.g. through release notes, communicating them to
       hackers, e.g. by irc, mail, and tickets, and following through
       on the rest of the development life-cycle in order to get them
       fixed.

    (the other thing that I really wanted to communicate here is that
    perhaps the greatest and most satisfying opportunity for testers
    and developers to work together is _after_ the bug has been filed,
    during the ensuing search for a fix/workaround, testing of that
    product, release to customers, and subsequent support)
    +infinity.

<cjb> it would be pretty sweet to have a section in the release notes
       called "What The Testers Found"
<cjb> with a description of which groups tested which parts of the
       release, and what they thought and stuff
<cjb> that would be pretty grassroots, right there

