[olpc-nz] Activities.sugarlabs.org editors

Rafael Ortiz rafael at activitycentral.com
Wed Feb 23 12:41:49 EST 2011


Hi Tom

On Wed, Feb 23, 2011 at 3:55 AM, David Farning <dfarning at activitycentral.com> wrote:

> weird. this just came through today.  do you still have questions that
> I could help you with?
>
> david
>
> On Mon, Feb 14, 2011 at 3:19 AM, Tom Parker <tom at carrott.org> wrote:
> > On Sat, 2011-02-12 at 08:43 -0600, David Farning wrote:
> >> We never had the resources to test new activities before release in
> >> a.sl.o, as a result activities are released before qa.  This has been
> >> causing increasingly more trouble.  As the quality assurance on a.sl.o
> >> falls, fewer deployments use it:(
> >
> > Releasing with no QA at all is a very undesirable situation.
> >
> > I think we are technically capable of performing approvals, but our
> > resources are quite limited. We meet every Saturday, so requiring our OK
> > would cause significant delays. In QA mode, is there a public "beta"
> > site where the activities are publicly available until they are
> > approved? I sometimes see several releases in one day; I don't know if
> > this is due to feedback from downloads via the aslo site, as they rarely
> > have release notes to explain what is going on.
> >
>

We don't have such a site. Our thinking in that regard is that we should be
as transparent as possible, and also as fast as possible in publishing
activities. We only have a model of non-trusted and trusted activities,
described in

http://wiki.sugarlabs.org/go/Activity_Library/Editors/Policy


Regarding release notes, we are working on a new aslo that will make them
mandatory when uploading new activities or updating existing versions.
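
To give a rough idea of the kind of check we have in mind (this is only an
illustrative Python sketch with made-up names, not the actual aslo code), an
upload could be validated along these lines:

    # Hypothetical sketch: reject activity uploads that lack release notes.
    # Names are assumptions for illustration, not the real aslo code.
    def validate_upload(bundle_id, version, release_notes):
        """Return (accepted, message) for a new activity version."""
        notes = (release_notes or "").strip()
        if not notes:
            return False, "%s v%s rejected: release notes are required" % (bundle_id, version)
        return True, "%s v%s accepted" % (bundle_id, version)

    accepted, message = validate_upload("org.example.MyActivity", 12, "")
    print(message)   # rejected: release notes are required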





> > If activities are going to be approved, what are the criteria for
> > approval? Obviously, the occasional releases that don't start or don't
> > work at all shouldn't be approved. Should the recent batch of games
> > which consume 100% cpu be allowed (I would say no)? What if the previous
> > version(s) also did so (much more difficult)? We could say a release
> > should introduce no new regressions, but what about new features that
> > have bugs? What about bugs that are fatal but rare (like the physics
> > core dump on scribble (vaguely recall this might be fixed now))?
> >
> > Are some activities more important and held to a higher standard (such
> > as the set that can't be deleted) and others less important and so held
> > to a lower standard?
> >
> > How many different releases should they be tested against? We can
> > dedicate a few XO-1s to different builds for this purpose, but we don't
> > have many XO-1.5s in Auckland to do that.
> >
> >
>

I think this can be answered by the non-trusted/trusted model. We can also
deal with regressions by holding back new versions and, where possible,
using admin permissions to erase the versions that introduced serious bugs.
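
As a purely illustrative sketch of how that gating could behave (the names
below are placeholders I made up; this is not how a.sl.o is actually
implemented), the flow is roughly:

    # Hypothetical sketch of the trusted/non-trusted gating and the
    # hold-back/erase handling described above; illustrative only.
    TRUSTED_AUTHORS = {"example-trusted-author"}   # assumed placeholder name

    def publish_state(author, has_serious_regression):
        """Decide what happens to a newly uploaded activity version."""
        if has_serious_regression:
            return "held-back-or-erased"   # editors with admin permissions step in
        if author in TRUSTED_AUTHORS:
            return "public"                # trusted authors publish immediately
        return "pending-review"            # non-trusted uploads wait for an editor

    print(publish_state("newcomer", False))                 # pending-review
    print(publish_state("example-trusted-author", False))   # public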


Cheers!

