[Olpc-open] [sugar] G1G1 Pre-installed Activities Request for Help Testing
Sameer Verma
sverma at sfsu.edu
Thu Sep 25 20:38:10 EDT 2008
On Thu, Sep 25, 2008 at 5:31 PM, Yoshiki Ohshima <yoshiki at vpri.org> wrote:
> At Thu, 25 Sep 2008 14:37:09 -0500,
> Sameer Verma wrote:
>>
>> On Thu, Sep 25, 2008 at 12:39 PM, Yoshiki Ohshima <yoshiki at vpri.org> wrote:
>> >> > BTW, the spreadsheet is at
>> >> > http://spreadsheets.google.com/ccc?key=p_Xhb6KcXLyEViA50CnCaDg&hl=en
>> >>
>> >> So by that metric, Terminal is the best activity. Huh?
>> >
>> > Yeah. Do these numbers mean anything? What is the point of
> averaging unrelated numbers? Averaging a lines-of-code score and a
> usability score almost looks like the idea of an innumerate.
>> >
>> The numbers are just fillers. They don't mean anything. The idea is
>> for you guys to fill in numbers based on a metric and not because it's
>> popular on the list. Feel free to edit as needed.
>
> I still don't get it... Even if people edit the spreadsheet "as
> needed", at what point is it going to start "making sense"? The
> question is whether things can be put on a one-dimensional axis in this
> way. As you also know, using numbers doesn't necessarily make it
> "unbiased".
>
> -- Yoshiki
Hi Yoshiki,
So, let's look at this from the way it first started. There was a call
for a list of favorites, and the list came in. Everyone has their
favorite list. If I say "I want Terminal," that's a binary decision:
yes/no. If I say "Terminal is Cool/OK/Sucks," the input has three
levels. Why I say "Cool" is of course based on my own intuitive
assessment, but it's not explicit.
Instead, if we say that the rating of Terminal is based on attributes
such as epistemological impact, quality, etc., we now have something
more explanatory behind "Cool". Additionally, by factoring in a weight
for each attribute, we can say that they are not all equally
important. Epistemological impact is the most important, so we assign
it 25%.
By scoring each activity on nine attributes, we spread the bias across
the attributes. The weighted score of an activity is then the sum of
its attribute scores, each multiplied by that attribute's weight.
As for the biased/unbiased part: yes, you can score all attributes at
10 and get the maximum possible score, but we are all doing this for a
reason, so I assume we'll all be prudent about scoring. Additionally,
a scale of 1 to 10 for each attribute provides more variation than a
binary yes/no answer. In the end, it all depends on how subjective
your scoring is. For example, I really don't know how to rate "Fun"
for Terminal on a scale of 1 to 10, but maybe we can collectively say
that Terminal scores 7 for "Fun". Now, if Fun isn't weighted highly
for G1G1, it won't make much of a difference anyway. In fact, if we
had time, we could also use Monte Carlo simulation
(http://en.wikipedia.org/wiki/Monte_Carlo_method) to improve the
inputs. Most spreadsheets can do this easily.
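If we went that route, the sketch below shows the idea, again in
Python with made-up numbers: instead of committing to a single guess
per attribute, each uncertain score is drawn from a plausible range,
and we look at the spread of the resulting weighted scores.

import random

# Same hypothetical weights as above (illustrative only).
weights = {"epistemological impact": 0.25, "quality": 0.15,
           "fun": 0.10, "remaining attributes": 0.50}

# A (low, high) range per attribute reflecting our uncertainty,
# e.g. "Fun" for Terminal could be anywhere from 4 to 9.
ranges = {"epistemological impact": (5, 8), "quality": (7, 9),
          "fun": (4, 9), "remaining attributes": (4, 7)}

# Recompute the weighted score many times with sampled inputs.
totals = []
for _ in range(10000):
    sample = {a: random.uniform(lo, hi) for a, (lo, hi) in ranges.items()}
    totals.append(sum(weights[a] * sample[a] for a in weights))

totals.sort()
print("median weighted score:", round(totals[5000], 2))
print("5th-95th percentile:", round(totals[500], 2), "to", round(totals[9500], 2))

A spreadsheet version is the same thing: randomize the score cells
within their ranges and recalculate the sheet a few thousand times.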
The weighted scoring approach isn't new. It's used quite commonly in
multi-criteria assessment situations. Given that we are on a time
crunch, this may not be the way to go. Maybe flip a coin and be done
with it :-)
Hopefully next time we can spend more time on a multi-criteria
approach instead of just "Cool".
cheers,
Sameer
--
Dr. Sameer Verma, Ph.D.
Associate Professor of Information Systems
San Francisco State University
San Francisco CA 94132 USA
http://verma.sfsu.edu/
http://opensource.sfsu.edu/