[Sugar-devel] The quest for data
Sameer Verma
sverma at sfsu.edu
Fri Jan 3 23:59:51 EST 2014
On Fri, Jan 3, 2014 at 2:23 PM, James Cameron <quozl at laptop.org> wrote:
> Metrics can direct action.
>
> Unfortunately, in the absence of meaningful metrics, the meaningless
> metrics will also direct action.
>
True. In fact, the reliability of the whole exercise depends on the
reliability of the generated data. For instance, if the timestamps
are corrupt, then so is the analysis, unless the data are treated for
that bias.
> One of the assertions inherent in OLPC is that merely using a device
> can have an effect on a brain, regardless of what activities are used.
Brain, perhaps. I'm leaning more on the learning side ;-)
>
> In the data listed, I haven't seen any use of more fundamental
> measurements like how long a device is used for. OLPC's builds
> have a power log. This captures time spent using a device.
True. Activities do not report end times, nor whether a frequency
count reflects the number of times a "new" activity was started or
simply a resumption of the previous instance. Walter had indicated
that there is some movement in this direction to gather end times.
The sugar-stats system does record end times. We still have an
assumption (to be addressed by the researcher) that x seconds of use
actually leads to a delta of y in learning. Usually we establish
correlation, and support a case for causality with proxy
observations.
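To make the end-time point concrete, here is a minimal sketch of what
an activity could append on shutdown. The names and log location are
hypothetical, not Sugar's actual API:

    import json
    import time

    LOG = '/home/olpc/.session-log'  # hypothetical location

    def session_record(activity_id, start_time, resumed):
        """Append one session record with explicit start and end
        times, plus a flag distinguishing a fresh start from a
        resumption of a previous instance."""
        record = {
            'activity': activity_id,
            'start': start_time,
            'end': time.time(),
            'resumed': resumed,
        }
        with open(LOG, 'a') as f:
            f.write(json.dumps(record) + '\n')

    # e.g. on shutdown:
    # session_record('org.laptop.TurtleArtActivity', start_time, True)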
>
> It is especially relevant for a device that might also be used in
> Gnome rather than Sugar. Harvest seems to have arisen out of the
> availability of the Journal.
>
Yes, the methods that use the datastore as a source rely on the
Journal, but the sugar-stats system does not. I believe it collects
data in GNOME as well.
The way I see it, there are four parts to this supply chain:
measurement, collection, analysis, and reporting (see
http://www.educause.edu/ero/article/penetrating-fog-analytics-learning-and-education).
1) The data have to be generated at the source (Sugar activity or
D-Bus) with the required granularity and reliability. So, for
instance, TurtleArt can record the type of blocks, or Maze can record
the number of turns. This will vary by activity. We also have to be
mindful of reliability, for instance of internal clock variation in
timestamps (see the first sketch after this list).
2) We need a way to collect data on an ongoing basis on the laptop.
This may be in the Journal datastore, or in an RRD file, as in the
case of sugar-stats (see the second sketch below). We then continue
the collection by aggregating the data at the XS/XSCE and/or at a
central location (as with the Harvest system) so that the data can be
analyzed.
3) The analysis stage can work with the raw data (basic statistics,
correlation, qualitative work), or the data can be aggregated (as
with the Jamaica CouchDB system doing basic stats) and made ready for
reporting; the third sketch below shows the correlation piece. Some
of this may be automated, but to go beyond "PowerPoint pie charts",
it's really a case-by-case effort.
4) The reporting can be done via visualization and/or by generating
periodic reports. The reporting should be specific to the person(s)
looking at it. No magic there.
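Here is the first sketch: minimal source-side measurement. The helper
name and log path are made up for illustration; note that
time.time() trusts the local clock, which is exactly the reliability
caveat above.

    import json
    import time

    def log_event(activity, event, detail, path='/tmp/events.log'):
        """Hypothetical helper: append one fine-grained, timestamped
        event record. time.time() relies on the local clock, so
        clock drift shows up directly in the data."""
        record = {'t': time.time(), 'activity': activity,
                  'event': event, 'detail': detail}
        with open(path, 'a') as f:
            f.write(json.dumps(record) + '\n')

    # TurtleArt could record block types, Maze the number of turns:
    # log_event('TurtleArt', 'block_used', 'repeat')
    # log_event('Maze', 'turn', 'left')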
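Second sketch: ongoing collection into an RRD file, assuming the
python-rrdtool bindings. The data-source and archive parameters here
are illustrative, not sugar-stats' actual schema.

    import os
    import rrdtool

    RRD = 'usage.rrd'

    if not os.path.exists(RRD):
        # One GAUGE data source sampled every 60 s; keep 24 h of
        # one-minute averages.
        rrdtool.create(RRD, '--step', '60',
                       'DS:active:GAUGE:120:0:1',
                       'RRA:AVERAGE:0.5:1:1440')

    # Each minute, record whether the laptop is in use (1) or
    # idle (0).
    rrdtool.update(RRD, 'N:1')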
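Third sketch: the analysis step, here a plain Pearson correlation on
hypothetical per-child figures (e.g. minutes of TuxMath use against
numeracy test scores). As noted, this supports but does not establish
causality.

    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length
        sequences of numbers."""
        n = float(len(xs))
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # e.g. pearson(minutes_of_tuxmath, test_scores)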
Now, of course, if the data at the source are corrupt, that may be
reflected in the report. There are ways to address missing data and
biases, but it would be better to have a reliable way to generate
data at the source.
> On the other hand, use of metrics tends towards standardised testing,
> with the ultimate implementation being an examination that must be
> completed each time before using a device for learning. Imagine
> having to delay learning!
How the data will be used remains to be seen. I have not seen it
being used in any of the projects that I know of. If others have
seen or done so, it would help to hear from them. I know that in
conversations and presentations to decision makers, the usual sore
point is "can you show us what you have so far?" For Jamaica, we have
used a basic exploratory approach on the Journal data, corroborated
with structured interviews with parents, teachers, etc. So, for
instance, the data we have show a relatively large frequency of use
of TuxMath (even allowing for the various biases). In addition, we
have qualitative evidence that supports both usage of TuxMath and
improvement in numeracy (on a standardized test). We can support
strong(er) correlation, but cannot really establish causality. The
three data points put together make for a compelling case. As an
aside, I did encounter a clever question in one of the presentations:
"What's constructivist about TuxMath?" That's a discussion for
another thread :-)
>
> I don't like the idea of standardised testing. I've seen the damage
> that it does. Sir Ken Robinson had a few things to say about that, in
> his talk Changing Education Paradigms.
>
It plays a role in the education-industrial complex, and it is
difficult to walk away from it entirely, but yes, YMMV.
cheers,
Sameer
> --
> James Cameron
> http://quozl.linux.org.au/