Early ARIADNE Experiences
Our experiences over the last decade in the ARIADNE Foundation (www.ariadne-eu.org/) illustrate how seemingly sensible strategies for dealing with quality turn out to be naïve and misguided in the context of a large-scale repository. We originally intended to impose a quality policy that would restrict the introduction of learning objects into our repository. After long and difficult discussions, we had to recognize that there was no consensus on precise and usable criteria for determining quality. One important reason why such consensus could not be reached was the severe cultural difference in how quality was viewed in, for instance, Latin and Nordic European countries. We then settled for a quality review of the metadata about the learning objects, rather than a review of the objects themselves. Initially, only the objects whose metadata had been reviewed were made available to end users. However, this created a serious bottleneck, as it would sometimes take a long time before new content was actually reviewed. In the end, we opted for a scheme wherein the ranking of a learning object in a list of search results could be influenced by, among other things, the fact that its metadata had been reviewed. Reflecting on this evolution with the benefit of hindsight, we can discern a deeper trend: rather than thinking about quality in a binary way, where resources either do or do not meet quality criteria, we should reconsider this notion as one that is multi-faceted and that influences the ranking rather than the inclusion of an object in a particular context.
The problem with Quality
The fact that quality is context-dependent complicates its automated processing: it suggests that algorithms need to take a wide variety of characteristics into account for a realistic determination of quality. Worse, the subjective nature of quality suggests that such automated processing may be intrinsically impossible: how can an algorithm capture the highly personal preferences and characteristics that determine the experience of an individual? I believe that this complication leads to an overly strong focus in the “learning quality world” on less relevant quality aspects that can easily be measured and processed, such as process characteristics or simple learning object properties. While these aspects are clearly relevant to some extent, focusing on them sidesteps the real problem of measuring quality in context. I believe that a different approach to quality is needed altogether; I will refer to it here as “LearnRank”.
As long as the web was relatively small, the main selling point of search engines was how much material they indexed and how efficiently they processed a query. However, as the web moved past the “tipping point” and the number of results for a typical query rose from tens to hundreds to many thousands and even millions, ranking the results in a meaningful way became more and more important. Indeed, Google’s success is often attributed to its PageRank algorithm, used to rank search results. This algorithm is only minimally based on keywords, the number of times they occur, and where in the document they occur. Much more innovative and crucial is PageRank’s use of incoming links to determine the relevance of a particular document. In this way, PageRank exploits the human activity of all web authors who decide to link to pages that they consider relevant. Note that this sort of algorithm does not require any librarian-type effort, but rather is integrated into the very act of authoring material in the first place: this is important, as “librarian metadata don’t scale” (Duval & Hodgins, 2004b; Weibel, 2005)!
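To make the link-analysis idea concrete, here is a minimal Python sketch of PageRank computed by power iteration: rank flows along incoming links, so pages that many authors link to accumulate a high score. The toy graph, damping factor, and iteration count are illustrative assumptions, not details taken from the sources cited above.

```python
# Minimal PageRank sketch: rank pages by incoming links via power iteration.
# The toy graph and parameter values are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:  # dangling page: spread its rank evenly over all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A page that many others link to ("b") ends up with the highest rank.
toy_web = {"a": ["b"], "b": ["c"], "c": ["b"], "d": ["a", "b"]}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```

The essential point is that the ranking emerges from linking decisions authors make anyway, without any separate cataloguing effort.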
Towards the development of LearnRank
If we apply the basic idea behind PageRank to learning, then the “LearnRank” of a resource should indicate how useful people have found this object for their learning. And, as with PageRank, we would need to be able to determine this without asking the learner, the author, or a librarian to provide additional metadata about the object in question.
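The text deliberately does not fix a formula, but the general shape of such a ranker can be sketched. In the following Python sketch, the event structure, all names, and the similarity measure are hypothetical; it scores an object purely from implicit usage events, and it anticipates the next paragraph by weighting each event by how closely its context resembles the learner’s own.

```python
# Hypothetical LearnRank skeleton: score a learning object for a learner
# from implicit usage events only, without asking anyone for extra metadata.
from dataclasses import dataclass

@dataclass
class UsageEvent:
    object_id: str
    context: dict  # e.g. {"discipline": "HCI", "language": "fr", "level": "undergrad"}

def context_similarity(a: dict, b: dict) -> float:
    """Illustrative similarity: fraction of shared context attributes."""
    keys = set(a) | set(b)
    if not keys:
        return 0.0
    return sum(1 for k in keys if a.get(k) == b.get(k)) / len(keys)

def learnrank(object_id: str, learner_context: dict, events: list[UsageEvent]) -> float:
    """Sum context-weighted evidence that this object was used in settings
    resembling the learner's own; more relevant contexts count for more."""
    return sum(context_similarity(e.context, learner_context)
               for e in events if e.object_id == object_id)

events = [UsageEvent("obj-42", {"discipline": "HCI", "language": "fr"}),
          UsageEvent("obj-42", {"discipline": "physics", "language": "en"})]
print(learnrank("obj-42", {"discipline": "HCI", "language": "fr"}, events))  # 1.0
```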
Objects that have been used in many contexts, or, more importantly, in many contexts that are relevant to a specific learner, should have a higher LearnRank for that learner. Even though we are optimistic that progress is being made on gathering empirical data on learning effects, there are alternatives available already to help bootstrap the LearnRank algorithm, as the two scenarios below illustrate (a computational sketch follows them).
— Imagine that 10% of professors teaching Human-Computer Interaction in French to undergraduate students start using a particular tool with their students. Is that not a strong indication that this tool has high “quality” in that context? What about 20%? Or 80%?
— Suppose that we track (as we can!) the correlation between the objects that learners work with and their performance on a post-test that assesses whether they have actually mastered a specific law of thermodynamics. Would that correlation not give a good indication of “quality”?
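By way of illustration, these two bootstrap signals could be computed from logged data roughly as follows. All names and numbers below are fabricated for the example, and the second signal is a plain Pearson correlation rather than a validated measure of learning effect.

```python
# Two hypothetical bootstrap signals for LearnRank, as in the scenarios above.
from statistics import correlation  # Pearson correlation, Python 3.10+

# Signal 1: adoption rate among peers in a given context, e.g. the share of
# French-language undergraduate HCI professors who use a particular tool.
adopters, peer_group_size = 12, 60
adoption_rate = adopters / peer_group_size  # 0.2 -> the "20%" scenario

# Signal 2: correlation between time spent on an object and post-test scores
# for a specific learning goal (fabricated sample data for illustration).
minutes_on_object = [5, 30, 45, 10, 60, 25]
post_test_scores  = [40, 70, 85, 50, 90, 65]
usage_score_corr = correlation(minutes_on_object, post_test_scores)

print(f"adoption rate: {adoption_rate:.0%}, usage/score correlation: {usage_score_corr:.2f}")
```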
Of course, much work remains to be done on elaborating and evaluating the finer details of the LearnRank idea, but the principle should be clear from what has been presented above. Now that the technical standards are finally in place to enable the development of an open infrastructure for learning, we can achieve the scale necessary to reach numbers at which the idiosyncrasies of one learner, teacher, or context will no longer skew everybody’s results.
If Content is King, then Context is Queen
Moreover, just as Google and other search engines rely on search terms (and increasingly on context, such as location and past search history!) to constrain the web graph to the portion that is relevant to the user at that moment, learning applications can constrain the search space to only those resources that are relevant to the learner. Most interestingly, in a learning context we can actually go much further than web search engines, because we can exploit much richer metadata about the user, his context, and the learning object:
— Learning objects can be described with much richer (learning object) metadata than is typically the case for arbitrary web pages. Moreover, these metadata can be generated automatically, taking the learning context into account (Cardinaels et al., 2005). As a consequence, we can restrict the user’s search space to truly relevant learning objects, with the appropriate technical characteristics, suitable for his budget, accommodating his learning approach, in a language that he can learn in, etc.
— As learning objects are typically deployed in Learning Management Systems that provide explicit context, LearnRank can also rely on this information.
— Using attention metadata, we can track what a user actually does with a learning object, beyond simply downloading or accessing it (Najjar, 2005). The potential here is huge, as it allows us to create a “usage trail” of a learning object, which will eventually enable us to derive metrics that indicate how well a learning object enables learning in a particular context (a minimal sketch follows this list).
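As a rough indication of what recording such a usage trail might look like, consider the following Python sketch. The action vocabulary and the engagement weights are assumptions for illustration, not the vocabulary defined by Attention.XML itself.

```python
# Hypothetical usage-trail recorder for attention metadata: log what a user
# does with a learning object, then reduce the trail to a simple engagement metric.
import time

class UsageTrail:
    def __init__(self):
        self.events = []  # (timestamp, user, object_id, action)

    def record(self, user: str, object_id: str, action: str):
        """action could be 'open', 'annotate', 'complete-exercise', ... (assumed vocabulary)"""
        self.events.append((time.time(), user, object_id, action))

    def engagement(self, object_id: str) -> float:
        """Illustrative metric: weight deeper interactions more than mere access."""
        weights = {"open": 1.0, "annotate": 3.0, "complete-exercise": 5.0}
        return sum(weights.get(action, 0.0)
                   for _, _, oid, action in self.events if oid == object_id)

trail = UsageTrail()
trail.record("alice", "thermo-law-2", "open")
trail.record("alice", "thermo-law-2", "complete-exercise")
print(trail.engagement("thermo-law-2"))  # 6.0
```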
Now that we have the open standards in place to build a large-scale infrastructure for learning, we can start focusing on quality, much in the same way that web search engines shifted their focus from being exhaustive to providing relevant results. With this paper, I’d like to call for more focus on the development of good LearnRank measures, rather than on indicators that are only indirectly and partially relevant to quality for learning.
1 This is a condensed version of a chapter in the European Quality Handbook, to be published by CEDEFOP in February 2006.
2 We will use the term “learning object” in a very loose sense here, referring to any kind of resource that is useful and relevant for learning.
(Cardinaels et al., 2005) Cardinaels, K., Meire, M., & Duval, E. Automating Metadata Generation: the Simple Indexing Interface. International World Wide Web Conference (WWW 2005), Chiba, Japan, 10-14 May 2005. See also http://ariadne.cs.kuleuven.ac.be/amg/publications.php
(Duval & Hodgins, 2004) Duval, E., & Hodgins, W. Metadata Matters. International Conference on Dublin Core and Metadata Applications, Shanghai, China, 11-14 October 2004. See also http://ariadne.cs.kuleuven.ac.be:8989/mt/blogs/ErikLog/archives/000566.html
(Duval & Hodgins, 2004b) Duval, E., & Hodgins, W. Making Metadata Go Away: Hiding Everything but the Benefits. Keynote address at DC-2004, Shanghai, China, October 2004. See also http://students.washington.edu/jtennis/dcconf/Paper_15.pdf
(Najjar, 2005) Najjar, J., Meire, M., & Duval, E. Attention Metadata Management: Tracking the Use of Learning Objects through Attention.XML. ED-MEDIA 2005 World Conference on Educational Multimedia, Hypermedia and Telecommunications, Montréal, Canada, 27 June - 2 July 2005. See also http://www.cs.kuleuven.ac.be/~najjar/
(Rehak, 2004) Rehak, D. Good&plenty, Googlezon, Your Grandmother and Nike: Challenges for Ubiquitous Learning & Learning Technology. 2004. See also http://www.lsal.cmu.edu/lsal/expertise/papers/conference/pgl2004/googlezon20041001.pdf
(Weibel, 2005) Weibel, S. L. Border Crossings: Reflections on a Decade of Metadata Consensus Building. D-Lib Magazine, 11(7/8), July/August 2005.