Voodoo categorisation and dynamic ontologies in the world of OER

Voodoo masks

Introduction

In a previous post on this blog, I described how we’re planning for search to work in MoodleNet. In this post, I want to dig into tagging and categorisation – which, it turns out, is an unexpectedly philosophical subject. Fundamentally, it comes down to whether you think that subjects such as ‘History’ and ‘Biology’ are real things that exist out there in the world, or whether you think that these are just labels that humans use to make sense of our experiences.

What follows is an attempt to explain why Open Educational Resources (OER) repositories are often under-used, how some forms of categorisation are essentially an attempt at witchcraft, and why assuming user intent can be problematic. Let’s start, however, with everyone’s favourite video streaming service.

Netflix

If you asked me what films and documentaries I like, I’d be able to use broad brushstrokes to paint you a picture. I know what I like and what I don’t like. Despite this, I’ve never intentionally sat down to watch ‘Critically-acclaimed Cerebral Independent Movies’ (Netflix code: 89), nor ‘Understated Social & Cultural Documentaries’ (Netflix code: 2428) nor even ‘Witty Independent Movies based on Books’ (Netflix code: 4913). These overlapping categories belong to a classification system developed by Netflix that now stretches into the tens of thousands of categories.

Netflix screenshot
Screenshot of Netflix user interface

Netflix is popular because the content it provides is constantly updated, but mainly because it gets to know you over time. So instead of presenting the user with a list of 27,000 categories and asking them to choose, Netflix starts by asking new users to pick three movies they like, and then makes recommendations based on what they actually watch.

There aren’t a lot of actions that users can perform in the main Netflix interface: it’s essentially ‘browse’, ‘add to list’ and ‘play’. In addition, users don’t get to categorise what they watch. That categorisation is instead performed by a combination of Netflix’s algorithm and its employees, which together create a personalised recommendation ‘layer’ on top of all of the content available.

In other words, Netflix’s categorisation is done to the user rather than by the user. Netflix may have thousands of categories and update them regularly, but the only way users can influence these is passively through consuming content, rather than actively – for example through tagging. More formally, we might say that Netflix is in complete control of the ontology of its ecosystem.

Voodoo categorisation

In a talk given back in 2005, media theorist Clay Shirky railed against what he called ‘voodoo categorisation’. This, he explained, is an attempt to create a model that perfectly describes the world. The ‘voodoo’ comes when you then try to act on that model and expect things to change that world.

Voodoo dolls
Image CC BY Siaron James

Shirky explains that, when organisations try to force ‘voodoo categorisation’ (or any form of top-down ontology) onto large user bases, two significant problems occur:

  1. Signal loss – this happens when organisations assume that two things are the same (e.g. ‘Bolshevik revolution’ and ‘Russian revolution’) and therefore should be grouped together. After all, they don’t want users to miss out on potentially-relevant content. However, by grouping them together, they are over-estimating the signal loss in the expansion (i.e. by treating them as different) and under-estimating the signal loss in the collapse (i.e. by treating them as the same).
  2. Unstable categories – organisations assume that the categories within their ontology will persist over time. However, if we expand our timescale, every category is unstable. For example, ‘country’ might be seen as a useful category, but it’s been almost thirty years since we’ve recognised East Germany or Yugoslavia.

The ontologies we use to understand the world are coloured by our language, politics, and assumptions. For example, if we are creating a category of every country, do we include Palestine? What about Taiwan? These aren’t neutral choices, and there is not necessarily a ‘correct’ answer now and for all time. As Shirky points out, it follows that someone tagging an item ‘to_read’ is no worse, in any objective way, than someone conforming to a pre-defined categorisation scheme.

This is all well and good theoretically, but let’s bring things back down to earth and talk very practically about MoodleNet. How are we going to ensure that users can find things relevant to what they are teaching? Let’s have a look at OER repositories and the type of categories they use to organise content.

OER repositories

The Open Education Consortium points prospective users of OER to the website of the Community College Consortium for Open Educational Resources, which maintains a list of useful repositories. From that list, I’ve chosen three popular examples, highlighting their categories.


These repositories act a lot like libraries. There are a small number of pre-determined subject areas into which resources can be placed. In many ways, it’s as if there’s limited ‘shelf space’. What would Netflix do in this situation? After all, if they can come up with 27,000 categories for films and TV, how many more would there be for educational resources?

Ultimately, there are at least three problems with OER repositories organised by pre-determined subject areas:

  • Users have to fit in with an imposed ontology
  • Users have to know what they are looking for in advance
  • Users aren’t provided with any context in which the resource may be used

We are trying to rectify these problems in MoodleNet by tying individual motivation together with group value. Teachers look for resources that have been explicitly categorised as relevant to the curriculum they are teaching. Given the chance, great teachers also look for ideas in a wide range of places, some of which may be seen as coming from other disciplines. MoodleNet then allows them to provide the context in which they would use a resource when sharing their findings with the community.

Dynamic ontologies in MoodleNet

Our research has shown that, as you would expect, educators differ in how they approach finding educational resources. While there are those who go straight to the appropriate category and browse from there, there are equally many who prefer a ‘search first and filter later’ approach. We want to accommodate the needs of both.

The solution we are proposing for MoodleNet includes both taxonomy and folksonomy. That is to say, it involves both top-down categorisation and bottom-up tagging. Instead of coming up with a bespoke taxonomy, we are thinking of using UNESCO’s International Standard Classification of Education (ISCED) fields of education and training, which provides a three-level hierarchy complete with relevant codes:

UNESCO ISCED codes
Example of some of the UNESCO ISCED fields of education and training

MoodleNet will require three types of taxonomic data to be added to communities, collections, and user profiles:

  • Subject area (ISCED broad, narrow, or detailed)
  • Grade level (broadly defined – e.g. ‘primary’ or ‘undergraduate’)
  • Language(s)

In addition, users may choose to add folksonomic data (i.e. free-text tagging) to further contextualise communities, collections, and profiles. That would mean a collection of resources might look something like this:

Mockup of what tags could look like in a MoodleNet collection
Mockup of taxonomic and folksonomic tagging system in a MoodleNet collection
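As a rough sketch, a collection carrying both kinds of metadata could be modelled as below. This assumes hypothetical field names, not MoodleNet’s actual schema, and the ISCED code shown is purely illustrative.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of a MoodleNet collection: the three required
# taxonomic fields, plus optional folksonomic (free-text) tags.
# Field names are illustrative assumptions, not the real schema.
@dataclass
class Collection:
    name: str
    isced_field: str    # ISCED broad, narrow, or detailed code
    grade_level: str    # broadly defined, e.g. 'primary' or 'undergraduate'
    languages: List[str]
    tags: List[str] = field(default_factory=list)  # user-supplied free-text tags

russia = Collection(
    name="20th Century Russian History",
    isced_field="0222",          # illustrative detailed ISCED-F code
    grade_level="secondary",
    languages=["en"],
    tags=["lenin", "bolshevik", "revolution"],
)
```

The taxonomic fields give the ‘search first and filter later’ users something structured to filter on, while the free-text tags remain entirely under the user’s control.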

The way Clay Shirky explains this approach is that “the semantics are in the users, not in the system”. In other words, the system doesn’t have to understand that the Bolshevik Revolution is a subset of 20th century Russian history. It just needs to point out that people who often tag things with ‘Lenin’ also tag things with ‘Bolshevik’. It’s up to the teacher to make the professional judgement as to the value of a resource.
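That idea can be approximated with simple co-occurrence counting. The sketch below, using made-up data, shows how a system could relate ‘Lenin’ and ‘Bolshevik’ without understanding either term:

```python
from collections import Counter
from itertools import combinations

# Count how often pairs of tags appear together across resources.
# The example data is invented for illustration.
resources = [
    {"lenin", "bolshevik", "revolution"},
    {"lenin", "bolshevik"},
    {"tsar", "revolution"},
]

co_occurrence = Counter()
for tags in resources:
    for pair in combinations(sorted(tags), 2):
        co_occurrence[pair] += 1

# Tags most often seen alongside 'lenin'
related = {a if b == "lenin" else b: n
           for (a, b), n in co_occurrence.items() if "lenin" in (a, b)}
print(related)  # {'bolshevik': 2, 'revolution': 1}
```

The system never needs to know what a Bolshevik is; the relationship emerges from how users tag.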

We envisage that this combination of taxonomic and folksonomic tagging will lead to a dynamic ontology in MoodleNet, powered by its users. It should allow a range of uses, by different types of educators, who have varying beliefs about the world.

Conclusion

What we’re describing here is not an easy problem to solve. The MoodleNet team does not profess to have fixed issues that have beset those organising educational content for the past few decades. What we do recognise, however, is the power of the web and the value of context. As a result, MoodleNet should be useful to teachers who are looking to find resources directly relevant to the curriculum they are teaching. It should also be useful to those teachers looking to cast the net more widely.

In closing, we are trying to keep MoodleNet as flexible as possible. Just as Moodle Core can be used in a wide variety of situations and pedagogical purposes, so we envisage MoodleNet to be used for equally diverse purposes.

MoodleNet v0.9.1 alpha update


Yesterday, we released MoodleNet v0.9.1 alpha. As we were tweaking the previous version right up to the workshop at the UK & Ireland MoodleMoot last week, this release is primarily bug fixes and small tweaks.

The most noticeable difference is on the home page for logged-in users, where featured collections and featured communities are displayed prominently. Right now, these are hard-coded, but in future both will be under the control of each federated instance’s administrator.

Dark mode continues to be the team’s favourite, but we’ve also tweaked the light mode to be more accessible! We’ve scheduled the next update, v0.9.2 alpha, for Tuesday 7th May.

What we learned by running a workshop at #MootIEUK19

Last week was MoodleMoot UK & Ireland 2019. At the previous year’s MoodleMoot, our presentation on MoodleNet contained only ideas of what we would build. This year, we had an alpha version to put in front of people at a workshop.

The focus of the session was on past, present, and future, with participants having an opportunity to discuss what they like about the MoodleNet vision, and what they’d like to see included in the future roadmap.

The 1.5 hour session on Day 3 of the Moot was structured in the following way:

  • Welcome, intro and overview
  • Affinity grouping
  • Hands-on testing of MoodleNet
  • Discussion around key questions
  • Feedback and next steps
  • CLOSE

During the Affinity grouping activity, and before participants had a chance to register for MoodleNet, they were asked what problems they envisaged MoodleNet solving for them. The emergent groups were around:

  • Online course design
  • Institutional use
  • Learning and teaching
  • Technology
  • UX

During testing, participants were asked to register, complete a basic profile, and join a community to add resources and comments.

After testing, participants discussed a series of questions which are included with full details of the workshop on this wiki page. Over and above the things on our roadmap, the main things we learned (or found interesting) were:

  1. Confusion between ‘communities’ and ‘collections’
  2. Flagging duplicate content
  3. Per-community hierarchical tags
  4. Grouping of several communities
  5. Reward and recognition for users

It was a very useful session, and the 1.5 hours went by very quickly. We’d like to thank everyone who participated; your rare badge will be on its way soon!

MoodleNet v0.9 alpha update

After some extended testing, we’ve just released MoodleNet v0.9 alpha in preparation for the workshop at this week’s UK & Ireland MoodleMoot.

Functionality added

  • Dark mode
  • User’s latest activities on profiles
  • Timeline (latest activities of user’s followed communities and collections) on home page

UI tweaks

  • New UI (without sidebar)
  • Moodle brand and colour scheme
  • Link to code of conduct from user menu

Bug fixes

  • Various small bug fixes

In addition to the above, we’re still working on search, federation (the ability to have separate instances of MoodleNet that can communicate with one another), and of course Moodle Core integration.

We’ve almost completed our Data Protection Impact Assessment (DPIA) with Moodle’s Privacy Officer and external Data Protection Officer (DPO). We’ll be sharing that with the community for feedback.

The first release that features federation will be called ‘beta’, so the next scheduled release will be MoodleNet v0.9.1 alpha.

Making search a delightful experience in MoodleNet

MoodleNet is a new open social media platform for educators, focussed on professional development and open content. It is an integral part of the Moodle ecosystem and the wider landscape of Open Educational Resources (OERs). The purpose of this post is to explain how our approach to search will help with this.

Our research shows that educators discover resources in two key ways, which we’re bringing together with MoodleNet.

Proactive/Reactive

In order to be proactive and search for something specific, you have to know what you are looking for. That’s why it’s common for educators to also be reactive, discovering resources and other useful information as a result of their social and professional networks.

From its inception, we’ve designed MoodleNet as a place that works like the web. In other words, it harnesses the collective power of networks while at the same time allowing the intimacy of human relationships. However, search tends to be a transactional experience. How do we make it more ‘social’?

Seung (persona)

At this point, let’s re-introduce Seung, the 26 year-old Learning Technologist from Australia who we first met in a white paper from early 2018. She’s looking to help her colleagues use Moodle more effectively, and to connect with other Learning Technologists to discover promising practices.

Seung comes across many potentially-useful resources on her travels around the web, which she curates using services such as Pocket, Evernote, and the ‘favourite/like’ functionality on social networks such as Twitter and Facebook. When Seung uses MoodleNet, she joins relevant communities, follows interesting collections and people, and ‘likes’ resources that either she or her colleagues could use.

One of the problems Seung has is re-discovering resources that she’s previously found. Although she considers herself an advanced user of search engines such as Google and DuckDuckGo, Seung is sometimes frustrated that it can take a while to unearth a resource that she had meant to come back to later.

MoodleNet search overview (Bryan Mathers)

MoodleNet’s powerful search functionality will allow Seung to both find interesting communities, collections, and profiles, and quickly rediscover resources on MoodleNet that she has marked as potentially useful. In addition, because MoodleNet is focused on open content, Seung can extend her search to OER repositories and the open web.

The same search functionality will be available through a Moodle Core plugin that allows any user, whether or not they have an account on MoodleNet, to search for resources they would like to pull into their Moodle course. This plugin will also automatically add metadata about the original source location, the MoodleNet collection of which it was part, as well as any licensing information.

We’ve already started conversations with Europeana and Creative Commons about allowing MoodleNet users to directly search the resources they both index. We would also like to explore relationships with other OER repositories who would welcome MoodleNet communities curating and using their openly-licensed resources.  

In closing, we should mention that we have big plans for tags across MoodleNet, involving both taxonomic and folksonomic tagging, and provided by both users and machine learning. More details on that soon.

For now, the MoodleNet team would be interested in any questions or suggestions you have about this approach to search. What do you think? What else would you like to see?


Illustrations CC BY-ND Bryan Mathers

MoodleNet and the European Copyright Directive

European flag

On Tuesday, the European Parliament gave final approval to the Copyright Directive, a controversial piece of legislation affecting online services that either link to news articles or allow uploads.

During the process of this legislation coming into law, the MoodleNet team has been asked about its potential impact on what we are building. Developments are ongoing even now, and the Directive still has to be passed into law by the individual European member states.

As a result, we have decided to keep a wiki page up-to-date about what the Copyright Directive may mean for MoodleNet. You can access this on the Moodle wiki.

MoodleNet v0.7 alpha update

MoodleNet v0.7 alpha login page

A couple of days ago the team deployed MoodleNet v0.7 alpha, which includes the following new functionality, UI tweaks, and bug fixes.

Functionality added

  • Timeline views
  • User profile pages

UI tweaks

  • New discussions view
  • Improved login page

Bug fixes

  • Fixed ‘a few seconds ago’ bug
  • Various small bug fixes

Note that the timeline views aren’t exactly as we want them, so we’re tweaking them over the next week or so.

In addition, we’re working on our Data Protection Impact Assessment (DPIA) with Moodle’s Privacy Officer and external Data Protection Officer (DPO). That will be finalised before we launch the first beta next month.

We’re currently working on:

  1. Federation — the ability to have separate instances of MoodleNet that can communicate with one another.
  2. Moodle Core integration — the ability to add a resource from MoodleNet to a Moodle course.
  3. Refactoring and code clean-up — ensuring MoodleNet runs as quickly, efficiently, and bug-free as possible!

The next release, v0.9 alpha, is scheduled for week beginning 8th April 2019.

What we learned from testing MoodleNet’s value proposition

woman-looking-up

We’ve recently finished testing MoodleNet’s value proposition with two cohorts of users, in both English and Spanish. During each three-week testing period, we sent one survey per week. In this post, we’d like to share some of the insights we’ve gleaned.

It’s important to note the following:

  1. We built the smallest possible version of MoodleNet in an attempt to answer the question, “Do educators want to join communities to curate collections of resources?”
  2. During the testing process, we didn’t discuss future functionality in the user interface or in the emails we sent users. We did, however, discuss the roadmap in a tool called Changemap which we’re using to collect and discuss feedback and feature requests.
  3. One of the key features of MoodleNet will be federation (i.e. the ability to have separate instances of MoodleNet that can communicate with one another). This will change the user experience and utility of MoodleNet in significant ways.

The survey data we’ve collected suggests that MoodleNet is indeed something that can sustainably empower communities of educators to share and learn from each other to improve the quality of education.

What follows are three things that we’ve learned from the testing process.

1. We’ve validated the value proposition


A couple of days after giving each cohort of testers access to MoodleNet, we asked them, “Do you see yourself using something like MoodleNet to curate collections of resources?”. The functionality, especially during that first week for the initial cohort, was extremely basic, and the experience sometimes buggy.

Despite this, by the time the second cohort filled in their first survey, it was clear that almost two-thirds of testers agreed that, yes, MoodleNet would be something that they would use.

2. The best tagline for MoodleNet: ‘Share. Curate. Discuss’


During the testing period we learned that creating taglines that are translatable and impactful in different languages is no easy feat. In fact, many companies and brands simply use English taglines, such as Nike’s ‘Just Do It’. We’ve decided to go ahead and use ‘Share. Curate. Discuss’ for the moment as the tagline for MoodleNet (including on the Spanish version of MoodleNet).

3. Testers are clear on what they want to see next

Through free text boxes in surveys, and from the information coming in via Changemap, it’s clear that users want to be able to:

  1. Search for specific keywords and topics of interest.
  2. Easily find out when something has changed within a community they’ve joined, or a collection they’re following.
  3. Sort lists of communities and collections by more than ‘most recent’ (e.g. by number of collections or discussion threads).
  4. Tag communities, collections, and profiles, to make it easier to find related content.
  5. Upload resources to MoodleNet instead of just adding them via URL.
  6. Indicate ‘resource type’ (e.g. ‘course’, ‘presentation’ or ‘plugin’).
  7. Send resources they discover on MoodleNet to their Moodle Core instance.
  8. Add copyright information to resources and collections.
  9. Easily rediscover useful resources they’ve found in collections they’re not following.
  10. Access MoodleNet on their mobile devices.

Happily, we’ve already got MoodleNet working on mobile devices, although we’re still having some issues with Safari on both iOS and MacOS. We’re also launching ‘timeline views’ for communities and collections this week which will allow users to see what’s changed since they’ve been away.

As for the rest of the suggestions, we’re working on them! The most user-friendly way to see progress is via Changemap at: https://changemap.co/moodle/moodlenet

Conclusion

When developing software products, it’s easy to come up with a plan and start working on it without validating what you’re doing with users. We’ve still got a way to go before MoodleNet is exactly what community participants want from it, but we feel that in this initial testing period we’ve got a mandate to keep on iterating.

A big thank you to our two cohorts of testers, who have provided invaluable feedback. They still have access to MoodleNet beyond the testing period. We’ll be inviting more people to join at next month’s UK & Ireland MoodleMoot in Manchester, so why not join us there?

What we talk about when we talk about ‘rating systems’

stars

Context

I’m MoodleNet Lead and, since the project’s inception, I’ve had lots of conversations with many different people. Once they’ve grasped that MoodleNet is a federated resource-centric social network for educators, some of them ask a variation of this question: Oh, I assume you’ll be using a star rating system to ensure quality content?

They are often surprised when I explain that no, that’s not the plan at all. I haven’t written down why I’m opposed to star rating systems for educational content, so what follows should hopefully serve as a reference I can point people towards next time the issue crops up!

However, this is not meant as my last word on the subject, but rather a conversation-starter. What do you think about the approach I outline below?

Introduction

Wikipedia defines a rating system as “any kind of rating applied to a certain application domain”. Examples include:

  • Motion Picture Association of America (MPAA) film rating system
  • Star rating
  • Rating system of the Royal Navy

A rating system therefore explains how relevant something is in a particular context.

Ratings in context

Let’s take the example of film ratings. Thanks to the MPAA film rating system, parents can decide whether to allow their child to watch a particular film. Standardised criteria (e.g. drugs / sex / violence) are applied to a film, which is then given a rating such as G (General Audiences), PG (Parental Guidance), or R (Restricted). These ratings are reviewed on a regular basis, sometimes leading to the introduction of new categories (e.g. PG-13).

Despite the MPAA film rating system, many parents seek additional guidance in this area – for example, websites such as Common Sense Media which further contextualise the film.

Common Sense Media screenshot
Screenshot showing the film ‘How to Train Your Dragon’ on the Common Sense Media website

In other words, the MPAA rating system isn’t enough. Parents also take into account what their child is like, what other parents do, and the recommendations of sites they trust such as Common Sense Media.

Three types of rating systems

As evident in the screenshot above, Common Sense Media includes many data points to help parents make a judgement as to whether they will allow their child to watch a film.

With MoodleNet, we want to help educators find high-quality, relevant resources for use in their particular context. Solving this problem is a subset of the perennial problem around the conservation of attention.

Educational resources triangle
Project management triangle, adapted for educational resources

In other words, we want to provide the shortest path to the best resources. Using an adapted project management triangle, educators usually have to make do with two of the three of time, cost, and quality. That is to say, they can minimise the time and cost of looking for resources, but this is likely to come at the expense of the relevance of the resources they discover (relevance being a proxy for quality).

Likewise, if educators want to minimise time and maximise the quality of resources, that will cost them more. Finally, if they want to minimise cost and maximise quality, they will have to spend a lot more time finding resources.

The ‘holy grail’ would be a system that minimises time and cost at the same time as delivering quality education resources. With MoodleNet, we are attempting to do that in part by providing a system that is part searchable resource repository, and part discovery-based social network.

Proactive/Reactive
Diagram by Bryan Mathers showing MoodleNet as both a place where educators can search for and discover educational resources

Simply providing a place for educators to search and discover resources is not enough, however. We need something more granular than a mashup of a search engine and status updates.

What kinds of rating systems are used on the web?

There are many kinds of rating systems used on the web, from informal approaches using emoji, through to formal approaches using very strict rubrics. What we need with MoodleNet is something that allows for some flexibility, an approach that assumes some context.

With that in mind, let’s consider three different kinds of rating systems:

  1. Star rating systems
  2. Best answer systems
  3. Like-based systems

1. Star rating systems

One of the indicators in the previous example of the Common Sense Media website is a five-star rating system. This is a commonly-used approach, with perhaps the best-known example being Amazon product reviews. Here is an example:

Amazon page for Google Pixelbook
Amazon product page for a Google Pixelbook showing an average of 3.5 stars out of five from 12 customer reviews

Should I buy this laptop? I have the opinion of 12 customers, with a rating of three-and-a-half stars out of five, but I’m not sure. Let’s look at the reviews. Here’s the top one, marked as ‘helpful’ by nine people:

One-star review for Google Pixelbook
One-star review from a customer complaining about faulty Google Pixelbook

So this reviewer left a one-star review after being sent a faulty unit by a third-party seller. That, of course, is a statement about the seller, not the product.

Meanwhile:

Five-star review for Google Pixelbook
Five-star review from a customer pleased with their Google Pixelbook

Averaging the rating of these two reviews obviously does not make sense, as they are not rating the same thing. The first reviewer is using the star rating system to complain, and the second reviewer seems to like the product, but we have no context. Is this their first ever laptop? What are they using it for?
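The underlying problem is that a single average collapses very different distributions into one number. A couple of lines of arithmetic, with purely illustrative figures, make the point:

```python
# Two hypothetical products with identical averages but very different
# rating distributions: one is consistently mediocre, one is polarising.
mediocre = [3, 3, 3, 3, 3, 3]
polarising = [5, 5, 5, 1, 1, 1]

def average(ratings):
    return sum(ratings) / len(ratings)

print(average(mediocre), average(polarising))  # 3.0 3.0
```

Both show ‘3 stars’, yet they tell completely different stories.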

Star rating systems are problematic as they are blunt instruments that attempt to boil down many different factors into a single, objective ‘rating’. They are also too easily gamed through methods such as ‘astroturfing’. This is when individuals or organisations with a vested interest arrange for very positive or very negative reviews to be left about particular products, services, and resources.

From the Wikipedia article on the subject:

Data mining expert Bing Liu (University of Illinois) estimated that one-third of all consumer reviews on the Internet are fake. According to The New York Times, this has made it hard to tell the difference between “popular sentiment” and “manufactured public opinion.”

As a result, implementing a star rating system in MoodleNet, a global network for educators, would be fraught with difficulties. It assumes an objective, explicit context when no such context exists.

2. Best answer approach

This approach allows a community of people with similar interests to ask questions, receive answers, and vote on both. This format is common to sites such as Stack Overflow and Reddit.

Stackoverflow question and answer
Screenshot of a question with answers on Stack Overflow

Some of these question and answer pages on Stack Overflow become quite lengthy, with nested comments. In addition, some responders disagree with one another. As a result, and to save other people time, the original poster of the question can indicate that a particular answer solved their problem. This is then highlighted.

The ‘best answer’ approach works very well for knotty problems that require clarification and/or some collaborative thinking-through. The result can then be easily searched and parsed by someone who comes along later with the same problem. I can imagine this working well within MoodleNet community discussion forums (as it already does on the moodle.org forums).

When dealing with educational resources, however, there is often no objective ‘best answer’. There are things that work in a particular context, and things that don’t. Given how different classrooms can be even within the same institution, this is not something that can be easily solved by a ‘best answer’ approach.

3. Like-based systems

Sometimes simple mechanisms can be very powerful. The ‘like’ button has conquered social networks, with the best-known example being Facebook’s implementation.

Facebook Like button
Example of a Facebook ‘like’ button

I don’t use Facebook products on principle, and haven’t done since 2011, so let’s look at other implementations.

YouTube

Social networks are full of user-generated content. Take YouTube, for example, where 400 hours of video is uploaded every single minute. How can anyone possibly find anything of value with such a deluge of information?

YouTube search
YouTube search results for ‘bolshevik revolution’ sorted by relevance

In the above screenshot, you can see a search for one of my favourite topics, The Bolshevik Revolution. YouTube does a good job of surfacing ‘relevant’ content and I can also choose to sort my results by ‘rating’.

Here is the top video from the search result:

Annotated YouTube video
YouTube video with upvote and downvote functionality highlighted

I don’t have time to watch every video that might be relevant, so I need a shortcut. YouTube gives me statistics about how many people have viewed this video and how many people subscribe to this user’s channel. I can also see when the video was published. All of this is useful information.

The metric I’m most interested in, however, and which seems to make the biggest impact in terms of YouTube’s algorithm, is the number of upvotes the video has received compared to the number of downvotes. In this example, the video has received 16,000 upvotes and 634 downvotes, meaning that over 95% of people who have expressed an opinion in this way have been positive.
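As a sketch, the approval ratio for the example above takes a couple of lines to compute. A common refinement for ranking (not necessarily what YouTube itself uses) is the Wilson score lower bound, which discounts items with only a handful of votes:

```python
import math

# Approval ratio for the video in the example above.
up, down = 16_000, 634
ratio = up / (up + down)
print(f"{ratio:.1%}")  # 96.2%

def wilson_lower_bound(up, down, z=1.96):
    """Lower bound of the Wilson score interval at ~95% confidence.

    Penalises small sample sizes, so a handful of upvotes does not
    outrank thousands of votes at a similar ratio.
    """
    n = up + down
    if n == 0:
        return 0.0
    p = up / n
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)
```

For instance, three upvotes with no downvotes scores lower than 950 upvotes against 50 downvotes, which matches the intuition that a large sample is more trustworthy.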

If I want more information, I can dive into the comments section, but I can already see that this video is likely to be of use to me. I would add it to a shortlist of three to five videos on the topic that I’d watch to discover the one that’s best for my context.

Twitter

Going one stage further, some social networks like Twitter simply offer the ability for users to ‘like’ something. A full explanation of the ‘retweet’ or ‘boost’ functionality of social networks is outside of the scope of this post, but that too serves as an indicator:

Tweet from UN Education Report
Tweet from UN Education Report showing retweets and likes

This tweet from the UN about their Global Education Monitoring report has been liked 72 times. We don’t know the context of the people who have ‘liked’ this, but we can see that it’s popular. So, if I were searching for something about migrant education, I’d be sure to check out this report.

Although neither YouTube nor Twitter makes this clear, their algorithms take ‘likes’ and ‘upvotes’ into account within the context of who you are connected to. So, for example, if a video has a lot of upvotes on YouTube and you’re subscribed to that channel, you’re likely to be recommended that video. Similarly, on Twitter, if a tweet has a lot of likes and many of those likes come from people you follow, then the tweet is likely to be recommended to you.
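As a hedged illustration of this weighting idea (the real YouTube and Twitter algorithms are proprietary and far more complex), likes from accounts you follow could simply count for more than likes from strangers:

```python
def weighted_score(likers: list, following: set, follow_weight: float = 3.0) -> float:
    """Score a post: each like counts once, but likes from followed accounts count extra."""
    return sum(follow_weight if liker in following else 1.0 for liker in likers)

# Two likers I follow, two I don't: 3 + 3 + 1 + 1
likers = ["un_education", "historybuff", "stranger1", "stranger2"]
print(weighted_score(likers, following={"un_education", "historybuff"}))  # → 8.0
```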

Twitter user explaining likes are bookmarks, not endorsements
Twitter user account with bio that includes “Likes are usually bookmarks, not endorsements”

Interestingly, many Twitter users use the limited space in their bios to point out explicitly that their ‘likes’ are not endorsements, but used to bookmark things to which they’d like to return. In the past year, Twitter has begun to roll out bookmarks functionality, but it is a two-step process and not widely used.

So likes act as both votes and a form of bookmarking. It’s a neat, elegant, and widely-used indicator.

What does this mean for MoodleNet?

So far, we have discovered that:

  • The ‘quality’ of a resource depends upon its (perceived) relevance
  • Relevant resources depend upon a user’s context
  • We cannot know everything about a user’s context

MoodleNet will implement a system of both taxonomic and folksonomic tagging. Taxonomic tags will include controlled tags relating to (i) language, (ii) broad subject area, and (iii) grade level(s). Folksonomic tags will be open for anyone to enter, and will autocomplete to help prevent typos. We are considering adding suggested tags via machine learning, too.

In addition to this, and based on what we’ve learned from the three rating systems above, MoodleNet users will soon be able to ‘like’ resources within collections.

Potential future location of 'like' button in MoodleNet
Screenshot of a MoodleNet collection with arrow indicating potential future location of ‘like’ button

By adding a ‘like’ button to resources within MoodleNet collections, we potentially solve a number of problems. This is particularly true if we indicate the number of times that resource has been liked by community members.

  1. Context – every collection is within a community, increasing the amount of context we have for each ‘like’.
  2. Bookmarking – ‘liking’ a resource within a collection will add it to a list of resources a user has liked across collections and communities.
  3. Popularity contest – collections are limited to 10 resources so, if we also indicate when a resource was added, we can see whether or not it should be replaced.

As discussions can happen both at the community and collection level, users can discuss collections and use the number of likes as an indicator.
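The dual role described above can be sketched in a few lines (illustrative only, not MoodleNet's implementation): a single ‘like’ action both increments the resource’s count and adds the resource to the user’s personal list of liked resources across collections and communities.

```python
from collections import defaultdict

class Likes:
    """A 'like' is simultaneously a vote on a resource and a bookmark for the user."""

    def __init__(self):
        self.counts = defaultdict(int)     # resource -> number of likes
        self.bookmarks = defaultdict(set)  # user -> set of liked resources

    def like(self, user: str, resource: str) -> None:
        # One like per user per resource; duplicates are ignored
        if resource not in self.bookmarks[user]:
            self.counts[resource] += 1
            self.bookmarks[user].add(resource)

likes = Likes()
likes.like("alice", "ocean-currents.pdf")
likes.like("bob", "ocean-currents.pdf")
likes.like("alice", "ocean-currents.pdf")  # duplicate, ignored
print(likes.counts["ocean-currents.pdf"])  # → 2
```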

Conclusion

Sometimes the best solutions are the simplest ones, and the ones that people are used to using. In our context, that looks like a simple ‘like’ button next to resources in the context of a collection within a community.

We’re going to test out this approach, and see what kind of behaviours emerge as a result. The plan is to iterate based on the feedback we receive and, of course, continue to tweak the user interface of MoodleNet as it grows!


What are your thoughts on this? Have you seen something that works well that we could use alongside, or instead of, the above?

MoodleNet 0.5 alpha update

MoodleNet responsive view

This week, we are releasing MoodleNet v0.5 alpha, which includes one of our most-requested features: a mobile web view! We’ve also implemented a bunch of UI tweaks and bug fixes.

Note that, after testing with BrowserStack, pretty much every combination of mobile device and web browser works, with the exception of Apple’s Safari and Microsoft’s Edge. Unfortunately, this is due to issues with those browsers’ support for web standards.

For the moment we suggest that the community use other, more standards-compliant browsers to access MoodleNet. Some excellent choices include Opera, Mozilla Firefox and Google Chrome.

We didn’t manage to sneak in an ‘activity’ view for this release, but we’re working on it this week. This will allow you to see everything that’s happened within a community recently (e.g. new user/resource/collection added, new discussion thread).