Back in March of this year, we published a post entitled What we talk about when we talk about rating systems. While the fundamental approach that we outlined in that post remains unchanged, we’re tweaking the implementation of it for the beta launch in November.
We’re all familiar with the ability to ‘reshare’ and ‘like’ content on social networks. On Twitter it’s called ‘retweeting’ and ‘liking’, while on Mastodon (see screenshot below) it’s ‘boosting’ and ‘favouriting’.
This approach is unproblematic for status updates, but of course with MoodleNet we’re also dealing with resources. As a result, to make things simple, the approach we outlined in our previous post was that there would be a single option (‘likes’) on resources.
This would serve not only to allow a user to easily re-find something they’d liked via their profile, but it would be a vote for the resource within the collection. Simple, straightforward, and effective!
The problem with this, as pointed out by Mayel (our Technical Architect), is that it isn’t in line with Fediverse conventions. We want to be interoperable with other federated social networks, and on those networks ‘likes’ don’t show up in feeds, but ‘boosts’ do.
This led us down something of a rabbit hole, as there are several ways we could fix this. One thing to bear in mind is that both ‘likes’ on comments and ‘likes’ on resources should show up in the relevant section of a user’s profile.
As MDLNET-372 outlines, we initially considered three options:
1. Users have the option to ‘boost’ or ‘like’ comments, but only ‘like’ resources (which, behind the scenes, is actually ‘like+boost’). Likes show up in the timeline on user profiles.
2. Users can ‘applaud’ (like+boost) both comments and resources. Applause shows up in the timeline on user profiles.
3. Users can ‘boost’ or ‘like’ comments, and ‘applaud’ (boost+like) resources. Both applause and likes show up in the timeline on user profiles.
Eventually, however, we rejected all of these options, because each would force users to both like and boost something at once, rather than perform these actions separately.
Finally, there’s terminology to consider here as well. While Mastodon and Pleroma use stars and call them ‘favourites’, Twitter and Pixelfed use hearts and call them ‘likes’. Given that we want to use stars rather than hearts, we may as well stick with convention and call them ‘favourites’!
The solution to all of this doesn’t sound groundbreaking, but still involved a bit of thought: MoodleNet users will be able to ‘favourite’ and/or ‘boost’ both comments and resources. Favouriting something means it ends up on your profile, while a boost is both a way for users to give a thumbs-up to a resource and share it with their followers.
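In Fediverse terms, these two actions correspond to the standard ActivityStreams activity types: a ‘favourite’ federates as a `Like`, while a ‘boost’ federates as an `Announce`. Here is a minimal, purely illustrative sketch of that mapping; the function names and structure are ours for demonstration, not MoodleNet’s actual implementation.

```python
def make_activity(activity_type, actor, object_id):
    """Build a minimal ActivityStreams 2.0 activity as a dict."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": activity_type,
        "actor": actor,
        "object": object_id,
    }

def favourite(actor, object_id):
    # A favourite federates as a 'Like': it is recorded on the
    # actor's profile, but not pushed into followers' feeds.
    return make_activity("Like", actor, object_id)

def boost(actor, object_id):
    # A boost federates as an 'Announce': it is shared with the
    # actor's followers, so it shows up in their feeds.
    return make_activity("Announce", actor, object_id)
```

Keeping the two activities separate is exactly what allows the behaviour described above: other federated servers already know how to display an `Announce` in feeds and a `Like` on a profile.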
MoodleNet is a new open social media platform for educators, focussed on professional development and open content. It is an integral part of the Moodle ecosystem and the wider landscape of Open Educational Resources (OERs). The purpose of this post is to explain how our approach to search will help with this.
Our research shows that educators discover resources in two key ways, which we’re bringing together with MoodleNet.
In order to be proactive and search for something specific, you have to know what you are looking for. That’s why it’s common for educators to also be reactive, discovering resources and other useful information as a result of their social and professional networks.
From its inception, we’ve designed MoodleNet as a place that works like the web. In other words, it harnesses the collective power of networks while at the same time allowing the intimacy of human relationships. However, search tends to be a transactional experience. How do we make it more ‘social’?
At this point, let’s re-introduce Seung, the 26-year-old Learning Technologist from Australia whom we first met in a white paper from early 2018. She’s looking to help her colleagues use Moodle more effectively, and to connect with other Learning Technologists to discover promising practices.
Seung comes across many potentially-useful resources on her travels around the web, which she curates using services such as Pocket, Evernote, and the ‘favourite/like’ functionality on social networks such as Twitter and Facebook. When Seung uses MoodleNet, she joins relevant communities, follows interesting collections and people, and ‘likes’ resources that either she or her colleagues could use.
One of the problems Seung has is re-discovering resources that she’s previously found. Although she considers herself an advanced user of search engines such as Google and DuckDuckGo, Seung is sometimes frustrated that it can take a while to unearth a resource that she had meant to come back to later.
MoodleNet’s powerful search functionality will allow Seung to both find interesting communities, collections, and profiles, and quickly rediscover resources on MoodleNet that she has marked as potentially-useful. In addition, because MoodleNet is focused on open content, Seung can extend her search to OER repositories and the open web.
The same search functionality will be available through a Moodle Core plugin that allows any user, whether or not they have an account on MoodleNet, to search for resources they would like to pull into their Moodle course. This plugin will also automatically add metadata about the original source location, the MoodleNet collection of which it was part, as well as any licensing information.
We’ve already started conversations with Europeana and Creative Commons about allowing MoodleNet users to directly search the resources they both index. We would also like to explore relationships with other OER repositories who would welcome MoodleNet communities curating and using their openly-licensed resources.
In closing, we should mention that we have big plans for tags across MoodleNet, involving both taxonomic and folksonomic tagging, and provided by both users and machine learning. More details on that soon.
For now, the MoodleNet team would be interested in any questions or suggestions you have about this approach to search. What do you think? What else would you like to see?
I’m MoodleNet Lead and, since the project’s inception, I’ve had lots of conversations with many different people. Once they’ve grasped that MoodleNet is a federated resource-centric social network for educators, some of them ask a variation of this question: Oh, I assume you’ll be using a star rating system to ensure quality content?
They are often surprised when I explain that no, that’s not the plan at all. I haven’t written down why I’m opposed to star rating systems for educational content, so what follows should hopefully serve as a reference I can point people towards next time the issue crops up!
However, this is not meant as my last word on the subject, but rather a conversation-starter. What do you think about the approach I outline below?
Wikipedia defines a rating system as “any kind of rating applied to a certain application domain”. Examples include:
Motion Picture Association of America (MPAA) film rating system
Rating system of the Royal Navy
A rating system therefore explains how relevant something is in a particular context.
Ratings in context
Let’s take the example of film ratings. Thanks to the MPAA film rating system, parents can decide whether to allow their child to watch a particular film. Standardised criteria (e.g. drugs / sex / violence) are applied to a film which is then given a rating such as G (General Audiences), PG (Parental Guidance), and R (Restricted). These ratings are reviewed on a regular basis, sometimes leading to the introduction of new categories (e.g. PG-13).
Despite the MPAA film rating system, many parents seek additional guidance in this area – for example, websites such as Common Sense Media which further contextualise the film.
In other words, the MPAA rating system isn’t enough. Parents also take into account what their child is like, what other parents do, and the recommendations of sites they trust such as Common Sense Media.
Three types of rating systems
As evident in the screenshot above, Common Sense Media includes many data points to help parents make a judgement as to whether they will allow their child to watch a film.
With MoodleNet, we want to help educators find high-quality, relevant resources for use in their particular context. Solving this problem is one instance of the perennial problem of conserving attention.
In other words, we want to provide the shortest path to the best resources. Borrowing from the project management triangle, educators usually have to make do with two of the three of time, cost, and quality. That is to say, they can minimise the time and cost of looking for resources, but this is likely to reduce the relevance of the resources they discover (which is a proxy for quality).
Likewise, if educators want to minimise the time and maximise the quality of resources, that will cost them more. Finally, if they want to minimise the cost and maximise the quality, they will have to spend a lot more time finding resources.
The ‘holy grail’ would be a system that minimises time and cost at the same time as delivering quality education resources. With MoodleNet, we are attempting to do that in part by providing a system that is part searchable resource repository, and part discovery-based social network.
Simply providing a place for educators to search and discover resources is not enough, however. We need something more granular than a mashup of a search engine and status updates.
What kinds of rating systems are used on the web?
There are many kinds of rating systems used on the web, from informal approaches using emoji, through to formal approaches using very strict rubrics. What we need with MoodleNet is something that allows for some flexibility, an approach that assumes some context.
With that in mind, let’s consider three different kinds of rating systems:
Star rating systems
Best answer systems
Like-based systems
1. Star rating systems
One of the indicators in the previous example of the Common Sense Media website is a five-star rating system. This is a commonly-used approach, with perhaps the best-known example being Amazon product reviews. Here is an example:
Should I buy this laptop? I have the opinions of 12 customers, with a rating of three-and-a-half stars out of five, but I’m not sure. Let’s look at the reviews. Here’s the top one, marked as ‘helpful’ by nine people:
So this reviewer left a one-star review after being sent a faulty unit by a third-party seller. That, of course, is a statement about the seller, not the product.
Averaging the rating of these two reviews obviously does not make sense, as they are not rating the same thing. The first reviewer is using the star rating system to complain, and the second reviewer seems to like the product, but we have no context. Is this their first ever laptop? What are they using it for?
Star rating systems are problematic as they are blunt instruments that attempt to boil down many different factors to a single, objective ‘rating’. They are also too easily gamed through methods such as ‘astroturfing’: individuals or organisations with a vested interest arrange for very positive or very negative reviews to be left about particular products, services, and resources.
Data mining expert Bing Liu (University of Illinois) estimated that one-third of all consumer reviews on the Internet are fake. According to The New York Times, this has made it hard to tell the difference between “popular sentiment” and “manufactured public opinion.”
As a result, implementing a star rating system in MoodleNet, a global network for educators, would be fraught with difficulties. It assumes an objective, explicit context when no such context exists.
2. Best answer approach
This approach allows a community of people with similar interests to ask questions, receive answers, and vote on both. The format is common to sites such as Stack Overflow and Reddit.
Some of these question and answer pages on Stack Overflow become quite lengthy, with nested comments. In addition, some responders disagree with one another. As a result, and to save other people time, the original poster of the question can indicate that a particular answer solved their problem. This is then highlighted.
The ‘best answer’ approach works very well for knotty problems that require clarification and/or some collaborative thinking-through. The result can then be easily searched and parsed by someone coming later with the same problem. I can imagine this would work well within MoodleNet community discussion forums (as it already does on the moodle.org forums).
When dealing with educational resources, however, there is often no objective ‘best answer’. There are things that work in a particular context, and things that don’t. Given how different classrooms can be even within the same institution, this is not something that can be easily solved by a ‘best answer’ approach.
3. Like-based systems
Sometimes simple mechanisms can be very powerful. The ‘like’ button has conquered social networks, with the best-known example being Facebook’s implementation.
I don’t use Facebook products on principle, and haven’t done since 2011, so let’s look at other implementations.
Social networks are full of user-generated content. Take YouTube, for example, where 400 hours of video is uploaded every single minute. How can anyone possibly find anything of value with such a deluge of information?
In the above screenshot, you can see a search for one of my favourite topics, The Bolshevik Revolution. YouTube does a good job of surfacing ‘relevant’ content and I can also choose to sort my results by ‘rating’.
Here is the top video from the search result:
I don’t have time to watch every video that might be relevant, so I need a shortcut. YouTube gives me statistics about how many people have viewed this video and how many people subscribe to this user’s channel. I can also see when the video was published. All of this is useful information.
The metric I’m most interested in, however, and which seems to make the biggest impact in terms of YouTube’s algorithm, is the number of upvotes the video has received compared to the number of downvotes. In this example, the video has received 16,000 upvotes and 634 downvotes, meaning that over 95% of people who have expressed an opinion in this way have been positive.
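That percentage is a simple ratio of upvotes to total votes cast. A quick sketch of the calculation (the function name is ours, for illustration):

```python
def approval_ratio(upvotes, downvotes):
    """Share of voters who voted up, among those who expressed an opinion."""
    total = upvotes + downvotes
    return upvotes / total if total else 0.0

# The example from the video above:
ratio = approval_ratio(16_000, 634)  # 16000 / 16634, i.e. roughly 0.96
```

Note that this ratio only counts people who voted at all; the vast majority of viewers express no opinion either way, which is one more reason to treat it as a shortcut rather than an objective measure of quality.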
If I want more information, I can dive into the comments section, but I can already see that this video is likely to be something that may be of use to me. I would add this to a shortlist of three to five videos on the topic that I’d watch to discover the one that’s best for my context.
Going one stage further, some social networks like Twitter simply offer the ability for users to ‘like’ something. A full explanation of the ‘retweet’ or ‘boost’ functionality of social networks is outside of the scope of this post, but that too serves as an indicator:
This tweet from the UN about their Global Education Monitoring report has been liked 72 times. We don’t know the context of the people who ‘liked’ it, but we can see that it’s popular. So, if I were searching for something about migrant education, I’d be sure to check out this report.
Although neither YouTube nor Twitter makes this clear, their algorithms take ‘likes’ and ‘upvotes’ into account within the context of who you are connected to. For example, if a video has a lot of upvotes on YouTube and you’re subscribed to that channel, you’re likely to be recommended that video. Similarly, on Twitter, if a tweet has a lot of likes and many of those likes come from people you follow, the tweet is likely to be recommended to you.
Interestingly, many Twitter users use the limited space in their bios to point out explicitly that their ‘likes’ are not endorsements, but used to bookmark things to which they’d like to return. In the past year, Twitter has begun to roll out bookmarks functionality, but it is a two-step process and not widely used.
So likes act as both votes and a form of bookmarking system. It’s a neat, elegant, and widely-used indicator.
What does this mean for MoodleNet?
So far, we have discovered that:
The ‘quality’ of a resource depends upon its (perceived) relevance
Relevant resources depend upon a user’s context
We cannot know everything about a user’s context
MoodleNet will implement a system of both taxonomic and folksonomic tagging. Taxonomic tags will include controlled tags relating to (i) language, (ii) broad subject area, and (iii) grade level(s). Folksonomic tags will be open for anyone to enter, and will autocomplete to help prevent typos. We are considering adding suggested tags via machine learning, too.
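To make the autocomplete idea concrete, here is a minimal sketch of case-insensitive prefix matching over existing folksonomic tags. The function name and the in-memory approach are illustrative only; a production system would more likely query an index.

```python
def suggest_tags(existing_tags, prefix, limit=5):
    """Return up to `limit` existing tags that start with `prefix`.

    Matching is case-insensitive, so typing 'ass' surfaces both
    'assessment' and 'Assessment design', steering users towards
    existing tags rather than near-duplicate typos.
    """
    p = prefix.strip().lower()
    matches = sorted(t for t in set(existing_tags) if t.lower().startswith(p))
    return matches[:limit]

tags = ["assessment", "astronomy", "algebra", "Assessment design"]
suggestions = suggest_tags(tags, "ass")
```

Suggested tags from machine learning could feed into the same list, simply ranked ahead of (or alongside) the prefix matches.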
In addition to this, and based on what we’ve learned from the three rating systems above, MoodleNet users will soon be able to ‘like’ resources within collections.
By adding a ‘like’ button to resources within MoodleNet collections, we potentially solve a number of problems. This is particularly true if we indicate the number of times that resource has been liked by community members.
Context – every collection is within a community, increasing the amount of context we have for each ‘like’.
Bookmarking – ‘liking’ a resource within a collection will add it to a list of resources a user has liked across collections and communities.
Popularity contest – collections are limited to 10 resources so, if we also indicate when a resource was added, we can see whether or not it should be replaced.
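With a 10-resource cap, likes and date added together could suggest which resource is the weakest candidate to keep. The tie-breaking rule below (fewest likes, then oldest) is a hypothetical sketch, not a committed design.

```python
from datetime import date

def replacement_candidate(resources):
    """Given (title, likes, date_added) tuples, return the resource with
    the fewest likes, breaking ties in favour of the oldest one."""
    return min(resources, key=lambda r: (r[1], r[2]))

collection = [
    ("Intro to fractions", 12, date(2018, 9, 1)),
    ("Fraction worksheets", 3, date(2018, 6, 15)),
    ("Fractions video", 3, date(2018, 8, 2)),
]
# 'Fraction worksheets' and 'Fractions video' tie on likes;
# the older 'Fraction worksheets' is the replacement candidate.
candidate = replacement_candidate(collection)
```

Whether the final rule weighs age more heavily than likes (or leaves the decision entirely to curators) is exactly the kind of behaviour we want to observe during testing.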
As discussions can happen both at the community and collection level, users can discuss collections and use the number of likes as an indicator.
Sometimes the best solutions are the simplest ones, and the ones that people are used to using. In our context, that looks like a simple ‘like’ button next to resources in the context of a collection within a community.
We’re going to test out this approach, and see what kind of behaviours emerge as a result. The plan is to iterate based on the feedback we receive and, of course, continue to tweak the user interface of MoodleNet as it grows!
What are your thoughts on this? Have you seen something that works well that we could use as well / instead of the above?
Yesterday, we made our first major update to the version of MoodleNet currently undergoing initial testing. Not only did this update alter the look and feel of the interface, but it also added some useful new functionality and fixed some bugs reported by users via Changemap.
Last week, the MoodleNet team were in Barcelona at Moodle Spain HQ. Much of the work week involved the kind of discussion and implementation that can be difficult to write about, as it mainly involved hooking up the backend and front-end code.
Kayleigh and Sam from Outlandish joined us in the office on Thursday and Friday, which meant that we had an opportunity to reflect on the results of some testing they did with users about the sign-up process for MoodleNet. Their findings are below.
Based on user feedback, which is always different from what you expect, we’ve decided to take a different approach to the sign-up process. It became clear that some users want to get straight in and start using the platform right away. These are the kind of users who will complete their profile later.
On the other hand, there are users that want to complete their profiles straight away, so that they have a full ‘presence’ on the platform and others can find out more about them.
Our proposed workflow, which will have a knock-on effect on other elements of the user interface, is below.
What are your thoughts on this? Note that we’re planning to implement a (skippable) user tour for first-time users of MoodleNet. We’ll also be writing a post soon that explains ‘Emoji ID’ and why it’s more than just a cute thing to have on your profile!
Image by José Alejandro Cuffia used under the terms of an open license