We’ve recently finished testing MoodleNet’s value proposition with two cohorts of users, in both English and Spanish. During each three-week testing period, we sent one survey per week. In this post, we’d like to share some of the insights we’ve gleaned.
It’s important to note the following:
We built the smallest possible version of MoodleNet in an attempt to answer the question, “Do educators want to join communities to curate collections of resources?”
During the testing process, we didn’t discuss future functionality in the user interface or in the emails we sent users. We did, however, discuss the roadmap in a tool called Changemap, which we’re using to collect and discuss feedback and feature requests.
One of the key features of MoodleNet will be federation (i.e. the ability to have separate instances of MoodleNet that can communicate with one another). This will change the user experience and utility of MoodleNet in significant ways.
The survey data we’ve collected suggests that MoodleNet is indeed something that can sustainably empower communities of educators to share and learn from each other to improve the quality of education.
What follows are three things that we’ve learned from the testing process.
1. We’ve validated the value proposition
A couple of days after giving each cohort of testers access to MoodleNet, we asked them, “Do you see yourself using something like MoodleNet to curate collections of resources?” The functionality, especially during that first week for the initial cohort, was extremely basic, and the experience was sometimes buggy.
Despite this, by the time the second cohort filled in their first survey, it was clear that almost two-thirds of testers agreed that, yes, MoodleNet would be something that they would use.
2. The best tagline for MoodleNet: ‘Share. Curate. Discuss’
During the testing period we learned that creating taglines that are translatable and impactful in different languages is no easy feat. In fact, many companies and brands simply use English taglines, such as Nike’s ‘Just Do It’. We’ve decided to go ahead and use ‘Share. Curate. Discuss’ for the moment as the tagline for MoodleNet (including on the Spanish version of MoodleNet).
3. Testers are clear on what they want to see next
Through free text boxes in surveys, and from the information coming in via Changemap, it’s clear that users want to be able to:
Search for specific keywords and topics of interest.
Easily find out when something has changed within a community they’ve joined, or a collection they’re following.
Sort lists of communities and collections by more than ‘most recent’ (e.g. by number of collections or discussion threads).
Tag communities, collections, and profiles, to make it easier to find related content.
Upload resources to MoodleNet instead of just adding them via URL.
Indicate ‘resource type’ (e.g. ‘course’, ‘presentation’, or ‘plugin’).
Send resources they discover on MoodleNet to their Moodle Core instance.
Add copyright information to resources and collections.
Easily rediscover useful resources they’ve found in collections they’re not following.
Access MoodleNet on their mobile devices.
Happily, we’ve already got MoodleNet working on mobile devices, although we’re still having some issues with Safari on both iOS and macOS. We’re also launching ‘timeline views’ for communities and collections this week, which will allow users to see what’s changed since they’ve been away.
When developing software products, it’s easy to come up with a plan and start working on it without validating what you’re doing with users. We’ve still got a way to go before MoodleNet is exactly what community participants want from it, but we feel that in this initial testing period we’ve got a mandate to keep on iterating.
A big thank you to our two cohorts of testers, who have provided invaluable feedback. They still have access to MoodleNet beyond the testing period. We’ll be inviting more people to join at next month’s UK & Ireland MoodleMoot in Manchester, so why not join us there?
I’m MoodleNet Lead and, since the project’s inception, I’ve had lots of conversations with many different people. Once they’ve grasped that MoodleNet is a federated resource-centric social network for educators, some of them ask a variation of this question: “Oh, I assume you’ll be using a star rating system to ensure quality content?”
They are often surprised when I explain that no, that’s not the plan at all. I haven’t written down why I’m opposed to star rating systems for educational content, so what follows should hopefully serve as a reference I can point people towards next time the issue crops up!
However, this is not meant as my last word on the subject, but rather a conversation-starter. What do you think about the approach I outline below?
Wikipedia defines a rating system as “any kind of rating applied to a certain application domain”. Examples include:
Motion Picture Association of America (MPAA) film rating system
Rating system of the Royal Navy
A rating system therefore explains how relevant something is in a particular context.
Ratings in context
Let’s take the example of film ratings. Thanks to the MPAA film rating system, parents can decide whether to allow their child to watch a particular film. Standardised criteria (e.g. drugs / sex / violence) are applied to a film which is then given a rating such as G (General Audiences), PG (Parental Guidance), and R (Restricted). These ratings are reviewed on a regular basis, sometimes leading to the introduction of new categories (e.g. PG-13).
Despite the MPAA film rating system, many parents seek additional guidance in this area – for example, websites such as Common Sense Media which further contextualise the film.
In other words, the MPAA rating system isn’t enough. Parents also take into account what their child is like, what other parents do, and the recommendations of sites they trust such as Common Sense Media.
Three types of rating systems
As evident in the screenshot above, Common Sense Media includes many data points to help parents make a judgement as to whether they will allow their child to watch a film.
With MoodleNet, we want to help educators find high-quality, relevant resources for use in their particular context. Solving this problem is a subset of the perennial problem around the conservation of attention.
In other words, we want to provide the shortest path to the best resources. Borrowing from the project management triangle, educators usually have to make do with two of the three of time, cost, and quality. That is to say, they can minimise the time and cost of looking for resources, but this is likely to reduce the relevance of the resources they discover (relevance being a proxy for quality).
Likewise, if educators want to minimise time and maximise quality, that will cost them more. Finally, if they want to minimise cost and maximise quality, they will have to spend a lot more time finding resources.
The ‘holy grail’ would be a system that minimises time and cost at the same time as delivering quality education resources. With MoodleNet, we are attempting to do that in part by providing a system that is part searchable resource repository, and part discovery-based social network.
Simply providing a place for educators to search and discover resources is not enough, however. We need something more granular than a mashup of a search engine and status updates.
What kinds of rating systems are used on the web?
There are many kinds of rating systems used on the web, from informal approaches using emoji, through to formal approaches using very strict rubrics. What we need with MoodleNet is something that allows for some flexibility, an approach that assumes some context.
With that in mind, let’s consider three different kinds of rating systems:
Star rating systems
Best answer systems
Like-based systems
1. Star rating systems
One of the indicators in the previous example of the Common Sense Media website is a five-star rating system. This is a commonly-used approach, with perhaps the best-known example being Amazon product reviews. Here is an example:
Should I buy this laptop? I have the opinions of 12 customers, with a rating of three-and-a-half stars out of five, but I’m not sure. Let’s look at the reviews. Here’s the top one, marked as ‘helpful’ by nine people:
So this reviewer left a one-star review after being sent a faulty unit by a third-party seller. That, of course, is a statement about the seller, not the product.
Averaging the rating of these two reviews obviously does not make sense, as they are not rating the same thing. The first reviewer is using the star rating system to complain, and the second reviewer seems to like the product, but we have no context. Is this their first ever laptop? What are they using it for?
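A toy calculation (with made-up numbers) shows why averaging ratings that measure different things misleads:

```python
def average_rating(ratings):
    """Naive star average: treats every rating as if it measured the same thing."""
    return sum(ratings) / len(ratings)

# Four genuine 5-star product reviews, plus one 1-star complaint
# that is really about a third-party seller (hypothetical numbers):
ratings = [5, 5, 5, 5, 1]
print(average_rating(ratings))  # 4.2
```

A product every reviewer actually liked ends up looking mediocre, because the single off-topic complaint is averaged in as if it were a judgement of the laptop itself.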
Star rating systems are problematic as they are blunt instruments that attempt to boil down many different factors to a single, objective ‘rating’. They are also too easily gamed through methods such as ‘astroturfing’. This is when individuals or organisations with a vested interest arrange for very positive or very negative reviews to be left about particular products, services, and resources.
Data mining expert Bing Liu (University of Illinois) estimated that one-third of all consumer reviews on the Internet are fake. According to The New York Times, this has made it hard to tell the difference between “popular sentiment” and “manufactured public opinion.”
As a result, implementing a star rating system in MoodleNet, a global network for educators, would be fraught with difficulties. It assumes an objective, explicit context when no such context exists.
2. Best answer approach
This approach allows a community of people with similar interests to ask questions, receive answers, and have both voted upon. This format is common to Stack Overflow and Reddit.
Some of these question and answer pages on Stack Overflow become quite lengthy, with nested comments. In addition, some responders disagree with one another. As a result, and to save other people time, the original poster of the question can indicate that a particular answer solved their problem. This is then highlighted.
The ‘best answer’ approach works very well for knotty problems that require clarification and/or some collaborative thinking-through. The result can then be easily searched and parsed by someone coming later with the same problem. I can imagine this would work well within MoodleNet community discussion forums (as it already does on the moodle.org forums).
When dealing with educational resources, however, there is often no objective ‘best answer’. There are things that work in a particular context, and things that don’t. Given how different classrooms can be even within the same institution, this is not something that can be easily solved by a ‘best answer’ approach.
3. Like-based systems
Sometimes simple mechanisms can be very powerful. The ‘like’ button has conquered social networks, with the best-known example being Facebook’s implementation.
I don’t use Facebook products on principle, and haven’t done since 2011, so let’s look at other implementations.
Social networks are full of user-generated content. Take YouTube, for example, where 400 hours of video are uploaded every single minute. How can anyone possibly find anything of value amid such a deluge of information?
In the above screenshot, you can see a search for one of my favourite topics, The Bolshevik Revolution. YouTube does a good job of surfacing ‘relevant’ content and I can also choose to sort my results by ‘rating’.
Here is the top video from the search result:
I don’t have time to watch every video that might be relevant, so I need a shortcut. YouTube gives me statistics about how many people have viewed this video and how many people subscribe to this user’s channel. I can also see when the video was published. All of this is useful information.
The metric I’m most interested in, however, and which seems to make the biggest impact in terms of YouTube’s algorithm, is the number of upvotes the video has received compared to the number of downvotes. In this example, the video has received 16,000 upvotes and 634 downvotes, meaning that over 95% of people who have expressed an opinion in this way have been positive.
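That ‘over 95%’ figure is simply the upvote share of all votes cast. As a quick sketch of the arithmetic:

```python
def approval_share(upvotes, downvotes):
    """Fraction of people who expressed an opinion (voted) who voted positively."""
    return upvotes / (upvotes + downvotes)

# The figures from the video above:
share = approval_share(16_000, 634)
print(f"{share:.1%}")  # 96.2%
```

Note this says nothing about the far larger group of viewers who expressed no opinion at all; it is a shortcut, not a measurement of overall satisfaction.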
If I want more information, I can dive into the comments section, but I can already see that this video is likely to be something that may be of use to me. I would add this to a shortlist of three to five videos on the topic that I’d watch to discover the one that’s best for my context.
Going one stage further, some social networks like Twitter simply offer the ability for users to ‘like’ something. A full explanation of the ‘retweet’ or ‘boost’ functionality of social networks is outside of the scope of this post, but that too serves as an indicator:
This tweet from the UN about their Global Education Monitoring report has been liked 72 times. We don’t know the context of the people who have ‘liked’ this, but we can see that it’s popular. So, if I were searching for something about migrant education, I’d be sure to check out this report.
Although neither YouTube nor Twitter makes this explicit, their algorithms take ‘likes’ and ‘upvotes’ into account within the context of who you are connected to. So, for example, if a video has a lot of upvotes on YouTube and you’re subscribed to that channel, you’re likely to be recommended that video. Similarly, on Twitter, if a tweet has a lot of likes and many of those likes come from people you’re following, then the tweet is likely to be recommended to you.
Interestingly, many Twitter users use the limited space in their bios to point out explicitly that their ‘likes’ are not endorsements, but used to bookmark things to which they’d like to return. In the past year, Twitter has begun to roll out bookmarks functionality, but it is a two-step process and not widely used.
So likes act as both votes and a form of bookmarking system. It’s a neat, elegant, and widely-used indicator.
What does this mean for MoodleNet?
So far, we have discovered that:
The ‘quality’ of a resource depends upon its (perceived) relevance
Relevant resources depend upon a user’s context
We cannot know everything about a user’s context
MoodleNet will implement a system of both taxonomic and folksonomic tagging. Taxonomic tags will include controlled tags relating to (i) language, (ii) broad subject area, and (iii) grade level(s). Folksonomic tags will be open for anyone to enter, and will autocomplete to help prevent typos. We are considering adding suggested tags via machine learning, too.
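A minimal sketch of the autocomplete idea for folksonomic tags (the function name and tags are hypothetical, not MoodleNet’s actual implementation): suggesting existing tags that match a typed prefix nudges users toward canonical spellings instead of typos.

```python
def autocomplete(prefix, known_tags, limit=5):
    """Suggest existing folksonomic tags matching a typed prefix,
    so users converge on one spelling rather than inventing near-duplicates."""
    p = prefix.strip().lower()
    matches = sorted(t for t in known_tags if t.lower().startswith(p))
    return matches[:limit]

tags = {"assessment", "astronomy", "asynchronous-learning", "biology"}
print(autocomplete("as", tags))  # ['assessment', 'astronomy', 'asynchronous-learning']
```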
In addition to this, and based on what we’ve learned from the three rating systems above, MoodleNet users will soon be able to ‘like’ resources within collections.
By adding a ‘like’ button to resources within MoodleNet collections, we potentially solve a number of problems. This is particularly true if we indicate the number of times that resource has been liked by community members.
Context – every collection is within a community, increasing the amount of context we have for each ‘like’.
Bookmarking – ‘liking’ a resource within a collection will add it to a list of resources a user has liked across collections and communities.
Popularity contest – collections are limited to 10 resources so, if we also indicate when a resource was added, we can see whether or not it should be replaced.
As discussions can happen both at the community and collection level, users can discuss collections and use the number of likes as an indicator.
Sometimes the best solutions are the simplest ones, and the ones that people are used to using. In our context, that looks like a simple ‘like’ button next to resources in the context of a collection within a community.
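The dual role of a ‘like’ — a vote on a resource and a bookmark for the user — can be sketched with a toy model (hypothetical names and structure, not MoodleNet’s actual schema):

```python
from collections import defaultdict

class LikeStore:
    """Toy model: a single 'like' serves as both a vote and a bookmark."""

    def __init__(self):
        self._likes = defaultdict(set)  # resource_id -> set of user_ids

    def like(self, user_id, resource_id):
        self._likes[resource_id].add(user_id)

    def count(self, resource_id):
        """Vote view: how many community members liked this resource?"""
        return len(self._likes[resource_id])

    def bookmarks(self, user_id):
        """Bookmark view: everything this user liked, across collections."""
        return {r for r, users in self._likes.items() if user_id in users}

store = LikeStore()
store.like("alice", "res-1")
store.like("bob", "res-1")
store.like("alice", "res-2")
print(store.count("res-1"))      # 2
print(store.bookmarks("alice"))  # {'res-1', 'res-2'} (a set; order may vary)
```

One write operation feeds both views, which is part of what makes the ‘like’ such an elegant indicator.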
We’re going to test out this approach, and see what kind of behaviours emerge as a result. The plan is to iterate based on the feedback we receive and, of course, continue to tweak the user interface of MoodleNet as it grows!
What are your thoughts on this? Have you seen something that works well that we could use as well / instead of the above?
This week, we are releasing MoodleNet v0.5 alpha, which includes one of our most-requested features: a mobile web view! We’ve also implemented a bunch of UI tweaks and bug fixes.
Note that, after testing with BrowserStack, pretty much every combination of mobile device and web browser works except Apple’s Safari and Microsoft’s Edge. Unfortunately, this is due to issues with those browsers’ support for web standards.
For the moment we suggest that the community use other, more standards-compliant browsers to access MoodleNet. Some excellent choices include Opera, Mozilla Firefox and Google Chrome.
We didn’t manage to sneak in an ‘activity’ view for this release, but we’re working on it this week. This will allow you to see everything that’s happened within a community recently (e.g. new user/resource/collection added, new discussion thread).
Think of well-known social networks such as Facebook and Twitter. They all have something in common: they’re centralised information silos, controlled by a single organisation. We wanted to do something very different with MoodleNet, a new resource-centric social network for educators. We wanted it to be federated, based on the latest technologies and approaches to community-building. This is in keeping with Moodle Core, open source software which is developed by Moodle Pty Ltd and customised by Moodle Partners and other organisations.
In 2018, after a lot of research and testing, we envisioned MoodleNet as a social network that anyone could not only join, but also help run; a decentralised network to empower global knowledge sharing and collaboration. For this reason, MoodleNet had to not only be open source but also find a way to connect users across different ‘instances’. It was at this point that we realised ActivityPub, an awesome new protocol for federating apps (which recently became a W3C web standard), would be perfect for our needs.
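As a flavour of what federated servers exchange, here is a minimal ActivityStreams ‘Like’ activity of the kind ActivityPub defines (the actor and object URLs are made up for illustration):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Like",
  "actor": "https://example.net/users/alice",
  "object": "https://example.org/resources/42"
}
```

Each instance exposes its users and content as URLs like these, which is what lets activity flow between independently run servers.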
CommonsPub: “let’s help everyone federate everything”
As ActivityPub is a new protocol, the ecosystem around it is still maturing. We looked for a generic ActivityPub server on which to build MoodleNet and, not finding one, set out to see if we could help build one. After all, the value of a federated social network increases with the number of nodes in the network!
Creating federated apps today is quite a challenge, as developers have to:
Become aware of the conventions used by existing implementations (to ensure interoperability)
Code that all up in their development language of choice (usually only bothering with the parts they need, resulting in many partial implementations)
Long story short, it can take months to create a functional beta version of a federated app.
From conversations with his friends at the Open Cooperative Ecosystem, Mayel realised it would be useful if there were a ‘hello world’ starter project to enable developers to build ActivityPub-based federated applications much more easily.
From this came the idea for CommonsPub: a project to create a generic ActivityPub server, as an extensible library/framework. It would give developers lots of common functionality out-of-the-box, so they can focus on the specific logic of their application. It was also beneficial to what we were trying to achieve with MoodleNet, so seemed like a real win-win situation.
Mayel began the process by finding and analysing virtually every federated app and implementation out there (with almost 70 documented projects to date, at various stages of development), looking into what software stacks they used, and the pros and cons of their approaches. From all of these cases, the architecture for CommonsPub began to take form.
We chose Elixir as the main backend language together with Phoenix, a modern framework similar to Ruby on Rails, but so resource-friendly that it can be run on a Raspberry Pi! It made sense to fork Pleroma as an initial starting point, as not only did Pleroma already have working federation, but it was built with a generic database schema which stores ActivityStreams data as-is.
When Alex, an experienced Elixir developer, joined the MoodleNet team, our original goal was to create a framework containing any useful reusable logic, then build MoodleNet’s custom application logic and client API endpoints on top of that. This was no small undertaking, and proved so complex that, over time, Alex ended up rewriting most of the code, so that very little (if any) of the Pleroma code remained.
Federation is fundamental for the aims of MoodleNet, but creating a generic library is out of scope
MoodleNet is a small team working on an innovation project using new technologies. As such, we have to make difficult decisions about where to channel our resources. You may be familiar with the Project Management Triangle, often referred to as the ‘fast, cheap, good’ triangle. A project, it is suggested, can only choose two of these.
Right now, our triangle looks more like this:
Given its importance, getting federation working for MoodleNet is now taking precedence over anything else. We’ve frozen all unrelated backend development until federation is ready. Given our resources and scope, MoodleNet could not achieve federation (i.e. launch on time with more than a single Moodle-run instance) at the same time as making all the backend code generic in the way the CommonsPub project intended.
As a result, the MoodleNet team has decided not to continue contributing to the development of CommonsPub as a generic federated server. Instead, we are focusing on implementing federation as soon as possible in what is becoming a more MoodleNet-specific backend. To be clear, we’re forking the CommonsPub repository, and continuing our development in the MoodleNet repository, so that we can focus on delivering on our federated project roadmap without worrying about how generic the resulting code is.
CommonsPub is now in the hands of the free software community
CommonsPub continues to exist as a project, but will no longer benefit from development time from Moodle Pty Ltd. This, of course, is likely to affect the timeframe for CommonsPub becoming a viable foundation on which other projects can build their federated apps.
In other words, CommonsPub is now fully in the hands of the free software community. If you’re interested in what might come next for the generic library, please read the follow-up post at the CommonsPub website.
As MoodleNet progresses and the team get into more of a rhythm, we’ve started working in two-week sprints. For the next few weeks, up to the beta release at the UK & Ireland MoodleMoot, we have plenty to do!
Earlier this week we released MoodleNet v0.3 alpha in preparation for inviting a new cohort of testers. It includes the following new functionality, UI tweaks, and bug fixes:
‘Profile & settings’ to update name, description, and avatar
Guide to Markdown next to text input boxes
Tweaks to fonts and colours to improve accessibility
New approach to discussions, which now act more like threads
List of communities indicates number of collections contained by each
No longer have to refresh to see added community/collection
Can see all communities again (fixed pagination)
We’ve removed edit functionality from MoodleNet at the moment in preparation for moderation. In future, you’ll be able to edit and delete comments and resources you add, or those in a community you moderate.
Given the amount of time between now and the beta launch at the UK Moot, we’re going to focus on what we consider to be essential to the core value proposition of MoodleNet:
Federation — the ability to have separate instances of MoodleNet that can communicate with one another.
Mobile view — MoodleNet accessible and usable on mobile devices.
Moodle Core integration — add a resource from MoodleNet to a course in Moodle Core.
Thank you to our testers, who are doing a great job of asking questions, reporting bugs, suggesting functionality, and filling in surveys!
We’ve learned a lot from the first testing round of MoodleNet, which ends this week. Our focus has been on testing the value proposition, “Do educators want to join communities to curate collections of resources?” It’s early days, but it would appear that yes, they do!
The wealth of feedback we’ve received during the first testing period really has been invaluable. Our enthusiastic bunch of 100 testers have shown us what they prefer, through their use of MoodleNet, responses to surveys, and suggestions via Changemap. Happily, we’re not ‘wiping’ or ‘resetting’ the HQ instance, so we’re encouraging the 100 testers to use MoodleNet beyond this initial period.
As demonstrated in a previous update, over the last three weeks we’ve added a lot of functionality to MoodleNet, made many improvements to the user interface, and fixed a number of bugs. We’re looking forward to seeing how 150 additional testers respond to MoodleNet when they get started next week.
It’s now two months until our planned beta launch at the UK & Ireland MoodleMoot, so the team has some very important functionality to work on. Soon, MoodleNet will be:
Mobile — access MoodleNet on-the-go
Searchable — find communities, collections, and people across all of MoodleNet’s federated instances
Connected — import resources you discover on MoodleNet into courses in Moodle Core
Federated — join any instance of MoodleNet and interact with communities, collections, and other users across all instances
The MoodleNet team would like to thank the Moodle community for the encouragement and feedback we’ve received so far. We’re dedicated to creating an easy-to-use environment where educators can share, curate, and discuss!
Yesterday, we made our first major update to the version of MoodleNet currently undergoing initial testing. Not only did this update alter the look and feel of the interface, but it also added some useful new functionality and fixed some bugs reported by users via Changemap.
We’re a week into the initial testing of MoodleNet and are already getting some fantastic feedback from testers!
While there’s a long way still to go before we can open registrations, things are really starting to come together in terms of the user interface (UI) for MoodleNet.
The above screenshot was taken today. Even in this very initial version, the feedback we have had from testers has been mostly positive. Responses to our anonymous survey asking for first impressions included “nice interface”, “attractive”, and “clean and clear”.
Our designer and front end developer, Ivan Minutillo, isn’t content to rest on his laurels, however. The above screenshot is taken from our staging server and shows an iteration of the UI that we will make available to users over the next few days.
As you can see, it includes many improvements.
Ivan hasn’t stopped there, either! Although the above mockup isn’t coded yet, this is the direction we are currently thinking of heading with MoodleNet. As you can see, the sidebar now includes ‘MoodleNet’ at the top, there is search functionality (which will work across federated instances), and the whole experience feels much more refined.
Whether or not you’re part of the initial testing process, we’d love your feedback on this! Do you like what you see?
The most important test so far, however, starts next week. That’s the time when we’ll be putting MoodleNet in front of users for the first time. We’re testing the value proposition: “Do educators want to join communities to curate collections of resources?” This doesn’t mention federation. There’s no mention of mobile devices, fancy user interfaces, or machine learning. We’ve tried to create a very simple approach to test this basic value proposition.
It may turn out that users agree with this value proposition. They may think that, yes, joining communities to curate collections of resources is something they want to do. Alternatively, they may indicate that they prefer a different approach. Either way, this test is of vital importance; it makes no sense to continue along this particular path without a mandate from real-world users!
For those interested, but who aren’t part of the initial testing, here’s how it will proceed:
Successful applicants will have their email address whitelisted and be invited to sign up to a Moodle HQ-run instance of MoodleNet
Feedback from users during the testing process will be collected in two ways: via Changemap and through weekly surveys
New features will be rolled out during the testing process, as detailed on this milestone
If you missed the sign-up process this time around, or weren’t available for the first testing period, then don’t worry! You will have an opportunity to put your name forward again in a few weeks’ time.