Forum .LRN Q&A: Report: An Evaluation of Open Source CMS Stressing Adaptation Issues.

This report looks at the ability of open source course management systems to adapt themselves to the needs of each learner.

None of the systems is as good as the authors would like. Moodle comes out best, and .LRN seems to be tied for second place with some others.

http://moodle.org/other/icalt2005.pdf

The report says .LRN is poor in

- installation
- assistance (help system, documentation)
- synchronous communication (chat, conferences)
- collaboration

as well as in

- user friendliness
- support
- user management
- personal user profile
- tests & learning objects

Thus we have room for improvement in the fields of usability, groupware, and learning objects.

Let's go for it :)

The Big Picture:

Moodle is winning a lot of these studies mostly on the strength of a good UI. They are an open source project, so we can and should steal their UI ideas; we can steal the icons and even the HTML if it's useful. Everyone, please: as you are designing functionality, especially anything to do with learning objects, look at Moodle. Solution Grove/Zill has a test system up. Email me if you want access.

http://moodle.sgsandbox.com/

Detailed analysis

I find the little symbols confusing, so I'm going to translate them into numbers:
0 = 0
| = 1
+ = 2
# = 3
* = 4
E = 5
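
(If anyone wants to tabulate the paper's ratings programmatically, here is a minimal sketch of that translation in Python; the symbol strings are whatever you copy out of the PDF's tables, and the helper name is just something I made up.)

  # Hypothetical helper: the paper's rating symbols mapped to the numbers above.
  SYMBOL_TO_SCORE = {"0": 0, "|": 1, "+": 2, "#": 3, "*": 4, "E": 5}

  def score(symbol):
      """Translate one rating symbol from the report into a number."""
      return SYMBOL_TO_SCORE[symbol]

  # Example: for Forums below, .LRN is "#" (3) vs. Moodle "*" (4).
  print(score("#"), score("*"))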

Forums: We got 3, Moodle got 4. Everyone seems to love the little portraits in Moodle's forums. Personally I think the "right" format and features of a forum depend a great deal on usage, the population using it, and personal taste. I think the right way to sell our forums is to emphasize adaptability and customization. That said, we could do worse than putting little portraits or icons in our out-of-the-box dotLRN forums. The market seems to like it.

Chat: We got 0, Moodle got 4. Why did we get 0? I've seen lots of posts about chat. Why are people unable to evaluate our various chat options? It's very sad not to get credit for an area where we do have solutions.

Mail and Messages: We got 1, Moodle got 0. I wonder where our 1 came from. I am never sure what people are looking for in this category: sending bulk mail? Webmail? Solution Grove is working on an internal messaging system. We should be sure this makes it into the dotLRN marketing materials.

Announcements: We got 2, Moodle got 2. Max value is 2.

Conferences: We got 0, Moodle got 0. I wonder what this is?

Collaboration: We got 0, Moodle got 2. How could we get 0 on collaboration? What are they measuring? This must be some failing of our marketing material or documentation.

Synchronous & Asynchronous Tools: We got 0, Moodle got 3. What is this?

Tests: We got 1, Moodle got 4. I wonder if they evaluated survey or assessment?

Learning Materials and Exercises: We got 0 on both; Moodle got 4 and 3. Did they evaluate LORS?

Other creatable LOs: We got 2, Moodle got 2.

Importable LOs: We got 1, Moodle got 3. Did they look at LORS?

Tracking and Statistics: We got 0, Moodle got 3. There is a lot of new functionality out there for tracking: the user views from Jeff Davis and Xargs, and the tracking packages from E-Lane. We need to document what we can do and get it into our standard installs. We actually have very strong capabilities here.

Identification of Online Users: Moodle and dotLRN both got 2, the maximum score.

Personal User Profiles: We got 1, Moodle got 2. I wonder which of our zillion ways of doing this they evaluated? What you get on the community-member page? Photobook? If we marketed dotFOLIO as our personal user profile, we would blow away the competition.

User Friendliness: We got 1, Moodle got 4. We should steal ideas from Moodle.

Support: We got 1, Moodle got 4. I wonder how they measured this?

Documentation: We got 2, Moodle got 2, Max is 2. This is a surprise to me.

Assistance: We got 0, Moodle got 4. I wonder what this is?

Adaptability: We got 2, Moodle got 4. I bet we can steal this too.

Personalization: We got 2, Moodle got 2. Max was 4. Hmm, I wonder what they liked in the other platforms?

Extensibility: We got 3, Moodle got 3, Max is 3.

Adaptivity: We got 0, Moodle got 1. What is this?

Standards: We got 2, Moodle got 3.

System Requirements: We got 2, Moodle got 2, Max was 2.

Security: We got 3, Moodle got 2.

Scalability: We got 2, Moodle got 2. Max was 2.

User Management: We got 1, Moodle got 1, Max was 3. I wonder what they liked in the other systems?

Authorization Management: We got 0, Moodle got 1. We have LDAP and PAM support. Why did we get 0?

Installation of the platform: We got 0, Moodle got 1. We have lots of new installers. We need to be sure evaluators find them.

Administration of courses: We got 2, Moodle got 1.

Assessments of Tests: We got 0, Moodle got 1.

Organization of course objects: We got 2, Moodle got 1. Max was 3. I wonder what they were looking for here.

Gustaf, do you know of any way to get more details on this study? Maybe get answers to some of my questions on exactly what they were looking for?

Summary

These are categories where I believe our documentation/marketing failed us. In these areas I believe we have stronger functionality than these evaluators were able to find or evaluate.

• Forums
• Chat
• Collaboration
• Learning Materials
• Importable Learning Objects
• Tracking and Statistics
• Support
• Authorization Management

In some of these areas packages like LORS are not officially "released". A long-standing problem is an unclear release process and a higher bar for "releasing" a package than other open source products have. Is this higher bar helping us somewhere else? Because it's definitely hurting us in these comparison studies. I'd like to call on the .LRN Executive Committee to revisit the official release process for .LRN. No matter what, it's crucial that the process is clear and in writing. I personally believe .LRN should move from the current "certified/not certified" terminology to OpenACS' "Maturity Levels" so that new code can be "released" earlier at a low maturity level and still qualify to be evaluated in these types of studies.

I’d like to call on the maintainers of the dotLRN.org site to evaluate the site with these specific areas in mind and be sure we are putting our best foot forward and clearly presenting the extent of our strengths and functionality in these areas.

I would also refer you to the post Ben made here: https://openacs.org/forums/message-view?message_id=350041. If the info is on the dotLRN.org site, is it being "scented" properly? These evaluations are a guide to how our customers think about things and the words they use to describe the functionality.

These are categories where I believe we have something to learn/steal from Moodle:
• Forums
• Chat
• Learning Materials
• Exercises
• Tests
• User Friendliness

If you are doing .LRN work in one of these areas, please take the time to look at Moodle's implementation. Again, we are hosting a Moodle sandbox; if you want access, let me know.

Hi Caroline,

I was lucky enough to be at the conference where this paper was presented by Sabine (the author). It was presented at the IEEE Learning Technologies conference (ICALT) in Taiwan, in the Learning Design track.

This paper was probably the most exciting paper in the track, and as you can see it has quite a thorough analysis of features and functionalities.

However, bear in mind that the main focus of this paper is adaptability and personalisation, and it evaluates these platforms on that basis. It's *not* a functional evaluation based on pedagogical value.

After Sabine's presentation I spoke with her about .LRN and some of the packages that .LRN has for learning materials (survey, assessment, LORS, etc.) as well as the work that the UNED fellows have been doing with Alfanet. Of course she was unaware of all these.

I bet the ranking of .LRN would have been much better if she had seen these packages in her out-of-the-box .LRN installation.

At any rate, I strongly agree with Caroline about looking at other platforms to enhance .LRN usability and features. In addition, it would be great to include more teachers and pedagogy people in the design of .LRN packages.

It seems to me that we in the .LRN community tend to be good technologists, but we might not have a lot of teachers and pedagogy fellows involved in our design.

After being involved in the Moodle community for a bit, the main difference I noticed is that their community is driven mainly by teachers, and they have a great deal of say in the features that get implemented. There are 341,024 teachers using Moodle according to Moodle Stats. I think that's what makes the difference for Moodle.

Ernie

Just to be clear: I think Moodle and .LRN have slightly different markets. I think .LRN is more focused on organizations than individual teachers, and we have strengths in terms of being able to support large organizations and diverse organizational structures. Our community (both .LRN and OpenACS) has enjoyed a lot of long-term, organizational-level participation, and I want that to continue. There is no reason we can't do that and have happy teachers too. Still, it seems easier to steal Moodle's UI than to replicate their community. Our community gives us some real advantages. Let's combine our advantages with their UI.

Ernie, I have a bunch of places above where I didn’t really understand what they were looking at. Can you enlighten me on any of them based on her talk?

There are many, many things we can do to improve .LRN, but getting what we have into the hands of evaluators seems like the low-hanging fruit. As Ernie says:

"After Sabine's presentation I spoke with her about .LRN and some of the packages that .LRN has for learning materials (survey, assessment, LORS, etc.) as well as the work that the UNED fellows have been doing with Alfanet. Of course she was unaware of all these.
I bet the ranking of .LRN would have been much better if she had seen these packages in her out-of-the-box .LRN installation."

Maybe we could agree that all packages whose names start with dotlrn-* (along with all their dependencies) and which are at least maturity level 1 get into an evaluation distribution (with a clear mention that, though this is functioning code, it does not meet our high standards of internationalization and database independence). Then we could have a proper distribution at maturity level 3.
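
Just to make that rule concrete, here is a rough sketch in Python with made-up package metadata; the real APM stores this information differently, so treat the field names and the input format as assumptions. The idea: pick every package whose name starts with dotlrn-* and whose maturity is at least 1, then pull in dependencies transitively so the evaluation distribution installs cleanly.

  # Sketch of the selection rule above. The `packages` dict and its fields
  # ("maturity", "depends") are hypothetical, not the real APM API.
  from typing import Dict, Set

  def evaluation_distribution(packages: Dict[str, dict], min_maturity: int = 1) -> Set[str]:
      """Pick every dotlrn-* package at or above min_maturity, plus all
      transitive dependencies of the selected packages."""
      selected = {
          name for name, meta in packages.items()
          if name.startswith("dotlrn-") and meta.get("maturity", 0) >= min_maturity
      }
      # Pull in dependencies transitively so the distribution installs cleanly.
      queue = list(selected)
      while queue:
          for dep in packages.get(queue.pop(), {}).get("depends", []):
              if dep not in selected:
                  selected.add(dep)
                  queue.append(dep)
      return selected

  # Example with made-up metadata:
  pkgs = {
      "dotlrn-forums": {"maturity": 2, "depends": ["forums"]},
      "dotlrn-lors":   {"maturity": 1, "depends": ["lors"]},
      "forums":        {"maturity": 3, "depends": []},
      "lors":          {"maturity": 1, "depends": []},
      "half-done":     {"maturity": 0, "depends": []},
  }
  print(sorted(evaluation_distribution(pkgs)))
  # ['dotlrn-forums', 'dotlrn-lors', 'forums', 'lors']
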
Caroline: I have seen some large installations of Moodle, and it is also suitable for organisations. It's simple: Moodle does a better job - at least for now :)
I think it's simplistic to say that Moodle is gaining mind share just because of its UI. In addition to the UI, it has a number of strengths, among them:

a) simple installer;
b) excellent documentation;
c) user-driven community;
d) stable platform;
e) clear roadmaps;

OpenACS/.LRN is not even close. We continue to suffer from obscurity, fragmentation, and instability.

The OpenACS core is solid. Otherwise, OpenACS/.LRN is at best devolving into a confusing hodge-podge of poorly maintained packages. Cleaning up the UI and "better marketing" ain't gonna solve the fundamental weaknesses.

"I personally believe, .LRN should move from the current “certified/not certified” terminology to OpenACS’ “Maturity Levels” so that new code can be “released” earlier under a low maturity and still qualify to be evaluated in these types of studies." -- Caroline Meeks

I agree with this, but there is no infrastructure in OpenACS to support it. The OCT and the leadership of OpenACS need to specify clearly what these maturity levels mean and how they would be enforced. At the package level, the current state of affairs is a hodge-podge of random contributions with no maintainers and no documentation.

Al, the maturity levels have a fairly specific meaning. See TIP #47:

https://openacs.org/forums/message-view?message_id=161393

Instead of creating yet another certification, maybe we should just offer a set of metrics (and a framework for coordinating package development) to highlight the various strengths and weaknesses in a systematic, thoughtful way for end users.

Let's try giving each package its own wiki project page, and have a summary table that gets automatically updated for the most part. Here's an example of what it could include:

https://openacs.org/storage/view/proposals/packages-report.html

Maybe move the manually editable columns of that table into the package wiki page for convenience.

Each package's wiki page would act as a project page, having the usual information:

description, news/status, notes, strengths, weaknesses, plans, project participants, and links to package info in other places, such as the CVS browser, package docs, forum threads, demo(s), etc.

Side note: it seems to me that at some point the maturity tag was an index of deployment (x number of known deployments). Yet now it seems to summarize some of the data points, suggesting that those manual data points should definitely move to the package wiki pages.
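
To make the "mostly automatic" idea concrete, here is a hedged sketch of how such a summary table could be assembled. Everything named here (auto_metrics, wiki_fields, the column set) is hypothetical; the point is just the split between columns a script can compute and columns that are manually maintained on each package's wiki page.

  # Sketch only: assemble the packages-report summary table from two sources.
  # auto_metrics() would be scraped from the repository (version, last change,
  # maintainers); wiki_fields() would be pulled from the package's wiki project
  # page (status, maturity, notes). Neither function exists today.
  import csv
  import sys
  from typing import Dict

  def auto_metrics(package: str) -> Dict[str, str]:
      # Placeholder: in reality this would read the package .info file,
      # CVS history, test results, etc.
      return {"version": "?", "last_change": "?", "maintainers": "?"}

  def wiki_fields(package: str) -> Dict[str, str]:
      # Placeholder: manually edited columns kept on the package wiki page.
      return {"status": "?", "maturity": "?", "notes": "?"}

  def write_summary(packages, out=sys.stdout) -> None:
      columns = ["package", "version", "last_change", "maintainers",
                 "status", "maturity", "notes"]
      writer = csv.DictWriter(out, fieldnames=columns)
      writer.writeheader()
      for name in sorted(packages):
          row = {"package": name}
          row.update(auto_metrics(name))
          row.update(wiki_fields(name))
          writer.writerow(row)

  write_summary(["dotlrn", "forums", "lors"])
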

Following through on the Maturity Model (MM) idea is one of the most important things we can do as a project. I hope to meet with Caroline soon and we will kick around the idea some more.

I agree with Torben that we need to find a way to expose some metrics that will give users and developers a clear idea of the state of readiness and quality of the different packages.

I remain bullish on OpenACS. As a framework it remains superior to anything else that's out there, including RoR. If we can find a way to tackle the QUALITY issue by implementing MM, the project will enjoy a significant boost.

Some quick links on CMM and open source projects. Hope this helps stir the discussion.

http://hoot.tigris.org/Motivation.html

http://www.openbrr.org/

Malte, that openbrr link is quite useful (and supports many of my prior arguments about adding an end-user and an admin forum to this site).

The BRR criteria should be expressed in the Why OpenACS document (with supporting metrics and their references).

The packages-report proposed above seems like an immediate way to implement the essentials of the BRR metrics, given that this software/community has a narrow set of highly valued core development objectives (scalable, extensible, secure, etc.) that naturally reduces the selection of remaining metrics to about 7 -- their suggested maximum to use when screening.

Supplying the data via a packages-report also recognizes that any specific BRR number is ultimately subject to a particular evaluation process. This software is more likely to get into any related evaluation when the data is easily available.

Malte, thank you for providing the excellent links.