Forum .LRN Q&A: Re: Report: An Evaluation of Open Source CMS Stressing Adaptation Issues.

The Big Picture:

Moodle is winning a lot of these studies, mostly on the strength of a good UI. They are an open source project, so we can and should steal their UI ideas; we can steal the icons and even the HTML if it’s useful. Everyone, please: as you are designing functionality, especially anything to do with learning objects, look at Moodle. Solution Grove/Zill has a test system up. Email me if you want access.

http://moodle.sgsandbox.com/

Detailed analysis

I find the little symbols confusing, so I’m going to translate them into numbers (a small code sketch of this mapping follows the list):
0 = 0
| = 1
+ = 2
# = 3
* = 4
E = 5
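
As a purely illustrative aside, here is a minimal sketch (in Python; the report itself involves no code) of that symbol-to-score translation. The example row of ratings is a hypothetical stand-in, not data from the report:

    # Map the report's rating symbols to numeric scores (legend above).
    SYMBOL_SCORES = {"0": 0, "|": 1, "+": 2, "#": 3, "*": 4, "E": 5}

    def score(symbol):
        """Translate one rating symbol into its numeric score."""
        return SYMBOL_SCORES[symbol]

    # Hypothetical example row, not actual data from the study:
    ratings = {"Forums": "#", "Chat": "0", "Tests": "|"}
    numeric = {category: score(sym) for category, sym in ratings.items()}
    print(numeric)  # {'Forums': 3, 'Chat': 0, 'Tests': 1}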

Forums: We got 3, Moodle got 4. Everyone seems to love the little portraits in Moodle’s forums. Personally I think the “right” format and features of a forum depend a great deal on usage, the population using it, and personal taste. I think the right way to sell our forums is to emphasize adaptability and customization. That said, we could do worse than putting little portraits or icons in our out-of-the-box dotLRN forums. The market seems to like it.

Chat: We got 0, Moodle got 4. Why did we get 0? I’ve seen lots of posts about chat. Why were people unable to evaluate our various chat options? It’s very sad not to get credit in an area where we do have solutions.

Mail and Messages: We got 1, Moodle got 0. I wonder where our 1 came from. I am never sure what people are looking for in this category: sending bulk mail? Webmail? Solution Grove is working on an internal messaging system. We should be sure this makes it into the dotLRN marketing materials.

Announcements: We got 2, Moodle got 2. Max value is 2.

Conferences: We got 0, Moodle got 0. I wonder what this is.

Collaboration: We got 0, Moodle got 2. How could we get 0 on collaboration? What are they measuring? This must be some failing of our marketing material or documentation.

Synchronous & Asynchronous Tools: We got 0, Moodle got 3. What is this?

Tests: We got 1, Moodle got 4. I wonder if they evaluated survey or assessment.

Learning Materials and Exercises: We got 0 on both, Moodle got 4 and 3. Did they evaluate LORS?

Other creatable LOs: We got 2, Moodle got 2.

Importable LOs: We got 1, Moodle got 3. Did they look at LORS?

Tracking and Statistics: We got 0, Moodle got 3. There is a lot of new tracking functionality out there: the user views from Jeff Davis and Xargs, and the tracking packages from E-Lane. We need to document what we can do and get it into our standard installs. We actually have very strong capabilities here.

Identification of online users: Moodle and dotLRN both got 2, the maximum score.

Personal User Profiles: We got 1, Moodle got 2. I wonder which of our zillion ways of doing this they evaluated. What you get on the community-member page? Photobook? If we marketed dotFOLIO as our personal user profiles, we would blow away the competition.

User Friendliness: We got 1, Moodle got 4. We should steal ideas from Moodle.

Support: We got 1, Moodle got 4. I wonder how they measured this.

Documentation: We got 2, Moodle got 2, Max is 2. This is a surprise to me.

Assistance: We got 0, Moodle got 4. I wonder what this is.

Adaptability: We got 2, Moodle got 4. I bet we can steal this too.

Personalization: We got 2, Moodle got 2. Max was 4. Hmm, I wonder what they liked in the other platforms.

Extensibility: We got 3, Moodle got 3, Max is 3.

Adaptivity: We got 0, Moodle got 1. What is this?

Standards: We got 2, Moodle got 3.

System Requirements: We got 2, Moodle got 2, Max was 2.

Security: We got 3, Moodle got 2.

Scalability: We got 2, Moodle got 2. Max was 2.

User Management: We got 1, Moodle got 1, Max was 3. I wonder what they liked in the other systems.

Authorization Management: We got 0, Moodle got 1. We have LDAP and PAM support. Why did we get 0?

Installation of the platform: We got 0, Moodle got 1. We have lots of new installers. We need to be sure evaluators find them.

Administration of courses: We got 2, Moodle got 1.

Assessments of Tests: We got 0, Moodle got 1.

Organization of course objects: We got 2, Moodle got 1. Max was 3. I wonder what they were looking for here.

Gustaf, do you know of any way to get more details on this study? Maybe get answers to some of my questions about exactly what they were looking for?

Summary

These are categories where I believe our documentation/marketing failed us. In these areas I believe we have stronger functionality than these evaluators were able to find or evaluate.

• Forums
• Chat
• Collaboration
• Learning Materials
• Importable Learning Objects
• Tracking and Statistics
• Support
• Authorization Management

In some of these areas, packages like LORS are not officially “released”. A long-standing problem is an unclear release process and a higher bar to “release” a package than other open source products have. Is this higher bar helping us somewhere else? It’s definitely hurting us in these comparison studies. I’d like to call on the .LRN executive committee to revisit the official release process for .LRN. No matter what, it’s crucial that the process is clear and in writing. I personally believe .LRN should move from the current “certified/not certified” terminology to OpenACS’ “Maturity Levels”, so that new code can be “released” earlier at a low maturity level and still qualify to be evaluated in these kinds of studies.

I’d like to call on the maintainers of the dotLRN.org site to evaluate the site with these specific areas in mind and make sure we are putting our best foot forward, clearly presenting the extent of our strengths and functionality in these areas.

I would also refer you to the post Ben made here: https://openacs.org/forums/message-view?message_id=350041. If the info is on the dotLRN.org site, is it being “scented” properly? These evaluations are a guide to how our customers think about things and the words they use to describe the functionality.

These are categories where I believe we have something to learn/steal from Moodle:
• Forums
• Chat
• Learning Materials
• Exercises
• Tests
• User Friendliness

If you are doing .LRN work in one of these areas, please take the time to look at Moodle’s implementation. Again, we are hosting a Moodle sandbox. If you want access, let me know.

Hi Caroline,

I was lucky enough to be at the conference where this paper was presented by Sabine (the author). It was presented at the IEEE Learning Technologies conference (ICALT) in Taiwan, in the Learning Design track.

This paper was probably the most exciting paper in the track, and, as you can see there, it has quite a thorough analysis of features and functionality.

However, bear in mind that the main focus of this paper is adaptability and personalisation, and it evaluates these platforms on that basis. It's *not* a functional evaluation based on pedagogical value.

After Sabine's presentation I spoke with her about .LRN and some of the packages that .LRN has for learning materials (survey, assessment, LORS, etc.), as well as the work that the UNED fellows have been doing with Alfanet and all. Of course she was unaware of all of these.

I bet the ranking of .LRN would have been much better if she had seen these packages in her out-of-the-box .LRN installation.

At any rate, I strongly agree with Caroline about looking at other platforms to enhance .LRN usability and features. In addition, it would be great to include more teachers and pedagogy people in the design of .LRN packages.

It seems to me that we in the .LRN community tend to be good technologists, but we might not have a lot of teachers and pedagogy fellows involved in our design.

After being involved in the Moodle community for a bit, the main difference I noticed is that their community is driven mainly by teachers, and they have a great deal of say about which features get implemented. There are 341,024 teachers using Moodle according to Moodle Stats. I think that's what makes the difference for Moodle.

Ernie