Forum .LRN Q&A: Anyone working on SCORM?

Posted by Malte Sussdorff on
Is anyone working on a SCORM-compliant interface?

Yesterday I had a meeting with a potential partner of ours (javanti.org, formerly jtap.org), who informed me about the SCORM standard. We have been discussing interfacing the javanti software, as a demo, with a grading system within dotLRN using SCORM. While we're on the subject: do we have a grading system that allows people to be graded on items in the content repository?

So, before we delve into uncharted territory, maybe someone else wants to enlighten us, or tell us they are doing it. Unless it already exists, is there any chance of having a grading system by mid-January 2003?

Posted by Staffan Hansson on
Malte, I'm not quite sure what you mean by "grading system", but that term in connection with SCORM certainly points toward a recent discussion on the OpenACS forum about developing graded surveys and a Curriculum module that will eventually offer SCORM-compliant sequences.

Ola tells me that Survey currently uses the CR for its attachment answers only, but MIT Sloan (Caroline Meeks) has mentioned storing questions in the CR as a future enhancement. The Curriculum module we plan on developing will most definitely store its items ("activities" in IMS speak) in the CR.

Posted by Malte Sussdorff on
A grading system, for me, is a system through which any application can allow professors (or other designated people) to assign a grade to someone else for one or more objects.

More to the point: if a student uploads his thesis, the grading system should allow the professor to assign a grade or points for that object to its owner (i.e., the student). If a student answers a multiple-choice survey, the grading system should get from the survey the information about how well the student did (i.e., he gets a grade for it). If a student wants to run a learning application (e.g. javanti) that offers to assign a grade afterwards, dotLRN should provide the learning application with the necessary information about the student using SCORM, and allow that application to store a grade for that learning application.

In addition to that, I'd love to see grades weighted. In Germany some universities work with credit points, so you should be able to store a grade along with the number of points (or the percentage) it is worth. Furthermore, it should be possible for multiple people to give a grade for one item.

So far I've got the impression that you'd like to add grading to a specific module. I would like to see grading as a separate module/service, which allows grading to happen on any object stored in the content repository (why not give someone a grade for a posting he has made in the forums?).
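
To make the idea concrete, here is a minimal sketch of what such a service's Tcl API might look like. The grading:: package and all proc and parameter names below are purely hypothetical, invented for illustration; nothing like this exists yet:

# Hypothetical grading-service API (all names invented for illustration).
# Record a weighted grade on any acs_object on behalf of a grader.
grading::add \
    -object_id $thesis_id \
    -grader_id $professor_id \
    -grade 1.7 \
    -weight 4 ;# e.g. the credit points the item is worth

# Several graders may each grade the same object; a caller could
# then fetch all grades recorded against it.
set grades [grading::list -object_id $thesis_id]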

Posted by Ola Hansson on
I don't have any opinion about where the grading functionality should lie - in Survey or in another package (but I suspect Dave does). As far as Curriculum is concerned, it doesn't matter which service provides the grading answers.

Isn't there a difference between grading and rating? Grading would be when an educational designer sets up criteria for evaluating survey results. Rating, OTOH, would be when any user gets to pass judgement on various types of objects. If this is so, maybe a separate rating package would make sense (Lars has one, I know - whether it's ready, I don't know), while the ongoing development of "graded surveys" would still be motivated. Just my thoughts...

The way I understand SCORM, it's about being able to export sequences (courses or curricula), not student data. Each system that imports the sequence via the exported (SCORM) XML manifest, e.g. a dotLRN website, must have its own LMS, including a grading system.

Posted by Andrew Grumet on
It looks like the IMS project has written some relevant specs as part of its Question & Test Interoperability specification. Look for the string "Results Reporting" on this page: http://www.imsglobal.org/question/index.cfm.

Posted by Michael Feldstein on

The way I understand SCORM, it's about being able to export sequences (courses or curricula), not student data. Each system that imports the sequence via the exported (SCORM) XML manifest, e.g. a dotLRN website, must have its own LMS, including a grading system.

That's not entirely accurate. First of all, SCORM *does* include considerations regarding student data (although you are correct that it must work in conjunction with an LMS). The XML manifest is only half of the SCORM spec; the API is the other half, and it's where the rubber really meets the road. One of the most common uses of SCORM today (as opposed to the stuff it theoretically can do but that nobody uses it for) is to tie information about application state (in this case, the state of a courseware application) to user data stored in the LMS, e.g., whether the user has taken the pretest, what page in the course the user was on when s/he last exited, etc.
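
For a concrete flavor, here is a minimal sketch of the kind of state being stored, written against the hypothetical server-side lms::set_value Tcl proc that Andrew sketches later in this thread (the proc is an assumption; the cmi.* element names are real SCORM 1.2 run-time data-model elements):

# Sketch only: persist courseware state against a user in the LMS,
# mirroring what a SCO would do through the JavaScript API.
lms::set_value \
    -object_id $object_id \
    -element "cmi.core.lesson_location" \
    -value "page12" ;# where the user last left off

lms::set_value \
    -object_id $object_id \
    -element "cmi.suspend_data" \
    -value "pretest=done" ;# arbitrary state, e.g. pretest taken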

Although the SCORM manifest does contain some sequencing information, practically speaking the sequencing part of the spec really isn't done yet. SCORM 1.3, which is due out early next year, will include more robust sequencing. For now, think of SCORM as providing a way to associate courseware state with a particular user in the LMS, and a way to share metadata about particular content segments (or "learning objects") within the course. Note that this second function is a necessary but not sufficient condition for robust sequencing logic.

Posted by Andrew Grumet on
Upon further reading, I see that the IMS specs are pretty detailed.  There's a tiny little section of the data model devoted to scores; the rest has to do with representing the assessee's responses to questions.

Stepping back for a moment, I think the challenges are to figure out a) the data model and b) what API calls to expose.  Acknowledging the simplest-case solution ("create table object_grades (object_id references acs_objects, grade integer)"), it would seem reasonable to look to organizations like IMS and ADL (the body behind SCORM), which have done all kinds of research into what educators want, and build on their most relevant findings.

Michael, I haven't looked at the SCORM stuff yet.  Care to make any comments about how to best use IMS versus (or as a complement to) SCORM?

Posted by Michael Feldstein on
SCORM is, for the most part, a subset of the IMS specifications, and it seems to be the subset that has been pounded on the most. IMS is tempting because they dig into interesting problems first (e.g., the test and sequencing specs), but the safe path to take is to stick with SCORM.

Posted by Dave Bauer on
Regarding grading: if we need to grade objects besides surveys, it looks like we need a general LMS-grading package. This package would provide a data model and UI to allow grades to be attached to any ACS object.

I really like the idea of service based packages used to build up an application.

Posted by Ola Hansson on
That's not entirely accurate. First of all, SCORM *does* include considerations regarding student data (although you are correct that it must work in conjunction with an LMS). The XML manifest is only half of the SCORM spec; the API is the other half, and it's where the rubber really meets the road.
Yes, I agree, Michael - that was actually my understanding too 😉.

What I meant to say, in my reply to Malte, was that the SCORM export/import manifest does not deal with _exporting_ student (state) data to the other application (javanti, in this case). In other words, you cannot resurrect a complete course together with its users in a remote LMS by means of SCORM - only the course itself, right?

Posted by Andrew Grumet on
One of the most common uses of SCORM today (as opposed to the stuff it theoretically can do but that nobody uses it for) is to tie information about application state (in this case, the state of a courseware application) to user data stored in the LMS, e.g., whether the user has taken the pretest, what page in the course the user was on when s/he last exited, etc.
Michael, I think you are referring to the stuff in section 3.4.4 of this doc: http://www.adlnet.org/ADLDOCS/Documents/SCORM_1.2_RunTimeEnv.pdf, e.g. cmi.core.score and the like. Is that correct?

Posted by Michael Feldstein on
Ola, if you mean that SCORM does not provide a complete data model or tagging schema for identifying (and therefore exporting) user information, then you are correct. There are a few places you can look to find standards of this sort, although I use the term "standard" fairly loosely here. First, there's the AICC specification, which is a sprawling standard that covers everything imaginable--poorly. This standard is actually about 10 years old and yet no vendor that I know of has completely implemented it and no two vendors seem to implement the same subset of it. If AICC were decent then there would be no reason for SCORM to exist. Nevertheless, most LMS vendors have partial implementations of AICC in their products.

You can also look at the IMS standards. IMS and, by extension, SCORM, have generally tried to extract the worthwhile stuff out of AICC when possible. The "cmi.foo" API calls that Andrew cites in the SCORM spec, for example, are AICC-derived. (And yes, Andrew, that was exactly the portion of the spec that I was talking about.) You might want to look at the "Learner Information" and "Enterprise" specifications. The thing is, though, I have no idea (a) how widely implemented those "standards" are or (b) whether they're any good. So caveat emptor.

Posted by Andrew Grumet on
The SCORM runtime stuff is a good read. It defines a data model and an object-centric API which kinda sorta maps into OACS. They wrote it from the POV of a client-resident JavaScript object communicating back to a server-resident LMS. But the ideas might carry over to, e.g., a server-resident survey package reporting a student's score back to the server-resident LMS. Instead of
// JavaScript sample code from the SCORM runtime spec.
LMSSetValue("cmi.core.score.raw","85");
we would have something like
# Possible OACS/Tcl.
# Here object_id uniquely identifies the student being scored.
lms::set_value \
    -object_id $object_id \
    -element "cmi.core.score.raw" \
    -value 85
The only substantial difference here is that the object reference is somehow implicit in the JavaScript example.

Given that, unlike SCORM, we're not assuming that our learning object is client-side, it's not clear to me yet what the right abstractions are (the Tcl package example above is just for illustrative purposes and not a proposed design).

Also, SCORM doesn't really address use cases that live above individual learning objects. This includes Malte's case where you want to assign relative weights to different learning objects. Perhaps that would just be an add-on (or maybe IMS defines it).

One piece of good news is that the SCORM scoring stuff adds only marginally to the simplest possible case. They propose three fields: raw score, max possible score, and min possible score, where all three numbers are assumed to be between 0 and 100. That at least lends support to the notion that the grading system need not be terribly complicated.
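
For illustration, those three fields expressed through the same hypothetical lms::set_value proc sketched above (a sketch, not a proposed design; the cmi.core.score.* element names are from the SCORM 1.2 data model):

# The three SCORM 1.2 score fields, all assumed to lie in 0..100.
lms::set_value -object_id $object_id -element "cmi.core.score.raw" -value 85
lms::set_value -object_id $object_id -element "cmi.core.score.min" -value 0
lms::set_value -object_id $object_id -element "cmi.core.score.max" -value 100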

Posted by Michael Feldstein on
If your specific focus is tests, then SCORM may not be the right standard. It handles basic score reporting stuff but doesn't support weighting, doesn't support test question banks, doesn't support adaptive testing, etc. The only standard I know of that does this is the IMS's QTI standard, which is used by most of the very few server-side test engines out there (QuestionMark Perception being by far the most common). The trouble is, the few people I know who have actually looked at it closely all say that it sucks. The IMS does provide a "lite" version of the spec; maybe that would be appropriate.

Posted by Ola Hansson on
Andrew, I read the ADL PDF (ADL being the US Department of Defense's Advanced Distributed Learning initiative, the steward of SCORM) and, as you point out, it is based on (and mandates) a JavaScript implementation. However, it describes the data model and the API in a detailed way, which is nice, and we should be able to take various pieces out of it. The IMS specification and description of SCORM, OTOH, doesn't specify which programming language to choose and appears rather to assume a server-side API.

Let me suggest that you (and other interested parties) take a peek at the IMS Simple Sequencing Content Developer's Guide, which talks about grading/tests in tandem with simple sequencing, albeit in a not very detailed way. It may prove strategic to form one's opinion about the "big picture" (the LMS) of which grading will be a substantial part. AFAICT the two concepts - grading system / graded survey and LMS / curriculum - go hand in hand (although they can also be used separately). So, that document may not be as off-topic as it may seem. I found it quite easy to read, too ...

As mentioned by folks in other threads, methods of the survey/grading package that answer questions and/or perform actions on behalf of the curriculum/LMS should be exposed to it by means of service contracts, to avoid unnecessary package dependencies and to encourage code reuse, etc.

Example contract for LMS/Curriculum:

create function inline_1()
returns integer as '
BEGIN

PERFORM acs_sc_contract__new (
           ''Survey'',                              -- contract_name
           ''Survey Information Provider''          -- contract_desc
);
PERFORM acs_sc_msg_type__new (
           ''Survey.GetValue.InputType'',
           ''object_id:integer,student_id:integer,element:string''
);
PERFORM acs_sc_msg_type__new (
           ''Survey.GetValue.OutputType'',
           ''value:string''
);
PERFORM acs_sc_operation__new (
           ''Survey'',                              -- contract_name
           ''GetValue'',                            -- operation_name
           ''Get SCORM element value'',             -- operation_desc
           ''f'',                                   -- operation_iscachable_p
           3,                                       -- operation_nargs
           ''Survey.GetValue.InputType'',           -- operation_inputtype
           ''Survey.GetValue.OutputType''           -- operation_outputtype
);

RETURN 0;
END;' language 'plpgsql';
select inline_1();
drop function inline_1();

Example service contract implementation in Survey:
create function inline_2()
returns integer as '
DECLARE
    impl_id integer;
    foo     integer;
BEGIN
impl_id := acs_sc_impl__new (
              ''Survey'',          -- impl_contract_name
              ''GradedSurvey'',    -- impl_name
              ''survey''           -- impl_owner_name
);
foo := acs_sc_impl_alias__new (
              ''Survey'',          -- impl_contract_name
              ''GradedSurvey'',    -- impl_name
              ''GetValue'',        -- impl_operation_name
              ''lms::get_value'',  -- impl_alias
              ''TCL''              -- impl_pl
);
RETURN 0;
END;' language 'plpgsql';
select inline_2();
drop function inline_2();

This is an (untested) example of how the LMS/curriculum package would get the value which Andrew set for the "cmi.core.score.raw" element:
set value [acs_sc_call Survey GetValue [list $object_id $student_id "cmi.core.score.raw"] GradedSurvey]
If we assume that there is a method in Survey that is called like this...:
# Possible OACS/Tcl.
lms::get_value \
    -object_id $object_id \
    -student_id $student_id \
    -element "cmi.core.score.raw"
... $value should now contain the value "85".

Posted by Michael Feldstein on
One of the goals of SCORM is to make courseware plug-and-play with any LMS. Administrators of online learning want to know that the logic embedded in their courses--including the ability to store application state and student progress information on the server--will just work seamlessly should the organization migrate to a different LMS. (SCORM stands for "Sharable Content Object Reference Model.") I believe this is one of the reasons why they chose JavaScript for their API. Likewise, I believe (though I'm not sure) that any IMS standards related specifically to instructional content (as opposed to, say, information about the students) also assume JavaScript for the API. This would probably include the Simple Sequencing spec (particularly since it is scheduled to be rolled into SCORM 1.3 shortly) but not necessarily (for example) the Enterprise spec.

By the way, I believe that Berklee has built an LMS with a SCORM-compliant data model that they are eventually planning on rolling into the dotLRN distribution.

Posted by Andrew Grumet on
Ola, I'll check out that simple sequencing spec.

Malte, I guess the answer to your question is, "nobody's working on it, but a few folks are willing to spend time reading and commenting on specs."  At this point my own interest was to do a little research into the problem space (and the discussion has been really helpful!).  Perhaps others might be interested in investing additional cycles in design/coding if it's something you're planning to go after.  What are your plans (if you even know them 😉)?

Posted by Radamanthus Batnag on
This may seem to be a stupid question, but: why should grades be limited to items stored in the content repository? In such a system, how would recitation grades be recorded?