Hello there,
Hope you all are doing fine.
For the past few weeks, we've been working on the SCORM
implementation for OpenACS and there has been some interesting progress and Carl
has encouraged me 😊 to present to you some of the steps we have taken.
SCORM can be a bit tricky, so to make sure you fully understand what we do and why we do it one way and not another, have a look at this small document I put together:
"SCORM 1.3 for Random Strikers". It simplifies the specs and puts them in really colloquial language (you've been warned). If you read this small doc, the rest should make a bit more sense. If you still have questions, please don't hesitate to ask me.
As I mentioned in my last posting, we have divided the project into three parts:
* Import/export functionality: importing IMS- and SCORM-compliant courses and content aggregations.
Progress: we have finished the import functionality. We can import IMS and SCORM Content Aggregation compliant courses, together with their millions of metadata fields. SCORM and IMS packages can be extracted and their manifest files (XML) parsed and imported. We used tDOM since some of these XML documents can be quite large... and tDOM parses like a beauty. However, we couldn't use XPath, which would have been really cool (and easy), because of a problem with namespaces: the various content authoring tools have different interpretations of how these XML files should be created. The way we have done it instead is the safest one; it took some time, but it works really well (there is a rough sketch of the approach below). Export should supposedly be the inverse case... (so I hope 😊). We've been talking with Caroline Meeks about exporting dotLRN content to IMS Content Packages so they can later be imported into OCW. However, it seems those specifications haven't been finalized yet. We hope we can help Caroline here, since we are going to have export functionality for SCORM and IMS in any case.
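Just to give a flavor of it, here is a minimal sketch of that namespace-agnostic parsing. This is not our actual code: the proc name and the handful of elements handled are purely illustrative, and it assumes the manifest has already been extracted from the package.

    package require tdom

    # Parse the extracted manifest; tDOM copes well with large documents.
    set fd [open imsmanifest.xml]
    set doc [dom parse [read $fd]]
    close $fd

    # Walk the tree matching on local element names only, so it doesn't
    # matter whether an authoring tool wrote <item>, <ims:item> or
    # <adlcp:item>; this is how we sidestep the namespace mess.
    proc walk {node} {
        foreach child [$node childNodes] {
            if {[$child nodeType] ne "ELEMENT_NODE"} { continue }
            # strip any namespace prefix from the element name
            set name [lindex [split [$child nodeName] ":"] end]
            switch -- $name {
                organization { puts "course: [$child getAttribute identifier {}]" }
                item         { puts "item:   [$child getAttribute identifier {}]" }
                resource     { puts "file:   [$child getAttribute href {}]" }
            }
            walk $child
        }
    }

    walk [$doc documentElement]
    $doc delete

The real thing obviously inserts into the data model instead of printing, but the walking idea is the same.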
* Managing courses within OpenACS
Progress: the first part is a sound design of how and where we are going to store the learning objects and their massive amount of metadata. We are currently considering the content repository (CR) to store: content aggregations (courses), SCO/As (learning objects), assets (a cluster of files or a single file), and files. Each of them would be a content type and would work in the following way:
There would be a SCORM root folder (the SCORM Repository) where all the content aggregations live. A content aggregation would be a content type that serves as a container for SCO/As. By the same token, SCO/As are containers for assets, and assets contain files. So this would look like this:
SCORM Repository (SCORM root folder)
|
|--- [Content Aggregation]
      |
      |--- [SCO/As]
            |
            |--- [Assets]
                  |
                  |--- [files]
So in this way we have content types defined as:
- Content aggregation
- SCO/As, with an associated parent (a content aggregation)
- Assets, with an associated parent (an SCO/A)
- Files, with an associated parent (an asset)
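If we go with the CR, registering these types could look something like the sketch below. It assumes the PostgreSQL port's content_type__create_type function, and the type and table names are placeholders, not the final data model:

    # Sketch only: register two of the SCORM content types in the CR.
    # Names are placeholders; supertype and id columns may still change.
    db_exec_plsql create_ca_type {
        select content_type__create_type (
            'scorm_content_aggregation',  -- content_type
            'content_revision',           -- supertype
            'Content Aggregation',        -- pretty_name
            'Content Aggregations',       -- pretty_plural
            'scorm_cas',                  -- table_name
            'ca_id',                      -- id_column
            null                          -- name_method
        );
    }

    db_exec_plsql create_sco_type {
        select content_type__create_type (
            'scorm_sco', 'content_revision', 'SCO/A', 'SCO/As',
            'scorm_scos', 'sco_id', null
        );
    }

SCO/As would then be created with a content aggregation as their parent item, assets under SCO/As, and so on down the tree.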
All these content types share the same sc_metadata_table (the SCORM metadata tables), which contains all their metadata attributes. Files are the only physical entity (the others are containers), and we will store them on the file system somewhere under the content aggregation ID folder (/mounting point/SCORM-repository/@cont_aggre_id/folder/etc..).
Then we will have to work on the index.vuh to see how we are going to serve these files (one possible shape is sketched below), but that is a different issue that is being discussed here.
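To give an idea of the shape, an index.vuh along these lines could map the virtual URL onto the file system. Again, just a sketch: the root is the placeholder path from above, and the real version would also have to go through permission checks:

    # index.vuh (sketch): serve files stored under the content
    # aggregation's folder, e.g. /scorm-repository/42/chapter1/select.html
    set path [ad_conn path_info]
    set root "/mounting point/SCORM-repository"  ;# placeholder root from above
    set file [file join $root $path]

    # refuse requests that escape the repository root or hit nothing
    if {[string match "*..*" $path] || ![file isfile $file]} {
        ns_returnnotfound
        return
    }

    ns_returnfile 200 [ns_guesstype $file] $file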
We are still trying to figure out a couple of things that have to do with the reusability of learning objects in different courses. You see, a learning object (SCO/A) might have to be shared with other courses (content aggregations)... so an SCO/A might belong (or be linked?) to two or more different content aggregations. Example:
SCORM Repository (SCORM root folder)
|
|--- [Content Aggregation: DB design]
      |
      |--- [SCO/As]
            |--- [Chapter 1: Using the "Select" statement]
                  |
                  |--- [Assets]
                        |
                        |--- [select.html, select.jpg]
                        |--- [from.html, from.jpg, where.jpg]
SCORM Repository (SCORM root folder)
|
|--- [Content Aggregation: OpenACS for random strikers]
      |
      |--- [SCO/As]
            |--- [Chapter 1: Intro to OpenACS]
            |--- [Chapter 2: What is SQL]
            |--- [Chapter 3: Using the "Select" statement] (LINK or ?)
                  |
                  |--- [Assets]
                        |
                        |--- [select.html, select.jpg] (link?)
                        |--- [from.html, from.jpg, where.jpg] (link?)
So now we are studying how to link SCO/As to several content aggregations without causing problems and while minimizing scalability issues; one option is sketched below.
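One option we are weighing is a plain many-to-many mapping between content aggregations and SCO/As; the CR's symlinks would be the other candidate. The table and column names below are made up purely for illustration:

    # Hypothetical mapping table (not part of the CR):
    #
    #   create table scorm_ca_sco_map (
    #       ca_id   integer references cr_items (item_id),
    #       sco_id  integer references cr_items (item_id),
    #       primary key (ca_id, sco_id)
    #   );
    #
    # Linking an existing SCO/A into a second course is then one insert,
    # and nothing has to be copied:
    db_dml link_sco {
        insert into scorm_ca_sco_map (ca_id, sco_id)
        values (:ca_id, :sco_id)
    }

With something like this, the "Select" chapter above would live once in the repository and simply be mapped into both courses.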
If you want to have a look at how the courses or content aggregations are packaged, here are some good examples.
Future steps
- Finalize export functionality
- Tool for integrating SCORM courses into dotLRN classes and communities
- Course management tool
- Runtime environment (including delivering content based on the learner's preferred locale, provided the course supports it; Tils' idea)
- IMS Simple Sequencing Engine (hopefully from Polyxena)
I have to thank my Indian-Austrian fellow Tils and my Mexican/German friend Denis for their help on this. Feedback and comments are more than welcome.
Thank you,
Ernie