Forum .LRN Q&A: Re: Request for advice from the OCT
I think the schema for handling conditional processing within Assessment we defined here makes lots of sense and provides a relatively elegant solution within the Assessment package. I thought Malte's post of 13 June made the internal/external distinctions clear and compelling.
It's important for Assessment not to be tightly bound to other education-specific packages (or any other vertical app packages for that matter). Maybe I'm misunderstanding, but it sounds as if that's what is being suggested in the most recent posts in this thread.
The only reason I got into this argument was that I had been reading up on the IMS specs and on Ernie's statement about LORS being an API for storing content in a structured manner. I wanted to understand whether there would be a compelling reason to move as_items and the like into LORS, especially as Matthias has been suggesting this. And this should be decided *now*, before we start moving assessment to be based on the CR. To be honest, I don't see how assessment's information-storage needs (general as_item information, display information and type-specific information) could be remodelled more easily in LORS, so I'm still waiting to hear the intrinsic benefit of doing it anyway.
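For concreteness, here is a rough sketch of the three-part storage split mentioned above. All table and column names are hypothetical, chosen only to illustrate the separation of concerns; the actual assessment data model may look quite different:

```sql
-- Hypothetical sketch of the three-part split (names are illustrative,
-- not the real assessment data model).

-- 1. General item information, shared by every assessment item:
create table as_items (
    item_id       integer primary key,
    title         varchar(255),
    creation_date date
);

-- 2. Display information: how an item is rendered, kept separate
--    from what the item actually asks:
create table as_item_display (
    item_id        integer references as_items(item_id),
    html_display   varchar(50),   -- e.g. radio buttons vs. dropdown
    item_alignment varchar(20)
);

-- 3. Type-specific information, one table per item type
--    (here: a multiple-choice item):
create table as_item_type_mc (
    item_id             integer references as_items(item_id),
    num_correct_answers integer,
    allow_negative_p    char(1)   -- 't'/'f' boolean convention
);
```

The point of the sketch is that each of the three kinds of information has its own shape and lifecycle, which is why it is not obvious that flattening them into a generic content-storage API would make the model simpler.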
Last but not least, the idea of having an SS API is a good one, and I don't mind using it. But we'd have to see when and where to use it. As it is not out yet (not even specified), I think we should leave assessment out of this discussion for the time being and just keep in mind that an SS implementation might come along someday and could replace some of the code that currently resides in assessment.
<blockquote> It's important for Assessment not to be tightly bound to other
education-specific packages (or any other vertical app packages for
that matter). Maybe I'm misunderstanding, but it sounds as if that's
what is being suggested in the most recent posts in this thread.</blockquote>
Yes, I think this can be a misunderstanding.
In all the time that I've been dealing with the IMS/SCORM specs, I haven't been able to find anything that says they are tailored to tackle issues specific to the education/academia sector. If you have found such a statement, please let me know.
As a matter of fact, the biggest adopters/pushers of e-learning specifications aren't necessarily the universities but the massive e-learning software giants (Saba, SAP, Docent, Click2Learn, etc.). Moreover, in the past two or so years that I've been working on this, I haven't been able to find one academic (grad or undergrad) course that complies with the specs. As you can see from the examples I have gathered on the demo sites, most of them *are* in fact corporate training packages.
Additionally, the examples put forward as best practices, which are basically aimed at supporting the usage of the specs, have little to do with the academic and educational realms; they are heavily corporate training (see the Boeing & NETg examples for IMS SS, for instance).
In terms of assessment, I really can't see which part of what you call the 'generic' logic for assessment IMS QTI fails to fulfil. If you have an example that is not covered by QTI, please do mention it and we can see how we might go about addressing it. But once you take a closer look, you'll see that it covers basically most of what you could do with an assessment. Whether the assessment is directed at engineers, psychologists, environmentalists or financial brokers, what IMS QTI addresses is the underlying logic for putting together an assessment and its metrics, not the angle or the industry it is targeted at.
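To illustrate what "industry-neutral" means here, this is roughly what a single multiple-choice item looks like in the QTI 1.2 XML binding (a minimal hand-written sketch, not taken from any real package; identifiers and text are made up). Note that nothing in the markup cares whether the question is for engineers or brokers; it only encodes the question, the choices, and the scoring logic:

```xml
<questestinterop>
  <item ident="q1" title="Sample multiple-choice item">
    <presentation>
      <material>
        <mattext>Which city is the capital of France?</mattext>
      </material>
      <!-- A single-choice response; rcardinality could be
           "Multiple" for check-all-that-apply items -->
      <response_lid ident="RESP" rcardinality="Single">
        <render_choice>
          <response_label ident="A">
            <material><mattext>Paris</mattext></material>
          </response_label>
          <response_label ident="B">
            <material><mattext>Lyon</mattext></material>
          </response_label>
        </render_choice>
      </response_lid>
    </presentation>
    <!-- Response processing: the generic scoring logic -->
    <resprocessing>
      <outcomes>
        <decvar varname="SCORE" vartype="Integer" defaultval="0"/>
      </outcomes>
      <respcondition>
        <conditionvar>
          <varequal respident="RESP">A</varequal>
        </conditionvar>
        <setvar varname="SCORE" action="Set">1</setvar>
      </respcondition>
    </resprocessing>
  </item>
</questestinterop>
```

The presentation, response and response-processing sections are the "underlying logic" referred to above: they are the same whatever the subject matter of the question.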
By no means do I want every other specification-compliant package in .LRN to revolve around LORSm. That would be idiotic. All I'm saying is that these packages should each take care of what they are best at and share information with the others when they have to. Additionally, it is perfectly fine for packages to implement their own functionality if they think that's best. However, if there is functionality that can be reused, why not take advantage of it? I'm trying to push for leaving doors open for future integration with other packages.
For instance, I have no doubt that in the future IMS QTI will use IMS Simple Sequencing for delivering questions (see here: http://www.imsglobal.org/question/qtiv1p2/imsqti_asi_bestv1p2.html#1495764). So while we are investing resources in it, we should take this into account. Thinking ahead is another good software engineering practice.
Stan, I do understand your concern, but as you get more acquainted with these specs you will see that they don't focus on one particular industry; they are meant to be specifications for interoperability.
<blockquote> I think we should leave assessment out of this
discussion for the time being and just keep in mind that sometime an SS
implementation might come around and could replace some code that is residing in assessment.</blockquote>
Malte, that is a good idea. Although I would urge you to think about how your assessment implementation deals with sequencing at the moment and how it needs to be designed so that in the future, when we have an IMS SS engine, it can easily be adapted to use sequences that come from other sources.
Let's try to make some clear statements:
- Support for IMS QTI was never up for debate. If you had the idea that we are not following IMS QTI with the assessment specs, then this is a misunderstanding. Your claim that IMS QTI supports all the use cases specified in the Functional Specifications is doubtful in my eyes, but that does not matter in the least, as assessment supports the QTI specifications.
- The main concern was not that the IMS specifications are not reusable in other sectors as well (we use .LRN more outside universities than inside...). What scares me is the cry for supporting packages that are not even there yet.
- In a previous posting I made clear how assessment should be developed and how it shall interact in the long term with packages that are yet to be developed. If there is a flaw in this, please suggest how to circumvent it, given budget and time constraints.
- Simple Sequencing (the reason this whole thread started) is not out at the moment and might not be for some time. Once an API is out, it sounds like a good idea to evaluate it and then implement parts of it in assessment (if someone is willing to put money into this).
- I'm not concerned about assessment relying on other packages, if these packages are reliable and out :).
- Question: Why does everyone say assessment has to follow the SS model, but no one talks about the other way round? The SS package could just as well make use of the functionality written within assessment and add other SS-related things on top of it. After all, the sequencing specs (technical) for assessment are already out in the open, so anyone who has an interest in SS, wants to see standards supported and wants to prevent money being invested in sequencing twice: PLEASE take a look at the specs and give feedback. Or, even better, write up a specification for SS that is more readable than the IMS specifications and tailored to OpenACS realities.
- If I understood Matthias correctly, he was suggesting that assessment use the LORS for questions, sections and assessments. I don't see the value in doing this at the moment, but maybe I just don't understand the CP specification and its goals.