Forum .LRN Q&A: Re: Assessment items discussion thread

Posted by Stan Kaufman on
Hi Matthias, thanks for posting some ideas. A couple responses:

- The hypothesis behind all these docs and discussion is that an Assessment package can be constructed that will have generic applicability -- not merely to educational applications like .LRN but also to lots of other contexts. Therefore, limiting the design to specs from only one constituency, like IMS, contradicts this goal. On the other hand, if it turns out that there are incompatibilities among the specs, then the hypothesis will have been disproved, and multiple packages will need to be developed by different teams. I've always had concerns that this might be the case, but I'm still hoping not; in any case, I don't think we've proven it one way or the other just yet.

- Maybe (probably) I haven't been adequately clear about the issue I'm addressing regarding versions. Consider this example: say that an investigator (or professor) has a survey (or test) with a question in it like this: "What is the political party of the British PM?" and the question choices (this is a multiple-choice format) are "Tory", "Liberal", and "Other".

Say that this survey/test is released and 1000 responses are in the database. Because 90% of the responses are "Other", the investigator/professor decides to sharpen up the responses by adding several more choices: "Liberal Democrat", "Green", "Republican", "Democrat", while retaining the others, including "Other". From this point on, all respondents will have a semantically different question to answer since they have additional options. This means that the section containing this question is semantically different, and ultimately the entire survey/test is semantically different.

So here's the rub: is it so different that it is an entirely *new* survey/test that should be stored as a "clone with modifications", or is it a variant of the original survey/test that should be stored as a revision? This becomes important at the point of data retrieval/analysis: How should the first 1000 responses be handled vis-a-vis all subsequent ones? Very practically, does the system have to map different cr_items to "the same assessment", or is there a single cr_item for this assessment but multiple cr_revisions? What is the query that pulls all these data points? How are the "states-of-the-survey/test" tagged in that result set so that anyone analyzing the results can make sense of this?
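To make the single-cr_item/multiple-cr_revisions option concrete, here is roughly the kind of query I have in mind. The as_responses table and its columns are purely hypothetical placeholders -- nothing like it exists yet -- but cr_items and cr_revisions are the standard content repository tables:

  -- One cr_item per assessment; each released state of the survey/test
  -- is a cr_revision of that item. as_responses is a hypothetical table
  -- keying each response to the revision that was live when it was collected.
  select r.revision_id   as assessment_revision,
         r.title         as assessment_title,
         resp.response_id,
         resp.subject_id,
         resp.answer_value
  from   cr_items i
         join cr_revisions r    on r.item_id = i.item_id
         join as_responses resp on resp.assessment_rev_id = r.revision_id
  where  i.item_id = :assessment_item_id
  order  by r.revision_id, resp.response_id;

The assessment_revision column in the result set is what would let whoever analyzes the data tag each response with the state-of-the-survey/test it was answered against.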

I hope this makes some sense. If none of these issues are relevant to the IMS spec, then we'll have to go back to the basic question of whether we need to fork this effort.

Posted by Matthias Melcher on
Hi Stan,
I am not an IMS expert (I was only worried that no alignment with it was mentioned), but as far as I understand http://www.imsglobal.org/question/qtiv1p2/imsqti_asi_infov1p2.html#1442265 , the IMS QTI/ASI structures are oriented more towards reuse and sequencing than towards statistical comparison. Therefore, I would think that in your example the new wording of the question must be seen as a different thing in the content repository, and hence the resulting assessment will have to be regarded as a new one as well.

If all the other items in the sequence and all the other sequences in the assessment remain the same, the problem of matching old and new answers could perhaps be mitigated. But I think such complicated statistical analysis is so different from normal edu usage that it should at least be hidden from the normal edu course admin (to avoid confusing them) and placed in separate UIs. Couldn't the specs be similarly modularized as well?
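Just to sketch what I mean by keeping the two assessments linked without burdening the normal UI (all of the table and column names below are purely hypothetical, nothing that exists in any package yet): the cloned assessment could simply carry a pointer back to the assessment it was derived from, so that the rare investigator who wants the combined data set can still pull it with a separate query, while the ordinary course admin never sees any of this.

  -- Hypothetical mapping: each assessment is a cr_item, and a clone
  -- records the assessment it was derived from.
  --   as_assessments(assessment_id, derived_from_id, ...)
  --   as_responses(response_id, assessment_id, subject_id, answer_value)
  select a.assessment_id,
         a.derived_from_id,
         resp.response_id,
         resp.subject_id,
         resp.answer_value
  from   as_assessments a
         join as_responses resp on resp.assessment_id = a.assessment_id
  where  a.assessment_id = :original_assessment_id
     or  a.derived_from_id = :original_assessment_id
  order  by a.assessment_id, resp.response_id;

The derived_from_id pointer would only ever be used by such a specialized analysis UI; the normal course admin tools could ignore it completely.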