- The hypothesis behind all these docs/discussion is that an Assessment package can be constructed with generic applicability -- not merely to educational applications like .LRN but also to lots of other contexts. Limiting the design to specs from only one constituency, like IMS, therefore contradicts this goal. On the other hand, if it turns out that there are incompatibilities amongst the specs, then the hypothesis will have been disproved, and multiple packages will need to be developed by different teams. I've always had concerns that this might be the case, but I'm still hoping it isn't. Either way, I don't think we've proven it one way or the other just yet.
- Maybe (probably) I haven't been adequately clear about the issue I'm addressing regarding versions. Consider this example: say that an investigator (or professor) has a survey (or test) with a question in it like this: "What is the political party of the British PM?" and the question choices (this is a multiple-choice format) are "Tory" "Liberal" "Other".
Say that this survey/test is released and 1000 responses are in the database. Because 90% of the responses are "Other", the investigator/professor decides to sharpen up the responses by adding several more choices: "Liberal Democrat" "Green" "Republican" "Democrat" while retaining the others, including "Other". From this point on, all respondents will have a semantically different question to answer since they have additional options. This means that the section containing this question is semantically different, and ultimately the entire survey/test is semantically different.
So here's the rub: is it so different that it is an entirely *new* survey/test that should be stored as a "clone with modifications", or is it a variant of the original survey/test that should be stored as a revision? This becomes important at the point of data retrieval/analysis: How should the first 1000 responses be handled vis-a-vis all subsequent ones? Very practically, does the system have to map different cr_items to "the same assessment", or is there a single cr_item for this assessment but multiple cr_revisions? What is the query that pulls all these data points? How are the "states-of-the-survey/test" tagged in that result set so that anyone analyzing the results can make sense of this?
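To make the "single cr_item, multiple cr_revisions" option concrete, here is a minimal sketch of what the retrieval query could look like. This is only an illustration: the table and column names (`items`, `revisions`, `responses`, `choices`) are hypothetical stand-ins, much simpler than the real cr_items/cr_revisions schema, and it uses SQLite just so the example is self-contained. The point is that if every response row carries the revision_id that was live when it was collected, one query can pull all data points with each row tagged by its "state-of-the-survey":

```python
import sqlite3

# Hypothetical schema, loosely modeled on the cr_items / cr_revisions split:
# one item row per assessment, one revision row per state-of-the-survey,
# and every response tagged with the revision it was answered against.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items     (item_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE revisions (revision_id INTEGER PRIMARY KEY,
                            item_id INTEGER REFERENCES items,
                            choices TEXT);
    CREATE TABLE responses (response_id INTEGER PRIMARY KEY,
                            revision_id INTEGER REFERENCES revisions,
                            answer TEXT);
""")

# One assessment item, two revisions: the original choice list and the
# sharpened one that adds "Liberal Democrat", "Green", etc.
conn.execute("INSERT INTO items VALUES (1, 'PM party survey')")
conn.execute("INSERT INTO revisions VALUES (1, 1, 'Tory|Liberal|Other')")
conn.execute("INSERT INTO revisions VALUES (2, 1, "
             "'Tory|Liberal|Liberal Democrat|Green|Republican|Democrat|Other')")

# The first wave answered under revision 1; later respondents see revision 2.
conn.executemany(
    "INSERT INTO responses (revision_id, answer) VALUES (?, ?)",
    [(1, 'Other'), (1, 'Other'), (2, 'Liberal Democrat')])

# The "pull everything" query: each data point comes back tagged with its
# revision_id, so an analyst can split or pool the waves deliberately.
rows = conn.execute("""
    SELECT i.name, r.revision_id, resp.answer
      FROM items i
      JOIN revisions r    ON r.item_id = i.item_id
      JOIN responses resp ON resp.revision_id = r.revision_id
     ORDER BY r.revision_id, resp.response_id
""").fetchall()
```

Under the "clone with modifications" option, by contrast, the analyst would have to know (or the system would have to record) that two distinct item_ids mean "the same assessment" before any such pooled query is even possible, which is exactly the mapping problem raised above.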
I hope this makes some sense. If none of these issues are relevant to the IMS spec, then we'll have to go back to the basic question of whether we need to fork this effort.