Forum .LRN Q&A: Re: Assessment items discussion thread

Posted by Stan Kaufman on
Let me amplify my email remark that Malte included as "answer 3" above, since it's really more of a question than an "answer".

We need input from two directions:

1. Those people in dotLRN land who are actively implementing "testing" in real curricula: how do you understand "revisions" of such tests? What is the distinction between a modification of the "same" question and a modification that implies that you now have an entirely "new" question? Is there a distinction? Is there a definable transition point? If so, what is it? If not, are we nuts to talk about "revisions"?

2. The OpenACS gurus who best understand the CR: where in our Assessment datamodeling can we best utilize the CR? Between the two polar extremes of making *every* table inherit from cr_items/cr_revisions and *none* of them, what is the "best practice" application to our constructs? Or perhaps more usefully, what are the main conceptual principles we need to consider while we decide what to "stick in the CR" and what not to?
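To make the first extreme concrete, here is a rough sketch of what "inherit from cr_items/cr_revisions" looks like in practice, assuming the standard OpenACS conventions. The type, table and column names below are made up for illustration, not a proposal for the actual Assessment schema, and the exact signature of content_type__create_type may differ between versions:

<pre>
-- Attribute table extending cr_revisions: one row per revision.
create table as_items (
    as_item_id    integer
                  constraint as_items_id_fk
                  references cr_revisions (revision_id)
                  constraint as_items_id_pk
                  primary key,
    item_text     text,
    html_snippet  text    -- denormalized adp rendering (see Malte's post)
);

-- Register the content type with the CR (this call creates the
-- attribute table itself when it does not already exist).
select content_type__create_type(
    'as_item',             -- content_type
    'content_revision',    -- supertype
    'Assessment Item',     -- pretty_name
    'Assessment Items',    -- pretty_plural
    'as_items',            -- table_name
    'as_item_id',          -- id_column
    null                   -- name_method
);
</pre>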

Posted by Carl Robert Blesius on
Let me give your first question a try, Stan.

<blockquote>Those people in dotLRN land who are actively
implementing "testing" in real curricula: how do you
understand "revisions" of such tests?
</blockquote>

Right now, we revise a test and it affects all people that have taken that instance of the test (e.g. change a question or an answer to a question, and all people who have taken the test in the past are affected by the change).

<blockquote>What is the distinction between a modification
of the "same" question and a modification that
implies that you now have an entirely "new" question?
</blockquote>

Let me try to distinguish the two: a modification of the "same" question is the modification of a question that was/is actually used in an exam (e.g. it was discovered that the question and answer pairs were not matched up correctly, and once they are corrected the results of this instance of the exam change). A totally "new" question arises when the content changes independently of a specific instance of an exam (e.g. a change of actual knowledge: once it was believed that the primary cause of peptic ulcers was hypersecretion of acid, and now Helicobacter pylori is the peptic ulcer star, but the fact that this has changed in the books over the past couple of years does not mean that the results of an exam in 1985 change)?

<blockquote>Is there a distinction?
</blockquote>

I am not sure. 😊 I do not think the examples I used above did a good job of distinguishing the two, because both could be handled using "revisions" in the content repository (revisions of a single item).

<blockquote>Is there a definable transition point?
</blockquote>

Probably.

<blockquote>If so, what is it?
</blockquote>

The point at which we actually ask the user whether they want to create a "new" question?

<blockquote>If not, are we nuts to talk about "revisions"?
</blockquote>

Far from nuts. Keep it up; you are getting closer. 😉

So in summary: I tend to want to leave it up to the user (the test admin) to define when a "new" question should be created (with warnings). Revisions can and should be used when a question that is in use, or has been used in the past, is changed.
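To make that concrete in content repository terms: a correction to a question that has been used becomes a new revision of the same item, while a change of actual knowledge becomes a new item altogether. A rough sketch, assuming items live in the CR and using the standard PL/pgSQL calls (the exact argument lists vary between versions and overloads):

<pre>
-- Case 1: fixing the mismatched answer key is a new *revision* of the
-- *same* cr_item, so past exam instances can be re-scored against it.
select content_revision__new(
    'Peptic ulcer etiology',           -- title
    'corrected answer key',            -- description
    now(),                             -- publish_date
    'text/html',                       -- mime_type
    null,                              -- nls_language
    '...corrected question text...',   -- data
    :item_id                           -- the existing question item
);

-- Case 2: a change of actual knowledge becomes an entirely *new*
-- cr_item; the 1985 exam keeps pointing at the old item, untouched.
select content_item__new(
    'peptic-ulcer-etiology-2004',      -- name
    :parent_id                         -- folder to file it under
);
</pre>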

Posted by Malte Sussdorff on
My take on the revision of tests is as follows, and it is closely related to Carl's insight.

Each test that has been taken by respondents needs to be preserved in the state in which the user took it. This means we need to preserve the assessment settings, the section settings and the items once a test goes live.
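One way to sketch that preservation, assuming items are kept in the CR: each response records the concrete revision the respondent answered, not just the live item. All names below are made up for illustration:

<pre>
create table as_item_responses (
    response_id       integer primary key,
    session_id        integer not null,   -- the respondent's sitting
    item_revision_id  integer not null    -- the exact revision answered
                      references cr_revisions (revision_id),
    response_value    text,
    submitted_date    timestamptz default now()
);
</pre>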

When it comes down to revisions, I therefore don't think we have the luxury of differentiating. Every version that has gone live needs to be preserved, and while we are at it, we might as well just preserve everything.

It gets interesting from a user point of view, though. What happens if you change an item that is already in use in sections? My suggestion is to send an email to the owners of the sections with one-click "approve new version" / "stay at old version" functionality, as sketched below. Usually you'd approve the new version if the section is *not* in an active assessment with responses in it (we should give a warning in that case) and you agree with the content of the change (especially interesting if the owners of the sections have a different view of the world).
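Roughly, assuming the section-to-item mapping pins a concrete revision (see the mapping-table sketch at the end of this post), "approve new version" would just repoint the pin, and "stay at old version" would be a no-op:

<pre>
-- One-click "approve new version": repoint the section's pinned
-- revision; column names are illustrative, not the current schema.
update as_item_section_map
   set item_revision_id = :new_revision_id
 where section_id = :section_id
   and item_id    = :item_id;
</pre>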

But what happens if an item has been changed in a way that the author thinks leaves it the same kind of question (so that statistics can be calculated across all revisions of the question), but the other section owners disagree, stating that the change has made comparison between responses to the item futile?

Another question would be: do we need to store revisions of, e.g., item_types or all the other "supporting" tables? Here our denormalization with the adp snippet comes in handy. As we store the snippet with the item, we always store the representation of a revision of an item as it was shown to the respondent. This way we can easily reconstruct an assessment for any given point in time.
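For illustration, assuming the snippet is denormalized onto a column of the item's revision table (made-up names again, continuing the sketches above), reconstruction becomes a simple join against the revisions the respondent actually answered:

<pre>
-- Re-display a past sitting exactly as the respondent saw it.
select i.html_snippet, r.response_value
  from as_item_responses r,
       as_items i
 where r.session_id = :session_id
   and i.as_item_id = r.item_revision_id
 order by r.response_id;
</pre>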

To give more concrete answers on how to do this for sections and assessments, we'd have to look more deeply into the data model, along with the inter-item checks and such. At the very least we need to put the mapping tables (as_item_section_map and as_section_assessment_map) under revision control.
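As a starting point, a revision-aware as_item_section_map might look something like this; the columns are purely speculative, not the table's current definition:

<pre>
create table as_item_section_map (
    section_id        integer not null,
    item_id           integer not null
                      references cr_items (item_id),
    item_revision_id  integer not null    -- the pinned revision
                      references cr_revisions (revision_id),
    sort_order        integer,
    effective_date    timestamptz default now(),
    -- keeping one row per (section, item, date) preserves the history
    -- needed to reconstruct a section at any point in time
    constraint as_item_section_map_pk
        primary key (section_id, item_id, effective_date)
);
</pre>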