Overview
Along with Data Validation, probably the most vexing problem confronting the Assessment package is how to handle conditional navigation through an Assessment guided by user input. Simple branching has already been accomplished in the "complex survey" package via hinge points defined by responses to single items. But what if branching/skipping needs to depend on combinations of user responses to multiple items? And how does this relate to management of data validation steps? If branching/skipping depends not merely on what combination of "correct" or "in range" data the user submits, but also on combinations of "incorrect" or "out of range" data, how the heck do we do this?
One basic conceptual question is whether Data Validation is a distinct process from Navigation Control or not. Initially we thought it was, and that there should be a datamodel and set of procedures for checking user input, the output of which would pipe to a separate navigation datamodel and set of procedures for determining the user's next action. This separation is made (along with quite a few other distinctions/complexities) in the IMS "simple sequencing" model diagrammed below. But to jump the gun a bit, we think that actually it makes sense to combine these two processes into a common "post-submission user input processing" step we'll refer to here as Sequencing.
At this point, we've identified several possible implementation precedents that might be useful as is or with modification:
- the workflow package mostly finished by Lars Pind
- the .LRN curriculum "Sequencing Model" outlined by Staffan Hansson but not yet implemented
- the IMS "Question & Test Interoperability" spec
- a special-purpose solution for Assessment derived from some or all of the above
Here we'll explore and evaluate these alternatives. (NB: this page is definitely under construction!)
Workflow
The applicability of the Workflow package to Survey was discussed back in mid-2002 in this thread. At that point, Lars warned against using Workflow for this purpose, mostly because Workflow at that point wasn't done, and partly because Workflow was intended as a solution for a different problem than controlling branching navigation in a survey. In the intervening time, Lars has moved Workflow ahead substantially (maybe nearly to completion? seems likely not, since Petri Nets still aren't implemented), but Workflow still appears to focus on different issues than those directly involved in Assessment. And Pind's Rule of Five implies that building a new sequencing model specific to the needs of Assessment is preferable to trying to create a "generic" sequencing solution out of whole cloth.
.LRN Sequencing Model
Staffan posted this page last updated in Jan 2003. It is one dang complex model that implements the "simple sequencing" spec from IMS. As Staffan notes, this model is called "simple sequencing" "because it includes a limited number of widely used sequencing behaviors, not because the specification itself is simple." That is a decided understatement.
In an attempt to help disimpact the concepts Staffan has developed, here is a pseudo-quasi-UML/ERD graphic portraying the entities and relationships:
Staffan reports offline that this model hasn't progressed much since this work last January, but his view is that this mechanism is more germane to a different scope than that needed by Assessment per se. That is, this mechanism is designed to control contingent navigation through a curriculum, elements of which include Assessments as well as other entities. This sequencing model thus probably is a superset of the model needed for contingent navigation through an Assessment. How much overlap there might be between the "intra-Assessment" navigational sequencing and the "inter-Assessment" navigational sequencing probably can't be determined until we build one of each, according to Pind's Rule of Five. Still, attention to this model may reveal useful datamodel insights and operations on the datamodel that we should incorporate into Assessment.
More useful discussion of the "simple sequencing" model and the "question & test interoperability" model (next section) and how they relate to Assessment is in this thread.
Question & Test Interoperability Spec
More directly relevant to our needs in Assessment may be a different part of the IMS spec: the Question and Test Interoperability spec. The current version of Assessment basically utilizes the QTI structures. The Selection and Ordering spec in particular defines a nice schema for grouping and ordering tests against which item responses can be checked to return a logical conclusion whether or not the tests have been met and thus where the user should next be directed.
This QTI spec is expressed in XML and thus uses constructs quite different from those required for the RDBMS implementation we need for an OpenACS package; it needs to be converted to an RDBMS schema much as Staffan has done for the "simple sequencing" spec for .LRN curriculum. These are the components that are expressed in this spec:
- a rule that captures some atomic test evaluating to a logical value: e.g. "age < 90"
- a grouping mechanism that (duh) groups tests together: e.g. "(R1 and R2) or (R3 or R4)"
- an ordering mechanism that (duh again) orders tests within groups
- a conjunction operator that defines how rules and groups of rules combine: "and", "or", and "not"
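To make these four components concrete, here is a minimal sketch of how rules, groups, ordering, and conjunctions could compose. This is purely illustrative pseudocode in Python, not the QTI schema itself or the eventual Assessment implementation; all names (`make_rule`, `evaluate_group`, the `op`/`members` keys) are our own invention.

```python
def make_rule(check):
    """A rule is an atomic test over the user's responses, e.g. age < 90."""
    return check

def evaluate_group(group, responses):
    """A group combines an ordered list of rules/subgroups under one conjunction."""
    op, members = group["op"], group["members"]
    # Members are evaluated in list order, which stands in for the ordering mechanism.
    results = [
        m(responses) if callable(m) else evaluate_group(m, responses)
        for m in members
    ]
    if op == "and":
        return all(results)
    if op == "or":
        return any(results)
    if op == "not":
        return not results[0]
    raise ValueError("unknown conjunction: " + op)

# The grouping example "(R1 and R2) or (R3 or R4)", with made-up checks:
r1 = make_rule(lambda r: r["age"] < 90)
r2 = make_rule(lambda r: r["consent"] == "yes")
r3 = make_rule(lambda r: r["age"] >= 90)
r4 = make_rule(lambda r: r["proxy"] == "yes")

expr = {"op": "or", "members": [
    {"op": "and", "members": [r1, r2]},
    {"op": "or", "members": [r3, r4]},
]}

print(evaluate_group(expr, {"age": 45, "consent": "yes", "proxy": "no"}))  # True
```

The point is just that the four QTI pieces reduce to a small recursive evaluation over a tree of rules, which is exactly the structure we'd need to flatten into relational tables.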
We think that this formulation makes a lot of sense (actually it's a pretty obvious one, but that makes it even more sensible). However, it's not entirely complete, we think, and it doesn't provide any explicit direction about how to reconfigure it from its hierarchical XML structure into our RDBMS datamodel.
Simplified Sequencing
OK, there are lots of complex precedents, but none makes it obvious how to proceed here. We trust in the principle of Occam's Razor ("the simplest solution is the best solution") and its modern corollary Pind's Rule of Five ("build five particular solutions before a general one"). Here's a solution that we propose might work.
First, we think that the QTI components nicely capture the essential pieces needed for both Data Validation and Navigation Control (the combination of which we're referring to as Sequencing). But though not explicitly part of the QTI schema, implicitly there is (or should be) another component:
- a destination that defines which item/section/form is next presented to the user based on the evaluation of the first four components. It appears to us that this could subsume the optional Data Validation step, in that certain rule evaluation results may produce a "no move" destination requiring the user to remain at the current item and perform some additional action (change the response or provide an additional comment/justification)
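A sketch of this fifth component: the rule-evaluation result keys into a destination map, and an unmapped result defaults to "no move," which is where the Data Validation behavior falls out. Again, the function and field names here (`next_destination`, `action`, `target`) are hypothetical.

```python
def next_destination(rule_result, destinations):
    """Map a rule-evaluation result to the next item/section/form.

    A result with no mapped destination means 'no move': the user stays
    on the current item and must correct or justify the response.
    """
    return destinations.get(rule_result, {
        "action": "no_move",
        "prompt": "please correct or justify your response",
    })

destinations = {
    True: {"action": "goto", "target": "section_3"},
    # False deliberately unmapped: out-of-range data keeps the user here.
}

print(next_destination(True, destinations))   # goto section_3
print(next_destination(False, destinations))  # no_move, with a prompt
```

Modeled this way, validation is just navigation whose destination happens to be the current item, which is the intuition behind collapsing the two into one Sequencing step.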
Next we note that there are two scopes over which Sequencing needs to be handled:
- intra-item: checks pertaining to user responses to a single item
- inter-item: checks pertaining to user responses to more than one item; checks among multiple items will be built up pairwise
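The two scopes might look like this in practice, assuming responses keyed by item id; the helper names and the smoking example are invented for illustration, and the pairwise build-up of inter-item checks follows the note above.

```python
def intra_item_ok(item_id, responses, check):
    """Intra-item scope: validate a single item's response, e.g. a range check."""
    return check(responses[item_id])

def inter_item_ok(responses, pair_checks):
    """Inter-item scope: apply pairwise checks between items; all must pass."""
    return all(check(responses[a], responses[b])
               for (a, b), check in pair_checks.items())

responses = {"age": 30, "years_smoking": 10}

# intra-item: age must be plausible
print(intra_item_ok("age", responses, lambda v: 0 <= v < 120))  # True

# inter-item: can't have smoked longer than you've been alive
pair_checks = {("age", "years_smoking"): lambda age, yrs: yrs <= age}
print(inter_item_ok(responses, pair_checks))  # True
```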
So how might we implement this in our datamodel? Consider the following subset of the overall draft Assessment datamodel:
Here is how this might work:
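As one way to picture the combined Sequencing step end to end: validate first, then branch. This is a speculative sketch under our proposed model, not the actual Assessment datamodel or procedures; every name in it (`sequence`, `checks`, `branch_rule`, `branch_targets`) is hypothetical.

```python
def sequence(responses, checks, branch_rule, branch_targets):
    """One post-submission Sequencing pass: Data Validation, then Navigation Control."""
    # Data Validation: any failed check produces a 'no move' destination,
    # keeping the user at the offending item.
    for item_id, check in checks.items():
        if not check(responses[item_id]):
            return {"action": "no_move", "item": item_id}
    # Navigation Control: evaluate the branch rule to pick a destination.
    return {"action": "goto", "target": branch_targets[branch_rule(responses)]}

result = sequence(
    {"age": 45},
    {"age": lambda v: 0 <= v < 120},          # validation checks
    lambda r: r["age"] >= 65,                 # branch rule
    {True: "geriatric_section", False: "general_section"},
)
print(result)  # {'action': 'goto', 'target': 'general_section'}
```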
So what parts of this do or don't make sense?
At this point I haven't yet tried to implement this to see how well this will or won't work empirically, but I'd love feedback about whether this makes sense or whether it stupidly overlooks a far better way to accomplish this.