Forum OpenACS Q&A: Response to Survey Package Expansion Proposal

Posted by Stan Kaufman on
Matthew, thanks for your excellent observations over the last couple of days. Sorry for the delay in responding, but I spent a big chunk of yesterday testing the existing survey package for 4.6, and I think we all need to pitch in to get its current set of capabilities working well before we go too far toward new functionality.

Anyway, let me respond to several things. By range checking, I mean a text entry field into which the user is supposed to enter a number (not letters), where that number is a continuous variable that must fall within certain bounds -- strictly or inclusively. In clinical settings, this comes up all the time: a hematocrit, for instance, should be greater than 10 and less than 60, and shouldn't be "35abc!@". However, the value could be anything in between, so a select or radio button format that offers just a few values won't work.
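To make the idea concrete, here's a minimal sketch of such a range check in Python (the function name, parameters, and return convention are all hypothetical -- in the actual package this would presumably live in a Tcl validation proc):

```python
def validate_range(raw, lo, hi, lo_inclusive=True, hi_inclusive=True):
    """Reject non-numeric input, then enforce the configured bounds.
    Returns the parsed number, or None if the input is invalid."""
    try:
        value = float(raw)
    except ValueError:
        return None  # garbage like "35abc!@" fails here
    if value < lo or (value == lo and not lo_inclusive):
        return None
    if value > hi or (value == hi and not hi_inclusive):
        return None
    return value

# Hematocrit: strictly greater than 10, strictly less than 60
validate_range("35", 10, 60, lo_inclusive=False, hi_inclusive=False)
```

The point is that the bounds and their strictness are per-question configuration, not hard-coded.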

Similarly, by string matching I mean a text field into which the user is supposed to enter a string with a particular form, like a telephone number or social security number -- e.g., only digits in the pattern 999-999-9999. This kind of UI "filter" is well worked out in commercial apps like Access and 4th Dimension.
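A sketch of that kind of filter as a regular-expression mask (the `PHONE_MASK` name and helper are made up for illustration; the real implementation would pick whatever regex engine the platform provides):

```python
import re

# Hypothetical mask for a phone number of the form 999-999-9999
PHONE_MASK = re.compile(r"\d{3}-\d{3}-\d{4}")

def matches_mask(raw, mask=PHONE_MASK):
    """True only if the entire string fits the mask."""
    return mask.fullmatch(raw) is not None
```

Each "filtered" question would just carry its own mask pattern as part of its definition.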

Regarding breaking a form into smaller chunks -- I agree that this is also very important for the reasons you mention, notably the issue of "saving" a user's work periodically. I liked your idea of creating a survey with skip logic such that any question that branches is presented in a form by itself, while any multiple-question form would be an atomic unit the user must complete. The data model for this seems fairly straightforward, but the UI for authoring such a beast is not.
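The pagination rule itself really is straightforward; here's a rough sketch, with invented names, of how a question list could be split into pages so that every branching question stands alone:

```python
from dataclasses import dataclass

@dataclass
class Question:
    question_id: int
    text: str
    branches: bool = False  # does the answer drive skip logic?

def paginate(questions, chunk_size=5):
    """Give each branching question its own page; group the rest into
    atomic multi-question pages the user completes as a unit."""
    pages, current = [], []
    for q in questions:
        if q.branches:
            if current:
                pages.append(current)
                current = []
            pages.append([q])  # branching question alone on its page
        else:
            current.append(q)
            if len(current) == chunk_size:
                pages.append(current)
                current = []
    if current:
        pages.append(current)
    return pages
```

As I said, the hard part is the authoring UI, not this bookkeeping.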

The annotation issue is indeed just as simple as you indicate. However, the audit issue is one I didn't make clear enough. If there are n revisions to a given value, each modification needs to be stored and retrievable, along with a timestamp of when the revision was made and who made it (for clinical trial purposes anyway; maybe there would be a situation where you'd want to know how many times a student changed an answer on a test and what those changes were, but I can't quite imagine why). So therein arises the need for audit tables and triggers to insert a new row for each modification of the primary question_response table.
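In the database this would be a trigger on the question_response table; just to show the bookkeeping involved, here's the same logic sketched in Python with dicts standing in for tables (all names hypothetical):

```python
import datetime

def update_response(db, response_id, new_value, modifying_user):
    """Before overwriting a value, copy the old one into an audit table
    together with who changed it and when -- the trigger's job in SQL."""
    old_value = db["question_response"].get(response_id)
    if old_value is not None:
        db["question_response_audit"].append({
            "response_id": response_id,
            "old_value": old_value,
            "modified_by": modifying_user,
            "modified_at": datetime.datetime.now(datetime.timezone.utc),
        })
    db["question_response"][response_id] = new_value
```

With n revisions you end up with n-1 audit rows plus the current value, so the full history is always reconstructible.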

Scheduling is indeed not too hard, as you say; I wrote such stuff in my 3.2.5 questionnaire module, so it *can't* be very hard 😉 However, what is easy to do in the simple, transparent world of 3.2.5 seems very complicated and arcane in the world of 4.x, from what I've seen so far.

As to a keyboard entry UI for trained staff, I don't think that anything like a Java applet is needed. What I meant was simply something like replacing a set of radio buttons offering choices 1, 2, 3, 4 with a text field that accepts a typed number 1, 2, 3, or 4 (but not "a" and not "5").
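That's just the range-check idea restricted to an enumerated set; a tiny sketch (hypothetical helper):

```python
def validate_choice(raw, allowed=(1, 2, 3, 4)):
    """Accept a typed number only if it names one of the listed choices."""
    try:
        value = int(raw)
    except ValueError:
        return None  # rejects "a"
    return value if value in allowed else None  # rejects 5
```

So the server-side validation is trivial; the win is purely in data-entry speed for staff keying in stacks of paper forms.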

I've also implemented scoring mechanisms in my 3.2.5 module, and the data models for that are fairly simple (a table for each algorithm and a mapping table that connects a scale to an algorithm). The tricky bits are the procs that know what an algorithm means and what to do with the question responses passed to it, and the admin UIs through which you map the questions in a survey to the scale. The handling of missing values is something that goes into the definition of the algorithm. The lookup table algorithm is one that I haven't actually worked out using this platform (it was easy to come up with an elegant Java solution in a prior generation of tools, but that's a different story 😉)
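To illustrate what I mean by the missing-value handling being part of the algorithm's definition, here's a toy weighted-sum scorer in Python (names and policies invented for the example; a real algorithm table would store this configuration per scale):

```python
def score_scale(responses, weights, missing="fail"):
    """Weighted-sum scoring; the missing-value policy is part of the
    algorithm's definition, as it would be in the algorithm table."""
    known = [v for v in responses.values() if v is not None]
    total = 0.0
    for question_id, weight in weights.items():
        value = responses.get(question_id)
        if value is None:
            if missing == "fail":
                return None  # refuse to score an incomplete scale
            if missing == "zero":
                value = 0    # treat a missing answer as zero
            else:            # "mean": substitute the mean of answered items
                value = sum(known) / len(known) if known else 0
        total += weight * value
    return total
```

Two scales can share the same arithmetic yet score differently simply because one fails on missing data and the other imputes.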

As to the analogy to tax forms, I think it's a fair one, though tax forms are actually much more complicated. Still, much is the same -- the need to filter user input, implement skip logic based on user responses, dynamically populate forms based on user information as well as "library" information, etc. -- it all matches.

Caroline, "branching" and "sequencing" are not the same, as best I can understand what you mean. The former refers to a situation in which a question asks "Was your instructor understandable?" and, if the response is "yes", skips over all the questions about "why wasn't she understandable". In clinical trials, lots of questions are gender-specific, so the user should be skipped past questions that don't pertain.
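A branch rule is really just a mapping from (question, answer) to a jump target, with sequencing as the fallback; a minimal sketch, with made-up names:

```python
def next_question(current_id, answer, skip_rules, question_order):
    """skip_rules maps (question_id, answer) to the question to jump to;
    otherwise fall through to the next question in sequence."""
    target = skip_rules.get((current_id, answer))
    if target is not None:
        return target
    idx = question_order.index(current_id)
    return question_order[idx + 1] if idx + 1 < len(question_order) else None
```

So sequencing is what happens when no branch rule fires; branching overrides it.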

Several people have commented on the importance of creating intuitive UIs for admins to set this stuff up; this is critical if the target users are non-geeks -- which is the whole point IMHO.