Forum OpenACS Development: Survey - PreSpec discussion

Posted by Caroline Meeks on
Sloan has identified internally the potential for many survey-like packages: simple survey for classes/communities, course evaluation, leadership feedback form, elections, etc. Add to that Berklee's needs (testing, embedded polls in online content, etc.) and basic e-learning assessment needs, and you end up with quite an array of features. Many require branching. Most require changes to permissions (who can answer, how many times, who can read answers), and many require various types of notifications (notifying the professor when a test is taken, sending the feedback form to the person being evaluated).

In my research on the Bboards I was struck by the following quote about Workflow from Lars:

A very quick outline of what I envisioned with workflow: I want to let people build packages like the ticket-tracker, like bug tracker, or other packages that involve a workflow as a significant component, to easily build that, and get a superb user interface almost for free. How's that for a goal? :)
Which I could easily use for my vision for Survey as:

A very quick outline of what I envisioned with survey: I want to let people build packages like the poll, test, feedback, simple survey, or other packages that involve asking users questions as a significant component, to easily build that, and get a superb user interface almost for free. How's that for a goal? :)
Sloan is considering how to phase and approach our enhancements to Survey, and though the spec is not yet done we want to post publicly on some high level issues.

A: Do we just hack the current survey module until it works for August then deal with more complicated issues or do we build a base package with greater functionality then implement "refinements" of it to create different functionalities?

Our timeline is that we definitely need "simple survey" by August and we need branching by mid fall.

B: If we do decide to build a solid base functionality the following technical questions come up

  1. Should we base it on Workflow?
  2. Should questions and/or answers be stored in the Content Repository?
  3. How do we manage the "Refinements" (polls, tests, course evaluations, elections) in such a way that they are easiest to build and maintain, with minimal repeated code?
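To make question 3 concrete, here is a minimal sketch (in Python, with hypothetical names; actual OpenACS packages would be Tcl and SQL) of a base survey engine that refinements like a poll or a test specialize, so shared question/response logic lives in one place:

```python
# Illustrative sketch only: a base "survey engine" that refinements
# (Poll, Test) extend, minimizing repeated code across packages.

class Survey:
    def __init__(self, name, questions):
        self.name = name
        self.questions = questions          # list of (question_id, text)
        self.responses = {}                 # user_id -> {question_id: answer}

    def record(self, user_id, answers):
        self.responses[user_id] = answers

class Poll(Survey):
    """A poll is a one-question survey with tallied results."""
    def tally(self):
        counts = {}
        qid = self.questions[0][0]
        for answers in self.responses.values():
            choice = answers[qid]
            counts[choice] = counts.get(choice, 0) + 1
        return counts

class Test(Survey):
    """A test adds an answer key and scoring on top of the base engine."""
    def __init__(self, name, questions, answer_key):
        super().__init__(name, questions)
        self.answer_key = answer_key

    def score(self, user_id):
        answers = self.responses[user_id]
        return sum(1 for qid, correct in self.answer_key.items()
                   if answers.get(qid) == correct)
```

The design choice at stake is exactly the one the question raises: the refinements stay thin because everything survey-like (questions, recording responses) belongs to the base.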
Our thanks to those who have helped us clarify the issues and come up with suggestions, and we encourage people who have been communicating with us by email to post their ideas here too.

Posted by Dave Bauer on
There is this analysis of what a survey package would look like.


I haven't read it in a while, so I am not sure how closely it will cover your needs.

Posted by David Walker on
This sounds like it might have potential for applications (employment or other) as well.

Workflow does seem like a nice base for this. The applications I work with are heavily subject to branching and require a lot of checking of the data, and from what I have seen of Workflow, it is a nice basis for that kind of system.

One of my requirements is that the system never redirect. So if you finish page 1 and hit submit, the system will perform the required checks and show either page 1 or page 2 depending on whether or not you pass the required checks.
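The no-redirect requirement above can be sketched as a single handler (hypothetical helper names, not actual OpenACS code) that validates the submitted page and, in the same response, either re-renders that page with errors or renders the next one:

```python
# Sketch of the "never redirect" idea: one handler, one response,
# same page on failure or next page on success.

def handle_submit(page_number, form_data, validators, render):
    """validators[page_number] returns a list of error messages."""
    errors = validators[page_number](form_data)
    if errors:
        # Validation failed: show the same page again with messages.
        return render(page_number, errors=errors)
    # Validation passed: show the next page in the same response.
    return render(page_number + 1, errors=[])
```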

Another issue I run into is being sent an HTML file and having to break it apart and make it into a dynamic application page, so I see the ability to easily template the system as important.
Posted by John Mileham on
At Berklee, we're putting together our requirements for surveys right now (at least for the short term).  Most of our needs aren't very exotic, but we are toying with the idea of a survey tool that can maintain state for partially filled-out surveys.  The goal is to have self-assessments with links to the course content (which would likely be implemented as JavaScript submit links).  The student could then browse the course content freely and return to the survey for completion when ready.  This functionality would be pretty much mandatory for us, as we're making an attempt at a frame- and popup-free site.
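The save/resume behavior described here could be sketched like this (an assumed storage interface, not the actual dotLRN one): partial answers keyed by (user, survey), merged on each save so the student can leave and come back.

```python
# Illustrative sketch of persisting partially completed surveys.

class PartialResponseStore:
    def __init__(self):
        self._store = {}   # (user_id, survey_id) -> {question_id: answer}

    def save(self, user_id, survey_id, answers):
        # Merge new answers into whatever was saved earlier.
        key = (user_id, survey_id)
        self._store.setdefault(key, {}).update(answers)

    def resume(self, user_id, survey_id):
        return dict(self._store.get((user_id, survey_id), {}))

    def unanswered(self, user_id, survey_id, all_question_ids):
        done = self._store.get((user_id, survey_id), {})
        return [q for q in all_question_ids if q not in done]
```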
Posted by Don Baccus on
First of all, thanks for posting this, Caroline.

Given the relatively tight schedule, the fact that dotLRN is still something of a moving target (though stabilizing rapidly), and the fact that you folks will be busy with testing, data migration, getting bugs fixed, etc., it might be wise to just make what exists today work for your August 1 deadline.  While I imagine you intend to farm out the project, you folks are going to have a very busy summer.  In my experience working with Sloan (and other clients), things go best when there's a lot of communication back and forth, and a relatively straightforward first effort just seems best, meaning communications and coordination won't be overly stressed during what I think will be a fairly stressful time.

Of course I may be wrong.  You may already have testing and data migration in the bag :)  But I do know your August 1 deadline is solid.

The first step towards the larger picture would seem to be an analysis of the current data model to see if it is flexible enough to serve as a base for further development.  I haven't looked at OF's changes yet - the basic need is to be able to attach a survey to any object in the system (much as we're talking about with "general ratings") and I know the original data model didn't support that.

As far as workflow goes ... the concept's great, but IMO the design goal has not been fully met yet.  It's important to OpenACS that we do meet that goal, but I wouldn't depend on our doing so in this timeframe.  In my experience it's harder to incorporate than I first expected.

Don't take this as a vote against using it - eventually I want to see workflow so easy to use that we use it almost everywhere.  For now, though, I'd think carefully before deciding either way.  If workflow would help in defining the flow through different forms of surveys, then using it is probably a plus.  If you think there may be some need to make that flow relatively easy to customize down the road, then workflow can definitely be a plus if it is used correctly.

Posted by Michael Feldstein on
It's absolutely critical that dotLRN be built to observe eLearning open standards if it is going to gain wider adoption. In this case, there are several standards that are relevant. For testing, you'll want to look at the IMS testing specification. This may or may not be something you can bolt on easily later; I'm not the one to judge. At the very least, you'll want to look at the way folks who have been thinking about these problems for quite a while think the data model needs to be defined.

The issue of branching is more complex. I'm doing something like the idea that John is suggesting in a course I'm developing for a client right now. It's called "adaptive testing," and the idea is that you use a pretest (or a review self-test) to determine which content the learner needs to see and which content can be skipped. There is an older standard put out by the AICC that covers this, but that standard (though still widely used) is somewhat obsolete. Think of it as analogous to SGML; it's the big, cumbersome basis upon which leaner, cleaner specs are being developed.

SCORM should cover things like this but doesn't yet. A new version is due out next week; that version should include the new IMS lightweight sequencing spec. I haven't seen this spec yet, but I'll be looking at it soon (with some help) and will report back to the board when I've done so. SCORM will be following up next year with a more comprehensive sequencing standard.

Posted by Patrick McNeill on
Michael -- At Berklee we've decided to follow IMS wherever it seems to be useful.  At the moment, we're looking at the Content Packaging spec for course content, supported with a data model we've layered on top of the content repository.  I haven't read the testing specification yet, but if it's similar, our design should be flexible enough to support importing and exporting quizzes into/from the CR.
Posted by Michael Feldstein on
That's outstanding news, Patrick. I would also recommend that you keep a close eye on SCORM, which has a lot of traction in the corporate and governmental sectors. ADL (the sponsor of SCORM) works very closely with the IMS, but they have their own thing going too.

IMS and SCORM compliance are hugely important. I don't want to get this thread too far off-track, but the topic of standards compliance merits a discussion of its own.

Posted by Don Baccus on
I wasn't even aware of such standards.  Now that I am, yes, standards compliance would seem extremely important.
Posted by Ben Adida on

Standards definitely merit a discussion of their own. I am all for standards. But I want to warn against thinking that IMS/SCORM are the panacea that people are expecting. The standards right now are fairly complex, and I have yet to see two "SCORM-compliant" systems actually talk to one another.

I'm not saying it isn't possible. I'm only saying that a project may easily fail by pursuing a vague goal simply because that goal has been labeled a "standard."

I think Berklee's approach is very good and safe. I suspect dotLRN will slowly move towards this standards compliance "where it makes sense."

Posted by Lars Pind on
As the author of Workflow, the thought of basing survey on workflow scares me. Maybe I'm just a sop, but I'd recommend against it.

First, Workflow's a long way away from where I want it to be. The notion of describing some process using Petri Nets (or some other language which translates into Petri Nets), and having stuff happen at each step along the way, is beautiful (I should think so, no?), but right now, due to a number of factors, including the fact that integration between database tables, PL/SQL, web server, and page scripts is a lot less flexible than could be desired, things aren't as neat as I'd wish they were.

Second, even if it were where I wanted it to be, it wouldn't have anything to do with branching in a survey. I'm all for focusing on differences in use cases, and going for the optimal user experience for each individual use case. The use case of processing an insurance company claim (which is the original model behind workflow) is very different from a branching survey, and I'm pretty certain that even if I'd created the ideal user experience for the insurance-claim type of workflow, it would have sucked for the survey use case.
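Lars's point that survey branching needn't be a workflow can be made concrete with a toy sketch (hypothetical structure, my own illustration): each question simply names the next question per answer, with a default, so the "flow" is a lookup table rather than a Petri Net.

```python
# Illustrative sketch: branching as a per-answer transition table.
# branch_table: {question_id: {answer: next_id, None: default_next}}

def next_question(branch_table, question_id, answer):
    rules = branch_table.get(question_id, {})
    # Fall back to the default (keyed on None) when no rule matches.
    return rules.get(answer, rules.get(None))
```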

That doesn't by any means preclude writing some common infrastructure here. All I'm saying is: obey Pind's Rule of Five, and focus on creating the ideal user experience for your particular class of use cases. And if you later find out there's code worth sharing, do some refactoring.


Posted by Ben Adida on
I lean in Lars's direction a bit, although not quite as far as Pind's Rule of Five. My biggest concern is that there's a huge risk of a second-system effect on survey. Everyone has seen the current module, and everyone has grand ideas for how it should work next. There are probably a number of goals among all of us:
- branching: a huge amount of work for such a small word.

- a generic survey engine: to provide a base for other packages
that are based on a survey-like process (quizzes, etc...).

- more answer types.

- better results processing / correlation.

- ability to link surveys to other objects in the system, make
certain actions dependent upon the completion of a survey.

(and I'm sure there's plenty more)....

There's a good reason to be conservative here. One solid path might consist of focusing on a clean architecture to start and *only then* adding features in an iterative manner. Given the current state of the survey package (horrendous), just rebuilding a clean architecture will be plenty of work in and of itself.

Posted by Michael Feldstein on

I strongly suggest that you look at the IMS Question and Test Interoperability Specification as the basis of this module. It offers the following advantages:

  • It's been thought through very carefully by people who do a lot of this sort of thing in both academic and corporate settings
  • If you use it, then any test or test bank created in any other standards-compliant system can be imported into dotLRN, and vice versa
  • The API has already been designed and (equally importantly) documented
  • Once the Sequencing specification is finalized at the end of this year, you'll get branching as well simply by updating to the latest versions of the IMS standards, which are explicitly being designed to interoperate while maintaining backwards compatibility as much as possible.

At the very least, somebody should take a look at the specification and see if there's a specific and compelling reason not to adopt it.
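For a feel of what QTI interoperability involves, here is a toy import of a QTI-style item. The element names (`item`, `mattext`, `response_label`) are loosely modeled on QTI 1.x, but the real specification is far richer; treat this purely as illustration, not as a compliant parser.

```python
# Illustrative parse of a simplified QTI-style multiple-choice item.
import xml.etree.ElementTree as ET

SAMPLE = """
<item ident="q1">
  <presentation>
    <material><mattext>What is 2 + 2?</mattext></material>
    <response_lid ident="r1">
      <render_choice>
        <response_label ident="A"><material><mattext>3</mattext></material></response_label>
        <response_label ident="B"><material><mattext>4</mattext></material></response_label>
      </render_choice>
    </response_lid>
  </presentation>
</item>
"""

def parse_item(xml_text):
    item = ET.fromstring(xml_text)
    prompt = item.find("./presentation/material/mattext").text
    choices = {label.get("ident"): label.find("./material/mattext").text
               for label in item.iter("response_label")}
    return {"ident": item.get("ident"), "prompt": prompt, "choices": choices}
```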

Posted by Peter Vessenes on
Hi all,

Caroline, if you're convinced of the need for branching (and I'm with Ben on the amount of work it will be), I think it may be worth at least speccing the data model for it now; if you don't, I'd guess it will be a significant amount of work to backport scoring from a branching system to a non-branching one.

Taking another spin on it: the simple survey module is up and running right now on ACS 4.x; you can try it out (it's in development, so be gentle). I believe this is the same module that's downloadable. If you really just need a quick start, this might do the trick, and it should integrate with dotLRN pretty easily.

So, I guess my question is: will the branching survey scores (presumably used for testing?) need to integrate with the simple survey scores that you're rolling out in August? That might be a critical question. If the answer is no, I'd roll out simple survey quickly and start working now on the branching system, paying extra attention to architectural decisions that will help you write a good admin interface; the admin interface for simple survey is, frankly, only okay. Adding branching support and expecting professors et al. to use that interface would be pretty terrible.

Thoughts from others?

Don, incidentally, I don't know where your general ratings development is at, but we have a general ratings module here which you might want to use. It includes procs and graphics to make friendly ratings displays (like 3 stars out of 5).

Posted by Andrew Grumet on
To add a little bit more to the reqs pile, I think the following are decent usability goals:

-- ability to save/resume

-- creator can split the survey into multiple pages

-- progress bar

-- at the end, see how your results compare to others (see also:
commongood demo)

These would be optional, not mandatory.
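Two of these optional goals are easy to make concrete. A sketch (hypothetical data shapes, my own illustration): a progress fraction for the progress bar, and a simple "how do my answers compare to others" summary.

```python
# Illustrative helpers for a progress bar and results comparison.

def progress(answered, total):
    """Fraction of the survey completed, for a progress bar."""
    return 0.0 if total == 0 else answered / total

def compare_to_others(my_answer, all_answers):
    """Fraction of respondents who gave the same answer as you."""
    if not all_answers:
        return 0.0
    same = sum(1 for a in all_answers if a == my_answer)
    return same / len(all_answers)
```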

Posted by Peter Vessenes on
Sorry, broken link.
Posted by Michael Feldstein on
Thanks for the links, Peter. I'll cross-post to the general-ratings
discussion to make sure that Dean (the guy who is taking the
project on) knows about your work.
Posted by David Kuczek on
Malte did branching for simple survey on ACES and ACS 3.4.5+... I don't know how hard it would be to port his code to ybos' 4.x simple survey. Sounds like a nice start, though!

Posted by Samir Joshi on
For the past few months, I have been involved in developing an assessment system in J2EE, the target users being educational organizations. It seems to me that a full-fledged assessment system for educational organizations differs from survey/feedback/poll functionality in the following ways:

A. The need to build a repository of questions (a question bank), possibly over a period of time, possibly in collaboration with peers from different organizations. The individual questions in the repository have attributes such as category, subject, topic, and difficulty level. Compared to a survey/poll, there will be many more question items, and the life span of each item in the question bank will be much longer.

Students can use it any number of times for self-evaluation on a particular subject or a particular topic within a subject. If the question bank is of sufficient size, then the instructor will not be worried about students taking the test repeatedly in order to learn all possible questions.

(In this area the IMS QTI (Question and Test Interoperability) standard is very helpful. Essentially, it defines an XML schema to enable export/import of questions (along with answers, as well as rules for conducting the assessment) between different assessment systems. This is essential if instructors from different educational organizations are to collaborate and develop a huge question bank for a particular subject without duplicating effort, even if they use software of different makes/brands. I use the Castor RDBMS-XML toolkit to automatically map our data model to the standard QTI XML schema; in that sense, QTI compatibility is quite easy to achieve post-implementation. Still, one may consider it during the design stage to make sure that commonly used fields are addressed in the data model.)
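The question-bank idea above (many items with attributes, drawn on repeatedly) could be sketched like this; field names are hypothetical, and selection draws a random subset matching the instructor's criteria so repeat takers see different questions when the bank is large enough.

```python
# Illustrative sketch: drawing questions from a bank by criteria.
import random

def select_questions(bank, count, **criteria):
    """bank: list of dicts, e.g. {"id": 1, "topic": "algebra", "difficulty": 2}.
    Returns up to `count` random questions matching all criteria."""
    matching = [q for q in bank
                if all(q.get(k) == v for k, v in criteria.items())]
    return random.sample(matching, min(count, len(matching)))
```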

B. Greater flexibility in structuring the format of the assessment. Typical assessments in educational organizations are structured as hierarchies of questions, sub-questions, and so on. Instructors may want to have control over the following characteristics:

  • Time (e.g., the whole assessment should be completed within an hour, or the test-taker may take unlimited time to answer)
  • Whether the questions asked are predetermined at assessment design time or selected at assessment execution time by matching certain criteria, such as:

    i. Difficulty level

    ii. Answers to previous questions (adaptive assessment)

  • Navigation: whether the test-taker can move backward and review previous questions
  • Scoring and evaluation: negative marks for wrong answers? Different score weights for different questions? When to declare results?

My question is: are these differences of perception only, or do they warrant a different technical implementation from survey (with some common functionality)?
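The scoring and evaluation options mentioned above (per-question weights, negative marks for wrong answers) could be sketched as follows; the field names are hypothetical, purely for illustration.

```python
# Illustrative sketch: weighted scoring with optional negative marking.

def score(questions, answers, wrong_penalty=0.0):
    """questions: list of {"id", "correct", "weight"}; answers: {id: given}.
    Unanswered questions earn nothing and lose nothing."""
    total = 0.0
    for q in questions:
        given = answers.get(q["id"])
        if given is None:
            continue
        if given == q["correct"]:
            total += q["weight"]
        else:
            total -= wrong_penalty * q["weight"]
    return total
```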

Posted by Michael Steigman on
Interesting observations on the distinction between surveys and assessments, Samir. We'd be primarily interested in using such a package for large survey projects, and our additions to the package's requirements point to a couple of other areas where differences might come into play: participant management and notification. Here's our short list:

  • A summary page for respondents to review/print answers.
  • Notifications for the survey author (for following up with participants who did not respond) and/or a nice interface for the author to track respondents.
  • Initial participant notification. It must be easy to notify large participant groups. Anyone using the package for surveys will likely have large groups of participants (who aren't likely to be users on the system) to put into the DB and notify.

Also, does anyone have experience with any of the commercial options out there? A brief competitive analysis or discussion of features might be useful.
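The follow-up-notification item above reduces to tracking who has not yet responded. A minimal sketch (hypothetical data shapes, my own illustration): given the invited participants and the recorded responses, list who still needs a reminder.

```python
# Illustrative sketch: find invited participants who haven't responded.

def needs_reminder(participants, responded):
    """participants: iterable of addresses, in invite order;
    responded: collection of addresses that have answered."""
    responded = set(responded)
    return [p for p in participants if p not in responded]
```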