Forum OpenACS Development: Response to Survey - PreSpec discussion

Posted by Samir Joshi on
For the past few months, I have been involved in developing an assessment system in J2EE, targeted at educational organizations. It seems to me that a full-fledged assessment system for educational organizations differs from survey/feedback/poll functionality in the following ways:

A. Need to build a repository of questions (question bank) – possibly over a period of time, possibly in collaboration with peers from different organizations. The individual questions in the repository have attributes such as category, subject, topic, and difficulty level. Compared to a survey/poll, there will be many more question items, and the life-span of each item in the question bank will be much longer.
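A minimal sketch of such a question-bank item (the class name `QuestionItem` and its field names are illustrative assumptions, not the poster's actual data model):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class QuestionItem:
    """One entry in the question bank, carrying the attributes above."""
    text: str
    category: str          # e.g. "multiple-choice"
    subject: str           # e.g. "Physics"
    topic: str             # e.g. "Optics"
    difficulty: int        # e.g. 1 (easy) .. 5 (hard)
    author: str = ""       # peer who contributed the item
    created: date = field(default_factory=date.today)

q = QuestionItem("State the thin-lens equation.", "multiple-choice",
                 "Physics", "Optics", difficulty=3, author="instructor-a")
```

Because items live much longer than survey questions, fields like `author` and `created` matter for curating the bank over time.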

Students can use it any number of times for self-evaluation on a particular subject or a particular topic within a subject. If the question bank is of sufficient size, then the instructor need not worry about students taking the test repeatedly in order to learn all possible questions.
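A sketch of drawing a practice test from a sufficiently large bank, so that repeated attempts see different question sets (function and field names are assumptions for illustration):

```python
import random

def draw_practice_test(bank, topic, n, seed=None):
    """Select n random questions on the given topic.

    With a large enough bank, repeated self-evaluation yields a
    different set each time, so memorizing one test is pointless.
    """
    pool = [q for q in bank if q["topic"] == topic]
    if len(pool) < n:
        raise ValueError("question bank too small for this topic")
    rng = random.Random(seed)
    return rng.sample(pool, n)

# A toy bank: 50 Optics items, 50 Mechanics items.
bank = [{"id": i, "topic": "Optics" if i % 2 else "Mechanics"}
        for i in range(100)]
test = draw_practice_test(bank, "Optics", 5, seed=42)
```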

(In this area the IMS QTI (Question and Test Interoperability) standard is very helpful. Essentially, it defines an XML schema to enable export/import of questions (along with answers as well as rules for conducting assessments) between different assessment systems. This is essential if instructors from different educational organizations are to collaborate on a large question bank for a particular subject without duplicating effort – even if they use software of different makes/brands. I use the Castor RDBMS-XML toolkit to automatically map our data model to the standard QTI XML schema; in that sense, QTI compatibility is quite easy to achieve post-implementation. Still, one may consider it during the design stage to make sure that commonly used fields are addressed in the data model.)
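For a feel of what such an export looks like, here is a hand-rolled sketch of serializing questions to QTI-flavoured XML. The element names loosely follow QTI 1.x (`questestinterop`, `item`), but this is a simplification, not Castor's generated mapping – the real QTI schema is far richer, so consult the specification for the actual structure:

```python
import xml.etree.ElementTree as ET

def to_qti_like_xml(questions):
    """Serialize question dicts to a simplified, QTI-flavoured XML string.

    Only a toy subset: the real QTI 1.x item contains presentation,
    response, and scoring sections that are omitted here.
    """
    root = ET.Element("questestinterop")
    for q in questions:
        item = ET.SubElement(root, "item", ident=q["id"], title=q["title"])
        ET.SubElement(item, "mattext").text = q["text"]
    return ET.tostring(root, encoding="unicode")

xml = to_qti_like_xml([{"id": "q1", "title": "Lens",
                        "text": "Define focal length."}])
```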

B. Greater flexibility in structuring the format of an assessment. Typical assessments in educational organizations are structured as hierarchies of questions, sub-questions, and so on. Instructors may want to have control over the following characteristics:

  • Time (e.g. the whole assessment must be completed within an hour, or the test-taker may take unlimited time to answer)
  • Question selection – questions are predetermined at assessment design time or are selected at assessment execution time according to criteria such as:

    i. Difficulty level

    ii. Answers to previous questions (adaptive assessment)

  • Navigation – whether the test-taker can move backward and review previous questions
  • Scoring and evaluation – negative marks for wrong answers? Different score weights for different questions? When to declare results?

My question is: are these differences of perception only, or do they warrant a technical implementation different from survey (with some common functionality)?
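For concreteness, the scoring options listed above (per-question weights, negative marking) could be sketched as follows; the penalty factor and weights are assumed values, not part of any standard:

```python
def score_assessment(responses, penalty=0.25):
    """Compute a weighted score with optional negative marking.

    responses: list of (correct: bool, weight: float) pairs.
    A wrong answer costs penalty * weight; unanswered items could
    be modelled as weight 0 or simply omitted from the list.
    """
    total = 0.0
    for correct, weight in responses:
        total += weight if correct else -penalty * weight
    return total

# Three questions: two correct (weights 2 and 1), one wrong (weight 1).
result = score_assessment([(True, 2.0), (True, 1.0), (False, 1.0)])
```

A survey has no notion of a "correct" answer at all, which is one concrete reason the data model may need to diverge.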