Functional Requirements
The Assessment package needs to provide these functions:
Editing of Assessments
- Manage the structure of Assessments -- the organization of series of questions (called "Items") into Sections (defined logically in terms of branch points and literally as "Items presented together on a page"), along with all other parameters that define the nature and function of all Assessment components.
- Create, edit and delete Assessments, the highest level in the structure hierarchy. Configure Assessment attributes:
- Assessment name, description, version notes, instructions, effective dates (start, stop), deployment status (development, testing, deployed, ended), whether it can be shared or cloned, associated logo, etc.
- The composition of an Assessment: one or more Sections, or even other pre-made Assessments (see the structural sketch after these bullets).
- The criteria that determine when a given Assessment is complete, derived from completion criteria rolled up from each constituent Section.
- Navigation criteria among Sections -- including default paths, randomized paths, rule-based branching paths responding to user-submitted data, and possibly looping paths.
- Whether the Assessment metadata (structure, composition, sequencing rules etc) can be altered after data collection has begun (scored Assessments may not make any sense if changed midway through use).
- Other measured parameters of how an Assessment gets performed -- total elapsed time, time per Section, time per Item
- Configuration of state transitions of an Assessment, depending on context of its deployment. For instance:
- In education, an Assessment might be: Unbegun, Partially Begun, Submitted, Revised, Finally Submitted, Auto-graded, Final Manually Graded, Reviewed by Student, Reviewed by Student and Teacher Together
- In clinical trials, the process is complex and dependent on whether "double entry" is needed; see this FSM diagram for an illustration.
- Scheduling: number of times user can perform Assessment; whether user can revise a completed Assessment; whether a user can interrupt and resume a given Assessment
- Control of access permissions for all components of the Assessment, including editing of the Assessment itself, access to collected Assessment data, and control of scheduling procedures.
- A "clear" button to wipe all user input from an Assessment.
- A "printer-friendly" version of the Assessment so that it can be printed out for contexts in which users need to complete it on paper and then staff people transcribe the answers into the web system (yes, this actually is an important feature).
- Create, edit, clone and delete Sections -- the atomic grouping unit for Items. Configure Section attributes:
- Section names, descriptions, prompts (textual and graphical information), etc.
- The composition of Items in a Section.
- The formatting of Items in a Section -- vertical or horizontal orientation, grid patterns, column layouts, etc.
- The criteria that determine when a given Section is complete, derived from submitted data rolled up from the constituent Items.
- Item data integrity checks: rules for checking expected relationships among data submitted from two or more Items. These define which combinations of responses are consistent and acceptable (eg if Item A is "zero" then Item B must be "zero" as well).
- Navigation criteria among Items within a Section -- including default paths, randomized paths, rule-based branching paths responding to user-submitted data, and possibly looping paths (see the branching sketch after these bullets).
- Any time-based attributes (max time allowed for Section, minimum time allowed)
- A "clear" button to clear all user values in a Section.
- Create, edit, clone and delete Items -- the individual "questions" themselves. Configure Item attributes:
- Item data types: integer, numeric, text, boolean, date, or uploaded file
- Item formats: radio buttons, checkboxes, textfields, textareas, selects, file boxes.
- Item values: the label, instructions, feedback text (for use during "grading") etc displayed with the Item either during the subject's performance of the Assessment or later when the subject reviews the "graded" Assessment.
- Item designation (a "field code") to include in data reporting
- Item defaults: configure a radio button choice that will be pre-checked when the Assessment first displays, default text that will appear, a date that will be pre-set, etc.
- Item data validation checks: correct data type; range checks for integer and numeric types; regexp matching for text types (eg accept only valid phone numbers) along with optional case-sensitivity during text validation; valid file formats for uploaded files (a validation sketch follows these Item bullets). Note: the designation of "the correct answer" in the educational context of testing is a special case of data validation checks.
Note also: we need to support three-value logic regarding the existence of any single Item datum: a null value means the responder hasn't dealt with the Item; an "unknown" value means the responder has answered but doesn't know the value; an actual value (of the proper type) means the responder has found and submitted a value.
- Database-derived stock Items (eg, "country widgets", "state widgets", etc).
- Item-specific feedback: configurable text/sound/image that can be returned to user based on user response to Item.
- Any time-based attributes (max time allowed for Item, minimum time allowed).
- Support of combo-box "other" choice in multiple-choice Items (ie, if user selects a radiobutton or checkbox option of "other" then the textbox for typed entry gets read; if user doesn't select that choice, then the textbox is ignored).
- A "clear Item" button for each Item type that can't be directly edited by user.
- Create, edit, clone and delete Item Choices -- the "multiple choices" for radiobutton and checkbox type Items:
- Choice data types: integer, numeric, text, boolean
- Choice formats: horizontal, vertical, grid
- Choice values: labels, instructions, numeric/text encoded values
- Choice-specific feedback: configurable text/sound/image that can be returned to the user based on the user's response -- either while the subject is taking the Assessment or later when the subject is reviewing the "graded" Assessment.
- Create, edit, clone and delete post-submission Assessment Processing Procedures. Configure:
- Scoring Algorithms: names and arithmetic calculation formulae to operate on submitted data when the form returns to the server. These include standard "percent correct -> letter grade" grading schemes as well as formal algorithms like Likert scoring (conversion of ordinal responses to 0-100 scale scores).
- Names and descriptions of Scales -- the output of Algorithm calculations.
- Mapping of Items (and/or other Scales) used to calculate a given Scale's score.
- Define data retrieval and display alternatives: tabular display in web page tables; tab-delimited (or CSV etc) formats; graphical displays (when appropriate).
- Note: manual "grading by the teacher" is a special case of post-submission Assessment Processing in that no automated processing occurs at all; rather, an admin user (the teacher) retrieves the subject's responses and interacts with the subject's data by in effect annotating it ("This answer is wrong" "You are half right here" etc). Such annotations could be via free text or via choices configured during editing of Items and Choices (as described above).
Note that there are at least three semantically distinct concepts of scoring, each of which the Assessment package should support; they carry varying levels of importance in different contexts. Consider:
- Questions may have a "correct" answer against which a subject's response should be compared, yielding some measure of a "score" for that question varying from completely "wrong" to completely "correct". The package should allow Editors to specify the nature of the scoring continuum for the question, whether it's a percentage scale ("Your response is 62% correct") or a nominal scale ("Your response is Spot-on", "Close but No Cigar", "How did you get into this class??").
- Raw responses to questions may be arithmetically compiled into some form of Scale, which is the real output of the Assessment. This is the case in the health-related quality-of-life measures demo'd here. There is no "correct" answer as such for any subject's responses, but all responses are combined and normalized into a 0-100 scale (see the scoring sketch after these notes).
- Scoring may involve summary statistics over multiple responses (one subject's over time; many subjects' at a single time; etc). Such "scoring" output from the Assessment package pertains to either of the two notions above. This is particularly important in educational settings.
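For the second notion (compiling raw ordinal responses into a 0-100 Scale), the following is a minimal sketch using the standard shift-and-rescale transformation common to Likert scoring; the function name and argument shapes are assumptions, not the package's Scoring Algorithm format.

```python
# Illustrative only: combine the ordinal responses of the Items mapped to a
# Scale and normalize the mean to a 0-100 score.
from typing import Dict, List


def scale_score(responses: Dict[str, int],
                item_names: List[str],
                min_choice: int = 1,
                max_choice: int = 5) -> float:
    """Average the mapped Items' ordinal responses and rescale to 0-100."""
    values = [responses[name] for name in item_names if name in responses]
    if not values:
        raise ValueError("no responses for this Scale")
    raw_mean = sum(values) / len(values)
    return 100.0 * (raw_mean - min_choice) / (max_choice - min_choice)


# Usage: a "physical function" Scale mapped to three Items answered 1-5.
answers = {"walk": 4, "climb_stairs": 3, "carry_groceries": 5}
print(scale_score(answers, ["walk", "climb_stairs", "carry_groceries"]))  # 75.0
```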
- Create, edit, clone and delete Repositories of Assessments, Sections and Items. Configure:
- Whether a Repository is shareable, and how/with whom.
- Whether a Repository is cloneable, and how/with whom.
- Note: this is the concept of a "Question Catalog" taken to its logical end -- catalogs of all the organizational components in an Assessment. In essence, the Assessment package is an Assessment Catalog. (The CR is our friend here ;-)
- Versioning is a central feature of this repository; multiple "live" versions of any entity should be supported, with attributes (name, version notes, version creation dates, version author, scope -- eg subsite/group/etc) to make it possible to identify, track and select which version of any entity an Assessment editor wants to use.
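A minimal sketch of selecting among multiple "live" versions of a repository entity by the attributes listed above; the record layout is an assumption for illustration, not the Content Repository's actual schema.

```python
# Illustrative only: versioned repository entities and a simple selection rule.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class EntityVersion:
    entity_name: str
    version: int
    version_notes: str
    created: date
    author: str
    live: bool = True


def pick_version(versions: List[EntityVersion],
                 version: Optional[int] = None) -> EntityVersion:
    """Return the requested live version, or the newest live one by default."""
    live = [v for v in versions if v.live]
    if version is not None:
        return next(v for v in live if v.version == version)
    return max(live, key=lambda v: (v.created, v.version))
```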
Scheduling of Assessments
- Create, edit, clone and delete Assessment Schedules. Schedulers will define:
- Start and End Dates for an Assessment
- Number of times a Subject can perform the Assessment (1-n)
- Interval between Assessment completions if the Subject can perform it more than once
- Whether anonymous Subjects are allowed
- Text of email to Subjects to Invite, Remind and Thank them for performing the Assessment
- Text of email to Staff to Instruct, Remind and Thank them for performing the Assessment on a Subject
- Provide these additional functions:
- Support optional "electronic signatures" consisting simply of an additional password field on the form along with an "I attest this is my response" checkbox that the user completes on submission (rejected without the correct password) -- ie authentication only.
- Support optional "digital signatures" consisting of a hash of the user's submitted data, encrypted along with the user's password -- ie authentication + nonrepudiation.
- Perform daily scheduled procedures to look for Subjects and Staff who need to be Invited/Instructed or Reminded to participate.
- Incorporate procedures to send Thanks notifications upon completion of Assessment
- Provide UIs for Subjects and for Staff to show the status of the Assessments they're scheduled to perform -- eg a table that shows expected dates, actual completion dates, etc.
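For the "digital signature" option above, the following is a minimal sketch that binds the submitted data to a user secret with an HMAC, giving authentication plus a tamper check. Key handling and serialization details are assumptions; a real deployment would manage keys properly rather than reuse the raw password, and a shared-secret HMAC by itself does not give true nonrepudiation (that would require asymmetric signatures).

```python
# Illustrative only: HMAC the canonicalized submission with a per-user secret.
import hashlib
import hmac
import json


def sign_submission(submitted: dict, user_secret: str) -> str:
    """Return a hex digest binding the user's secret to the submitted data."""
    payload = json.dumps(submitted, sort_keys=True).encode("utf-8")
    return hmac.new(user_secret.encode("utf-8"), payload, hashlib.sha256).hexdigest()


def verify_submission(submitted: dict, user_secret: str, signature: str) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign_submission(submitted, user_secret), signature)


# Usage
data = {"q1": "yes", "q2": 42}
sig = sign_submission(data, "s3cret")
print(verify_submission(data, "s3cret", sig))                 # True
print(verify_submission({**data, "q2": 43}, "s3cret", sig))   # False (tampered)
```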
Analysis of Assessments
- Provide UIs to:
- Define time-based, sortable searches of Assessment data (both primary/raw data and calculated Scored data) for tabular and (if appropriate) graphical display
- Define time-based, sortable searches of Assessment data for conversion into configurable file formats for download (see the export sketch after this list)
- Define specific searches for display of data quality (incomplete assessments, audit trails of changed data values, etc)
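As a sketch of the export option above, the following flattens per-subject responses into CSV or tab-delimited text. The row structure (one row per subject, one column per Item field code) is an assumption for illustration.

```python
# Illustrative only: serialize response rows to delimited text for download.
import csv
import io
from typing import Dict, List


def export_responses(rows: List[Dict[str, str]],
                     field_codes: List[str],
                     delimiter: str = ",") -> str:
    """Write a header row of field codes followed by one row per subject."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=field_codes,
                            delimiter=delimiter, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()


# Usage: CSV and tab-delimited versions of the same two submissions.
rows = [{"subject_id": "S1", "Q1": "3", "Q2": "yes"},
        {"subject_id": "S2", "Q1": "5", "Q2": "no"}]
print(export_responses(rows, ["subject_id", "Q1", "Q2"]))
print(export_responses(rows, ["subject_id", "Q1", "Q2"], delimiter="\t"))
```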
Performance of Assessments
- Provide mechanisms to:
- Handle user Login (for non-anonymous studies)
- Determine and display correct UI for type of user (eg kiosk format for patients; keyboard-centric UI for data entry Staff)
- Deliver Section forms to user
- Perform data validation and data integrity checks on form submission, and return any errors flagged within the form
- Display confirmation page showing submitted data (if appropriate) along with "Edit this again" or "Yes, Save Data" buttons
- Display additional "electronic signature" field for password and "I certify these data" checkbox if indicated for Assessment
- Process sequence navigation rules based on submitted data and deliver the next Section or terminate the event as indicated (see the submission-flow sketch after this list)
- Track elapsed time user spends on Assessment tasks -- answering a given question, a section of questions, or the entire Assessment -- and do something with this (we're not entirely sure yet what this should be -- merely record the elapsed time for subsequent analysis, reject over-time submissions, or even forcibly refresh a laggard user's page to "grab the Assessment back")
- Insert appropriate audit records for each data submission, if indicated for Assessment
- Handle indicated email notifications at end of Assessment (to Subject, Staff, Scheduler, or Editor)
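The mechanisms above chain together on each form post: validate the submitted data, record an audit entry if configured, then resolve the next step from the navigation rules. The following is a minimal sketch of that flow; the helpers are hypothetical stand-ins for the mechanisms listed, stubbed so the sketch runs as-is.

```python
# Illustrative only: the per-submission flow (validate -> audit -> navigate).
# validate_section, write_audit_record and next_section are hypothetical stubs.
from datetime import datetime, timezone
from typing import Dict, List, Optional

AUDIT_LOG: List[dict] = []


def validate_section(section: str, submitted: dict) -> List[str]:
    """Stub: per-Item validation and cross-Item integrity checks would go here."""
    return []


def write_audit_record(entry: dict) -> None:
    """Stub: the real package would insert an audit row, if configured."""
    AUDIT_LOG.append(entry)


def next_section(current: str, order: List[str], responses: dict) -> Optional[str]:
    """Stub: default sequential path; rule-based branching would hook in here."""
    idx = order.index(current)
    return order[idx + 1] if idx + 1 < len(order) else None


def handle_submission(section: str, submitted: dict, state: Dict) -> dict:
    """Process one Section form post and return what the UI should do next."""
    errors = validate_section(section, submitted)
    if errors:
        return {"action": "redisplay", "section": section, "errors": errors}
    write_audit_record({"section": section, "data": submitted,
                        "at": datetime.now(timezone.utc).isoformat()})
    state.setdefault("responses", {}).update(submitted)
    nxt = next_section(section, state["order"], state["responses"])
    if nxt is None:
        return {"action": "finish"}          # end of Assessment: send notifications
    return {"action": "deliver", "section": nxt}


# Usage
state = {"order": ["demographics", "diet", "summary"]}
print(handle_submission("demographics", {"age": "42"}, state))    # deliver "diet"
print(handle_submission("summary", {"comments": "done"}, state))  # finish
```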