User Requirements

Use Cases

We can identify several types of administrative users and end-users for the General Assessment package. Here is a brief synopsis of their responsibilities in this package.

  • Package-level Administrator: assigns permissions to other users for administrative roles.

  • Editor: has permissions to create, edit, delete and organize Assessments, Sections and Items in repositories (defined as per IMS, meaning what you'd expect but further detailed in the Design document). This includes defining Item formats, configuring data validation and data integrity checks, configuring scoring mechanisms, defining sequencing/navigation parameters, etc.

    Editors could thus be teachers in schools, principal investigators or biostatisticians in clinical trials, creative designers in advertising firms, etc.

  • Scheduler: has permissions to assign, schedule or otherwise map a given Assessment or set of Assessments to a specific set of subjects, students or other data entry personnel. These actions will potentially involve interfacing with other Workflow management tools (e.g. an "Enrollment" package that would handle creation of new Parties, a.k.a. clinical trial subjects, in the database).

    Schedulers could also be teachers, curriculum designers, site coordinators in clinical trials, etc.

  • Analyst: has permissions to search, sort, review and download data collected via Assessments.

    Analysts could be teachers, principals, principal investigators, biostatisticians, auditors, etc.

  • Subject: has permissions to complete an Assessment, providing her own responses or information. This would be a Student, for instance, completing a test in an educational setting, or a Patient completing a health-related quality-of-life instrument to track her health status. Subjects need appropriate UIs depending on Item formats and the technological prowess of the Subject -- kiosk "one-question-at-a-time" formats, for example. They may or may not get immediate feedback about the data they submit.

    Subjects could be students, consumers, or patients.

  • Data Entry Staff: has permissions to create, edit and delete data for or about the "real" Subject. Needs UIs to speed the actions of this trained individual and support "save and resume" operations. Data entry procedures used by Staff must capture the identity of both the "real" Subject and the Staff person entering the data -- for audit trails and other data security and authentication functions (see the sketch after this list). Data entry staff need robust data validation and integrity checks with optional, immediate data verification steps and electronic signatures at final submission. (Many of the stringent requirements for FDA submissions center around mechanisms encountered here: to prove exactly who created any datum, when, whether it is a correct value, whether anyone has looked at it or edited it and when, etc.)

    Staff could be site coordinators in clinical trials, insurance adjustors, accountants, tax preparation staff, etc.
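
    The audit-trail requirement above boils down to capturing, for every datum, who it is about, who entered it, what changed, and when. Here is a minimal sketch of such a record in Python; the field names are purely illustrative, and the actual schema belongs in the Design document:

      from dataclasses import dataclass, field
      from datetime import datetime, timezone

      @dataclass(frozen=True)
      class AuditRecord:
          """One immutable record per data action, capturing both identities."""
          subject_id: str      # the "real" Subject the datum is about
          staff_id: str        # the Staff person who entered the datum
          item_id: str         # which Item the datum belongs to
          action: str          # "create", "edit" or "delete"
          old_value: object    # None for a create
          new_value: object    # None for a delete
          signed: bool         # True if an electronic signature accompanied submission
          recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

      # Example: a Staff person corrects a previously entered value for a Subject.
      rec = AuditRecord(subject_id="subj-1041", staff_id="staff-17",
                        item_id="weight_kg", action="edit",
                        old_value=82, new_value=84, signed=True)
      print(rec)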

Functional Requirements

The above use cases imply that the Assessment package should provide these functions:

Editing

  • Create, edit and delete Assessments, the highest level in the structure hierarchy. Editors will define:

    • Assessment names, descriptions and prompts (textual and graphical information), etc.
    • The composition of an Assessment consisting of one or more Sections, or even other pre-made Assessments
    • The criteria that determine when a given Assessment is complete, derived from completion criteria rolled up from each constituent Section
    • Navigation criteria among Sections -- including default paths, randomized paths, rule-based branching paths responding to user-submitted data, and possibly looping paths (see the sketch after this list)
    • Whether the Assessment metadata (structure, composition, sequencing rules, etc.) can be altered after data collection has begun (scored Assessments may not make any sense if changed midway through use)
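
    To make the sequencing requirements concrete, here is a minimal sketch of rule-based branching over a default path, in Python. The names (Assessment, NavigationRule, next_section) and the rule representation are illustrative assumptions, not the data model the Design document will specify:

      from dataclasses import dataclass, field
      from typing import Callable, Optional

      @dataclass
      class NavigationRule:
          """Rule-based branching: if `condition` holds on the submitted data,
          jump to `next_section`; None means terminate the Assessment."""
          from_section: str
          condition: Callable[[dict], bool]
          next_section: Optional[str]

      @dataclass
      class Assessment:
          name: str
          default_path: list                # default sequential path through Section names
          rules: list = field(default_factory=list)
          locked_after_data: bool = True    # metadata frozen once data collection starts

          def next_section(self, current: str, responses: dict) -> Optional[str]:
              """Apply branching rules first, then fall back to the default path."""
              for rule in self.rules:
                  if rule.from_section == current and rule.condition(responses):
                      return rule.next_section
              i = self.default_path.index(current)
              return self.default_path[i + 1] if i + 1 < len(self.default_path) else None

      # Example: skip the follow-up Section when the screening answer is "no".
      a = Assessment(
          name="Intake",
          default_path=["screening", "follow_up", "wrap_up"],
          rules=[NavigationRule("screening",
                                lambda r: r.get("has_symptoms") == "no",
                                "wrap_up")],
      )
      print(a.next_section("screening", {"has_symptoms": "no"}))   # -> wrap_up
      print(a.next_section("screening", {"has_symptoms": "yes"}))  # -> follow_up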

  • Create, edit, clone and delete Sections -- the atomic grouping unit for Items. Editors will define:

    • Section names, descriptions, prompts (textual and graphical information), etc.
    • The composition of Items in a Section
    • The formatting of Items in a Section -- vertical or horizontal orientation, grid patterns
    • The criteria that determine when a given Section is complete, derived from submitted data rolled up from the constituent Items
    • Item data integrity checks: rules that check for expected relationships among data submitted from two or more Items. These define which responses are consistent and acceptable (e.g. if Item A is "zero" then Item B must be "zero" as well), as sketched after this list
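
    A minimal sketch of such a cross-Item rule, in Python with illustrative names; how Editors actually author and store these rules is a Design question:

      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class IntegrityCheck:
          """A rule over responses submitted for two or more Items in a Section."""
          message: str
          predicate: Callable[[dict], bool]    # True means the responses are consistent

      def run_integrity_checks(responses, checks):
          """Return the message of every check that fails."""
          return [c.message for c in checks if not c.predicate(responses)]

      # The example from the text: if Item A is "zero" then Item B must be "zero" too.
      checks = [
          IntegrityCheck(
              message='If Item A is "zero" then Item B must be "zero" as well.',
              predicate=lambda r: r["item_a"] != 0 or r["item_b"] == 0,
          ),
      ]

      print(run_integrity_checks({"item_a": 0, "item_b": 2}, checks))   # check fails
      print(run_integrity_checks({"item_a": 0, "item_b": 0}, checks))   # -> []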

  • Create, edit, clone and delete Items -- the individual "questions" themselves. Editors will define:

    • Item data types: integer, numeric, text, boolean, date, or uploaded file
    • Item formats: radio buttons, checkboxes, textfields, textareas, selects, file boxes
    • Item data validation checks: correct data type; range checks for integer and numeric types; regexp matching for text types; valid file formats for uploaded files (see the sketch after this list)
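
    A minimal sketch of these per-Item checks, in Python; the error messages and example Items are illustrative, and file-format checking is omitted:

      import re
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class ItemValidator:
          """Per-Item validation: type, range and regexp checks as listed above."""
          data_type: str                       # "integer", "numeric" or "text"
          min_value: Optional[float] = None    # range checks for integer/numeric types
          max_value: Optional[float] = None    # range checks for integer/numeric types
          pattern: Optional[str] = None        # regexp matching for text types

          def validate(self, raw: str) -> Optional[str]:
              """Return an error message, or None if the raw response is acceptable."""
              if self.data_type in ("integer", "numeric"):
                  try:
                      value = int(raw) if self.data_type == "integer" else float(raw)
                  except ValueError:
                      return f"Expected a value of type {self.data_type}."
                  if self.min_value is not None and value < self.min_value:
                      return f"Value must be at least {self.min_value}."
                  if self.max_value is not None and value > self.max_value:
                      return f"Value must be at most {self.max_value}."
              elif self.data_type == "text" and self.pattern:
                  if not re.fullmatch(self.pattern, raw):
                      return "Response does not match the expected format."
              return None

      # Example: an age Item restricted to 0-120, and a postal-code Item checked by regexp.
      age = ItemValidator("integer", min_value=0, max_value=120)
      postcode = ItemValidator("text", pattern=r"\d{5}")
      print(age.validate("140"))       # range error
      print(postcode.validate("021"))  # format error
      print(age.validate("34"))        # None -> valid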

  • Create, edit, clone and delete Scoring Algorithms. Editors will define:

    • Names and arithmetic calculation formulae for Algorithms
    • Names and descriptions of Scales -- the entity upon which an Algorithm operates
    • Mapping of Items (and/or other Scales) to calculate a given Scale Score

    Note that there are at least three semantically distinct concepts of scoring, each of which the Assessment package should support (can anyone think of others?). Consider:

    • Questions may have a "correct" answer against which a subject's response should be compared, yielding some measure of a "score" for that question varying from completely "wrong" to completely "correct". The package should allow Editors to specify the nature of the scoring continuum for the question, whether it's a percentage scale ("Your response is 62% correct") or a nominal scale ("Your response is Spot-on", "Close but No Cigar", "How did you get into this class??")
    • Raw responses to questions may be arithmetically compiled into some form of Scale, which is the real output of the Assessment. This is the case in the health-related quality-of-life measures demo'd here. There is no "correct" answer as such for any subject's responses, but all responses are combined and normalized into a 0-100 scale (see the sketch after these notes).
    • Scoring may involve summary statistics over multiple responses (one subject's responses over time; many subjects' responses at a single time; etc.). Such "scoring" output from the Assessment package can pertain to either of the two notions above.
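
    The second notion above -- raw responses arithmetically compiled into a normalized Scale -- is the easiest to sketch. A minimal Python illustration with made-up Item names and raw range; actual Algorithms would be Editor-defined formulae:

      from dataclasses import dataclass

      @dataclass
      class Scale:
          """Maps a set of Items to a single score normalized to 0-100,
          as in the health-related quality-of-life case above."""
          name: str
          item_ids: list
          min_raw: float     # lowest possible raw sum across the mapped Items
          max_raw: float     # highest possible raw sum across the mapped Items

          def score(self, responses: dict) -> float:
              raw = sum(responses[i] for i in self.item_ids)
              return 100.0 * (raw - self.min_raw) / (self.max_raw - self.min_raw)

      # Example: three 1-5 Likert Items rolled up into one 0-100 Scale score.
      vitality = Scale("vitality", ["q1", "q2", "q3"], min_raw=3, max_raw=15)
      print(vitality.score({"q1": 4, "q2": 5, "q3": 3}))   # -> 75.0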

  • Create, edit, clone and delete Repositories of Assessments, Sections and Items. Editors will define:

    • Whether a Repository is shareable by other Editors, and which Editors
    • Whether a Repository is cloneable

Scheduling

  • Create, edit, clone and delete Assessment Schedules. Schedulers will define:

    • Start and End Dates for an Assessment
    • Number of times a Subject can perform the Assessment (1-n)
    • Interval between Assessment completions if the Subject can perform it more than once
    • Whether anonymous Subjects are allowed
    • Text of email to Subjects to Invite, Remind and Thank them for performing the Assessment
    • Text of email to Staff to Instruct, Remind and Thank them for performing the Assessment on a Subject (see the sketch after this list)
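
    A minimal sketch of such a Schedule and the "may the Subject take it now?" question, in Python; the field names are illustrative assumptions, and the Remind/Instruct/Thank templates are omitted for brevity:

      from dataclasses import dataclass
      from datetime import date, timedelta

      @dataclass
      class AssessmentSchedule:
          """Scheduler-defined parameters for one Assessment, as listed above."""
          assessment_id: str
          start_date: date
          end_date: date
          max_repeats: int               # number of times a Subject can perform it (1-n)
          repeat_interval: timedelta     # minimum gap between completions
          allow_anonymous: bool
          invite_email: str              # other email templates omitted here

          def may_perform(self, today: date, completions: list) -> bool:
              """May a Subject with this completion history take the Assessment today?"""
              if not (self.start_date <= today <= self.end_date):
                  return False
              if len(completions) >= self.max_repeats:
                  return False
              if completions and today - max(completions) < self.repeat_interval:
                  return False
              return True

      sched = AssessmentSchedule("qol-demo", date(2024, 1, 1), date(2024, 12, 31),
                                 max_repeats=4, repeat_interval=timedelta(weeks=12),
                                 allow_anonymous=False,
                                 invite_email="Please complete your quarterly assessment.")
      print(sched.may_perform(date(2024, 6, 1), [date(2024, 3, 1)]))   # True
      print(sched.may_perform(date(2024, 4, 1), [date(2024, 3, 1)]))   # False: too soon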

  • Provide these additional functions:

    • Perform daily scheduled procedures to look for Subjects and Staff who need to be Invited/Instructed or Reminded to participate (sketched after this list)
    • Incorporate procedures to send Thanks notifications upon completion of an Assessment
    • Provide UIs for Subjects and for Staff to show the status of the Assessments they're scheduled to perform -- eg a table that shows expected dates, actual completion dates, etc.
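
    A minimal sketch of the daily sweep, in Python over an in-memory participant list; the seven-day reminder threshold and the participant fields are assumptions, and in practice this would run as a scheduled procedure against the database:

      from datetime import date

      def daily_notification_sweep(today, participants, send_email):
          """Run once a day: decide for each participant whether to send an
          Invite/Instruct, Remind or Thanks email."""
          for p in participants:
              if p["completed_on"] is not None:
                  if not p["thanked"]:
                      send_email(p["email"], "thanks")
              elif not p["invited"]:
                  send_email(p["email"], "invite")
              elif (today - p["invited_on"]).days >= 7:   # assumed reminder threshold
                  send_email(p["email"], "reminder")

      # Example run with two participants and a stub mailer.
      participants = [
          {"email": "a@example.org", "invited": True, "invited_on": date(2024, 5, 1),
           "completed_on": None, "thanked": False},
          {"email": "b@example.org", "invited": True, "invited_on": date(2024, 5, 20),
           "completed_on": date(2024, 5, 22), "thanked": False},
      ]
      daily_notification_sweep(date(2024, 5, 25), participants,
                               lambda to, kind: print(f"send {kind} email to {to}"))
      # -> send reminder email to a@example.org
      #    send thanks email to b@example.org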

Analysis

  • Provide UIs to:

    • Define time-based, sortable searches of Assessment data (both primary/raw data and calculated Scored data) for tabular and (if appropriate) graphical display
    • Define time-based, sortable searches of Assessment data for conversion into configurable file formats for download (see the sketch after this list)
    • Define specific searches for display of data quality (incomplete assessments, audit trails of changed data values, etc)
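
    A minimal sketch of the time-bounded search and the download conversion, in Python over in-memory rows; the column names are illustrative and only CSV is shown as the configurable format:

      import csv
      import io
      from datetime import datetime

      def search_responses(rows, start, end, sort_key="submitted_at", descending=False):
          """Time-bounded, sortable search over collected response rows."""
          hits = [r for r in rows if start <= r["submitted_at"] <= end]
          return sorted(hits, key=lambda r: r[sort_key], reverse=descending)

      def to_csv(rows, columns):
          """Convert search results into a downloadable CSV string."""
          buf = io.StringIO()
          writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
          writer.writeheader()
          writer.writerows(rows)
          return buf.getvalue()

      rows = [
          {"subject_id": "s1", "item_id": "q1", "value": 4,
           "submitted_at": datetime(2024, 3, 2, 10, 0)},
          {"subject_id": "s2", "item_id": "q1", "value": 2,
           "submitted_at": datetime(2024, 3, 5, 9, 30)},
      ]
      hits = search_responses(rows, datetime(2024, 3, 1), datetime(2024, 3, 31),
                              sort_key="value", descending=True)
      print(to_csv(hits, ["subject_id", "item_id", "value", "submitted_at"]))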

Assessment Performance by Subjects and Staff

  • Provide mechanisms to:

    • Handle user Login (for non-anonymous studies)
    • Determine and display correct UI for type of user (eg kiosk format for patients; keyboard-centric UI for data entry Staff)
    • Deliver Section forms to user
    • Perform data validation and data integrity checks on form submission, and return any errors flagged within form
    • Display confirmation page showing submitted data (if appropriate) along with "Edit this again" or "Yes, Save Data" buttons
    • Display additional "electronic signature" field for password and "I certify these data" checkbox if indicated for Assessment
    • Process sequence navigation rules based on submitted data and deliver the next Section or terminate the event as indicated (the full pipeline is sketched after this list)
    • Insert appropriate audit records for each data submission, if indicated for Assessment
    • Handle indicated email notifications at end of Assessment (to Subject, Staff, Scheduler, or Editor)
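
    Pulling the mechanisms above together, here is a minimal sketch of the per-submission pipeline, in Python; the parameter names and return-value shapes are illustrative assumptions, and the confirmation-page step is reduced to a signature flag:

      from datetime import datetime, timezone

      def process_section_submission(responses, validators, integrity_checks,
                                     require_signature, signed, audit_log,
                                     next_section_fn):
          """One pass through the pipeline: validate, check integrity, require a
          signature if indicated, write audit records, then resolve the next Section."""
          # 1. Field-level validation (each validator returns an error string or None).
          field_errors = {item: msg for item, raw in responses.items()
                          if item in validators and (msg := validators[item](raw))}
          # 2. Cross-Item integrity checks (predicate False -> inconsistent responses).
          form_errors = [msg for predicate, msg in integrity_checks
                         if not predicate(responses)]
          if field_errors or form_errors:
              # Return the errors flagged within the form for re-display.
              return {"status": "errors", "field_errors": field_errors,
                      "form_errors": form_errors}
          # 3. Electronic signature step, if indicated for this Assessment.
          if require_signature and not signed:
              return {"status": "signature_required"}
          # 4. Audit records for each datum, then sequence navigation.
          now = datetime.now(timezone.utc)
          audit_log.extend({"item": k, "value": v, "at": now}
                           for k, v in responses.items())
          nxt = next_section_fn(responses)
          return {"status": "complete" if nxt is None else "continue",
                  "next_section": nxt}

      # Example: a single-Item Section that is also the last one in the sequence.
      audit_log = []
      result = process_section_submission(
          responses={"age": "34"},
          validators={"age": lambda raw: None if raw.isdigit() else "Age must be a number."},
          integrity_checks=[],
          require_signature=True, signed=True,
          audit_log=audit_log,
          next_section_fn=lambda r: None,    # no next Section -> terminate the event
      )
      print(result)       # {'status': 'complete', 'next_section': None}
      print(audit_log)    # one audit record for the "age" datum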