Test processing happens in multiple stages.
  1. The system auto-grades the test as far as it can.
  2. The results of this auto-grading are displayed in the evaluation table for the admin (see test specifications).
  3. The test result is stored in the scoring system.
  4. Staff can manually human-grade the test. This is mandatory for open questions; otherwise the test cannot be completely graded.
  5. The result of the human grading overwrites the auto-grading in the scoring system.
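The overwrite semantics of steps 3–5 can be sketched as follows. This is a minimal illustration, not the actual scoring system; the class and field names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScoreRecord:
    """One stored result per (respondee, question); names are hypothetical."""
    auto_points: float
    human_points: Optional[float] = None  # set only once human grading happens

    @property
    def final_points(self) -> float:
        # Step 5: a human grade, once present, overwrites the auto grade.
        return self.human_points if self.human_points is not None else self.auto_points

record = ScoreRecord(auto_points=3.0)
assert record.final_points == 3.0
record.human_points = 5.0       # staff re-grades, e.g. an open question
assert record.final_points == 5.0
```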
Autograding differs by question type. Future versions should make it easy to add further types of information that will be autograded. All autograding follows this scheme:
  1. The answer is taken from the respondee's response.
  2. It is compared with the correct answer.
  3. A percentage value is returned.
  4. The percentage value is multiplied by the points for the question in the section (see the assessment document for more info).
  5. The result will be stored together with a link to the response for the particular question in the scoring system.
  6. Once finished with all the questions, the result for the test is computed and stored with a link to the response in the scoring system.
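The shared scheme above can be sketched as a small function. This is a simplified illustration under the assumption that each question type supplies its own comparison function; the names are not from the implementation, and persistence (steps 5–6) is only hinted at:

```python
def autograde_question(response_answer, correct_answer, points, compare):
    """Generic autograding scheme shared by all question types.

    `compare` is a type-specific function returning a fraction
    (usually in [0.0, 1.0], possibly down to -1.0 if negatives are allowed).
    """
    fraction = compare(response_answer, correct_answer)   # steps 1-3
    score = fraction * points                             # step 4
    # step 5 would additionally store `score` with a link to the response
    return score

# Example with a trivial exact-match comparator:
exact = lambda got, want: 1.0 if got == want else 0.0
assert autograde_question("42", "42", points=10, compare=exact) == 10.0
assert autograde_question("41", "42", points=10, compare=exact) == 0.0
```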
Autograding is different for each type of question.
  • Multiple Choice
    • All or nothing: the respondee gets 100% if they picked all correct answers and none of the incorrect ones; otherwise 0%.
    • Cumulative: each answer has a percentage associated with it, which can also be negative. For each option the respondee picks, they get the corresponding percentage; if negative points are allowed, the total can become negative. In any case, a respondee can never get more than 100% or less than -100%.
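Both multiple-choice modes can be sketched like this. A minimal illustration, assuming options are identified by keys and cumulative weights are given as fractions (0.6 = 60%); the function names are assumptions:

```python
def grade_all_or_nothing(chosen, correct):
    """100% iff exactly the correct options were chosen, else 0%."""
    return 1.0 if set(chosen) == set(correct) else 0.0

def grade_cumulative(chosen, weights, allow_negative=True):
    """Sum the per-option percentages, clamped to [-100%, 100%]
    (or [0%, 100%] when negatives are not allowed)."""
    total = sum(weights[o] for o in chosen)
    lower = -1.0 if allow_negative else 0.0
    return max(lower, min(1.0, total))

weights = {"a": 0.6, "b": 0.6, "c": -0.5}
assert grade_all_or_nothing({"a", "b"}, {"a", "b"}) == 1.0
assert grade_all_or_nothing({"a"}, {"a", "b"}) == 0.0
assert grade_cumulative({"a", "b"}, weights) == 1.0      # 1.2 clamped to 100%
assert grade_cumulative({"c"}, weights, allow_negative=False) == 0.0
```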
  • Matching question
    • All or nothing: the user gets 100% if all matches are correct, 0% otherwise.
    • Equally weighted: each match is worth 100/{number of matches} percent. Each correct match awards that percentage, and the end result is the sum over all correct matches.
    • Allow negative: with equally weighted matches, each correct match adds the per-match percentage (see above) to the total, and each wrong match subtracts it.
    • In all cases the total is capped at 100% and cannot fall below -100%.
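The three matching modes can be sketched in one function. A simplified illustration, assuming matches are given as left-item → right-item dictionaries; the mode names are assumptions:

```python
def grade_matching(responded, correct, mode="equal"):
    """Grade a matching question.

    `mode` is 'all_or_nothing', 'equal' (equally weighted), or
    'negative' (equally weighted with deduction for wrong matches).
    """
    per_match = 1.0 / len(correct)
    right = sum(1 for k, v in responded.items() if correct.get(k) == v)
    wrong = len(responded) - right
    if mode == "all_or_nothing":
        return 1.0 if right == len(correct) else 0.0
    total = right * per_match
    if mode == "negative":
        total -= wrong * per_match
    return max(-1.0, min(1.0, total))  # clamp to [-100%, 100%]

correct = {"cat": "meow", "dog": "woof", "cow": "moo", "hen": "cluck"}
resp = {"cat": "meow", "dog": "woof", "cow": "cluck", "hen": "moo"}
assert grade_matching(resp, correct, "equal") == 0.5          # 2 of 4 right
assert grade_matching(resp, correct, "negative") == 0.0       # 2 right, 2 wrong
assert grade_matching(resp, correct, "all_or_nothing") == 0.0
```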
  • Short answer question
    1. For each answerbox the possible answers are selected.
    2. The response is matched against each of the possible answers:
      • Equals: the percentage is only awarded if the strings match exactly (case sensitivity depends on the setting for the question).
      • Contains: if the answer contains the exact string, the percentage is granted. To award percentages for multiple words, add another answer to the answerbox (so instead of one answerbox containing "rugby soccer football", use three, one for each word).
      • Regexp: a regular expression is run against the answer; if it matches, the percentage is granted.
    3. The sum of all answerbox percentages is granted to the response. If allow negative is true, the result can even be negative.
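The short-answer steps above can be sketched as follows. A minimal illustration, assuming each answerbox candidate is a (text, kind, percentage) tuple; the names and the tuple shape are assumptions:

```python
import re

def match_answer(response, candidate, kind, case_sensitive=False):
    """Check one answerbox candidate; kind is 'equals', 'contains', or 'regexp'."""
    a, b = ((response, candidate) if case_sensitive
            else (response.lower(), candidate.lower()))
    if kind == "equals":
        return a == b
    if kind == "contains":
        return b in a
    return re.search(candidate, response) is not None  # 'regexp'

def grade_answerbox(response, candidates, allow_negative=False):
    """Sum the percentages of all matching candidates for one answerbox.

    A negative total is kept only when allow_negative is set.
    """
    total = sum(pct for text, kind, pct in candidates
                if match_answer(response, text, kind))
    return total if allow_negative else max(0.0, total)

cands = [("soccer", "contains", 0.5), ("rugby", "contains", 0.5)]
assert grade_answerbox("rugby and soccer", cands) == 1.0
assert grade_answerbox("tennis", cands) == 0.0
```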
Human grading will display all questions and answers of the response, along with the possibility to re-evaluate the points and give comments. The header will display the following information:
  • Title of the test
  • Name of the respondee
  • Number of the try / total number of tries
  • Status of the try (finished, unfinished, autograded, human graded (by whom))
  • Start and end time for this try
  • Time needed for the try
  • Total number of points for this test: Integer. Prefilled with the current value for the response.
  • Comment: richtext. Comment for the number of points given. Prefilled with the current version of the comment.
For each question the following will be displayed:
  • Question Title.
  • Maximum number of points for this question.
  • Question.
  • New Points: Integer. Prefilled with the current value for the response. This allows staff to award a different number of points for whatever reason.
  • Comment: richtext. Comment for the number of points given. Prefilled with the current version of the comment.
  • Answer. The answer depends on the question type.
    • Multiple Choice: the answer is made up of all the options, each preceded by a correct/wrong statement (for the all-or-nothing type) or a percentage (depending on the response), plus a small marker showing which options the respondee picked. Correct/wrong refers to whether the respondee answered correctly for that option (picking an option that should not have been picked marks it as wrong).
    • Matching question: the item on the left side and the picked item are displayed in a connected manner. A correct/wrong statement is added depending on whether the displayed (and responded) match is correct.
    • Open Question: the answer is displayed as written by the user, along with the correct answer. This should allow the TA to easily decide on the number of points.
    • Short Answer: for each answerbox the response is displayed along with the percentage it got and all correct answers for this answerbox (with percentages). Might be interesting to display regexps here :-).