Forum OpenACS Q&A: Survey Package Expansion Proposal

Posted by Matthew Geddert on
Well, here it is, the proposal I said I would write. It took a bit longer than anticipated, and I did not have time to get feedback from other users before posting it here, but I think it will be a decent starting point for adding grading to Survey. Although I think integrating with SCORM and an LMS is a good idea, I do not think we should focus on that aspect of Survey before implementing an analysis mechanism.

Proposal for Survey Expansion

This document is intended to foster community discussion about the best ways to expand the survey package to make it more fully meet the needs of the community, as expressed in this thread from two weeks ago. Throughout this document I will use the following terms:

    Admin(s) = Survey Administrator
    User(s) = The End User (i.e. the person taking the survey)
The current data model and its shortcomings

The current 4.x based survey module provides a simple and effective way for Admins to collect information from Users and for Users to revise their submissions. It permits Admins to read individual responses submitted by Users and analyze them, which is likely the most effective way to analyze unique answers (i.e. text fields, text areas and file attachments). However, this system leaves much to be desired when it comes to multiple choice answers as well as dates. Although it permits downloading information (through CSV export) to programs such as Microsoft Excel, which allows the Admin to perform a more sophisticated mathematical analysis of the material, this is not very Admin or User friendly. The methods of data analysis are insufficient for large sites where the information collected could not possibly be reviewed by the Admins (due to the sheer volume), or when one wants to provide immediate and useful feedback to Users about their submissions. We need to improve the Survey package in the following ways:

  • There needs to be a way for Admins to analyze multiple choice and date survey data without pulling it off the site and using an external program.
  • There needs to be a way for Users to receive an immediate customized response from the system based on information presented.
  • There needs to be a way for submissions to be available to other packages (if this is desired by the site owner).
Sophisticated Analysis of Surveys

In order to think about useful ways to analyze submitted survey responses, I would like to ask you to consider the following possible uses for the survey module. Although I am sure there are other types of surveys people may want to conduct, I think you will see that they can be captured by a comprehensive "test grading system":

  1. Graded Surveys in Online Classroom Use (i.e. DotLRN sites)
    In the online classroom, the Admin (i.e. teacher) would administer tests to Users (i.e. students). If the test were a written essay, or a number of short answers to questions, there is currently no feasible way to automatically analyze responses (i.e. give a grade), and the teacher would be forced to read all the responses. (One helpful addition here would be a grading_points_default column - integer, default null - holding the default number of points given for a question, which the grading_points field values in survey_question_choices would override. This would be a way of saying, in an essay-response test, that the teacher wants to give 10 points per question and will assume students get the full point value unless he/she specifies otherwise.) Although each response must be read by the teacher, there are "tools" the survey package could provide to make that job easier. There should be a way for the teacher (i.e. Admin) to provide feedback after having read the responses. This feedback could take two forms: a simple "grade" (i.e. a point value, which could hopefully be summed automatically across all short answers to give the complete grade for that assignment), and comments about the User's answers. For example, in response to a student's short essay on how all leaders with facial hair are horrible people (Stalin, Hussein, etc.), the teacher might want to comment and tell the student to consider leaders such as Gandhi or Martin Luther King. In talking to some professors at the school I work for, they also expressed interest in having "predefined" answers for questions, so that they can have a template with 4 or 5 different standard responses which can be customized.

    If the test administered by the teacher is a multiple choice test, we should provide a way to automatically grade it by cross-referencing the User's responses with a key. This could be as simple as: the answer to question 1 is multiple choice value 5, so if the User entered the "correct" value return a 1, otherwise return a 0, then add all the scores up to give a total (see the SQL sketch after this list). Although this would provide a "grade", it would be nice to also provide an explanation for incorrect answers. Thus, if the test is automatically graded, it should not only say you missed 7 of 50 questions, but also tell you which questions you missed and give you an explanation of the correct answer. One thing that has little to do with grading but more to do with the design of a test, and which would be useful for professors, is the ability to write out a short essay for display and then follow it with a section of multiple choice questions; although the grading wouldn't change, this has an impact on the way information is displayed. Likewise, it would be nice to assign subcategories for grades (i.e. questions 1-15 are part of your "section 1 of the textbook" grade, whereas questions 16-30 are part of your "section 2 of the textbook" grade). Thus a single test (i.e. survey) could provide independent scores for various sections of a test.

    Another service teachers would need is the ability to translate point values into grades. For example, in the United States it is common practice to give an "A" to people with 92% of the total possible points or higher, an "A-" for 90-92%, a "B+" for 88-90%, a "B" for 82-88%, etc. In Germany, on the other hand, grades are given in the form of numbers 1-6 (with 6 being the lowest grade). We should have a way of taking the compiled score (which would be a total number of points) and translating it into either a percentage or a grade defined by that percentage or the total number of points.

    Then there are tools for the teacher. A teacher may want to have an overview of the entire test. What he/she may care about is "what are the point breakdowns for all students" or "which questions were most frequently missed by students" (this way the teacher could discern whether a certain question was possibly written poorly - if, for example, almost everybody missed a particular question). The best and most visually appealing way to present this information is through bar graphs (which are relatively easy to program); one may want to use a pie chart to show grade breakdowns, but I do not know how we could auto-generate this in code. If you have a suggestion, please let me know.

    Finally, if a teacher were administering an online class, he/she and/or his/her teaching assistants may want to receive notification when a test has been completed. So we should add notification for survey completion. There already appears to be functionality to notify yourself, but it is not working in PostgreSQL, and it would be good to be able to specify other people as well.

  2. Health Tests (similar to Personality Tests)
    Many websites offer their Users the ability to take health tests. Stan Kaufman (another OpenACS developer) has written an excellent example of health tests on OpenACS 3.x, demos of which are available at http://www.cvoutcomes.org. In these surveys one asks a number of questions and then comes up with an estimate of risk for certain health concerns. Each question is part of a subsection of the primary survey. So, for example, one may ask 10 questions about a person's diet, 10 questions about a person's physical activities, 10 questions about a family history of disease, etc. At the end of the survey, it would then respond to these sections and say "it looks like you need to eat a more balanced diet", "you seem to be above average in terms of physical activity, so keep it up", and "since your family has a history of concerns with this disease you should be extra cautious and be sure to inform your doctor that they should check for any irregularities". One could also envision a scenario in which these three responses would be given along with a summation, something like "since you are below average in 2 of the 3 categories you are at risk for ________ disease".

    Luckily these customized responses and categorizations are equivalent to the "graded surveys" translation from points to grades. I have suggestions below for data model modifications to accommodate these systems.
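
(As promised above, here is a rough, untested SQL sketch of the automatic grading idea. It assumes the grading_points column on survey_question_choices that I propose in the data model section below; :response_id is a bind variable for the submission being graded:)

   -- total a participant's score by joining each submitted choice to
   -- its point value; choices without points assigned count as zero
   select sum(coalesce(sqc.grading_points, 0)) as total_score
     from survey_question_responses sqr,
          survey_question_choices sqc
    where sqr.choice_id = sqc.choice_id
      and sqr.response_id = :response_id;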

Feedback for Users

The Admin should have the ability to determine the amount of information returned to the User, and at which point in time this information is returned. In purely multiple choice tests, it would be nice to automatically grade a test and return the incorrect responses with an explanation of why each response was incorrect. In written tests we would want to take the submission and then, once the teacher has graded the student's work, automatically notify the student, either by sending them the teacher's responses with grades or by sending them an email message encouraging them to come back to the site and look at the grades as well as the teacher's comments. In health surveys it would be nice to automatically respond with statements based on scores in subcategories. If one took a personality test (similar to a health survey), one could return statements based on scores in subcategories, but one might also want to respond with a graph that says the average person who took the test on this website had "x" scores in these categories, or something similar. None of this would be new in terms of functionality, but it does mean that we need a selective way of determining how responses are made.

Integration with other packages

Since I am new to OpenACS I am not intimately familiar with the various ways in which packages can be integrated with one another. However, I am certain that the Survey package should not be dependent on other packages like Curriculum or similar. Thus something like a service contract that can be activated and deactivated (possibly through the use of parameters) seems like the way to go. I would like suggestions on the technical work required to implement this.

Data Model Changes

I have thought long and hard about the data model changes needed, and think I have come up with a simple, non-invasive and elegant solution. Please let me know what you think.

In order to facilitate grades we should add the following tables and some columns to preexisting tables.

d surveys
                     Table "surveys"
       Column       |          Type           | Modifiers
--------------------+-------------------------+-----------
 survey_id          | integer                 | not null
 name               | character varying(4000) | not null
 description        | text                    | not null
 description_html_p | boolean                 | not null
 enabled_p          | boolean                 | not null
 single_response_p  | boolean                 | not null
 editable_p         | boolean                 | not null
 single_section_p   | boolean                 | not null
 type               | character varying(20)   |
 display_type       | character varying(20)   |
 package_id         | integer                 | not null

ADD
 randomize_questions| boolean                 | default 'f'
  (this lets you randomize the display order of questions, bypassing the
   survey_questions sort order and the groupings by section. This makes
   it possible in health surveys to ask 100 questions with each section
   dispersed throughout the survey, which helps people not to
   subconsciously 'cheat' the test by knowing how questions relate to
   one another - e.g. knowing that question 73 actually relates to 74
   and thus drawing different conclusions about the implied meaning than
   the question objectively asks for. This is much more difficult to do
   if the questions "jump" around in relation to the various sections)



===============



d survey_sections
       Column       |          Type           | Modifiers
--------------------+-------------------------+-----------
 section_id         | integer                 | not null
 survey_id          | integer                 | not null
 name               | character varying(4000) | not null
 description        | text                    | not null
 description_html_p | boolean                 |

ADD
 
 graded_section     | boolean                 | default 'f'
  (if this is 't' the default value in survey_questions is 't',
   otherwise it is 'f')

 point_translation  | boolean                 | default 'f'
  (if this is 't' then the system will not just create a sum of points,
   but will convert that sum into a comment - like an "A" or "B+" or
   "you need to watch your diet more carefully")


=============

Create table survey_point_translation
   survey_id               -integer -references surveys
   overarching_translation -boolean -default 'f'
      (this says whether this is to be the comment that summarizes the
       sub-section comments)
   section_id              -integer -references survey_sections
   total_min               -integer
      (the minimum number of points needed for a certain comment)
   total_max               -integer
      (the maximum number of points permitted for a certain comment)
   comment                 -varchar(4000)

Then to get a comment we will reference the view
survey_grading_section_totals or survey_grading_total_score, depending
on whether or not it is the "overarching_translation", and do a

   select spt.comment
     from survey_point_translation spt,
          survey_grading_section_totals sgst
    where sgst.total_score >= spt.total_min
      and sgst.total_score <= spt.total_max
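
(For illustration only: with integer point totals, the US letter-grade scale from the classroom example above could be stored as non-overlapping rows like these, treating the totals as percentages and with :survey_id supplied by the admin pages:)

   insert into survey_point_translation
       (survey_id, overarching_translation, section_id,
        total_min, total_max, comment)
   values (:survey_id, 't', null, 92, 100, 'A');

   insert into survey_point_translation
       (survey_id, overarching_translation, section_id,
        total_min, total_max, comment)
   values (:survey_id, 't', null, 90, 91, 'A-');

   insert into survey_point_translation
       (survey_id, overarching_translation, section_id,
        total_min, total_max, comment)
   values (:survey_id, 't', null, 88, 89, 'B+');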



=============


d survey_questions
         Column         |         Type          | Modifiers
------------------------+-----------------------+-----------
 question_id            | integer               | not null
 section_id             | integer               |
 sort_order             | integer               | not null
 question_text          | text                  | not null
 abstract_data_type     | character varying(30) |
 required_p             | boolean               |
 active_p               | boolean               |
 presentation_type      | character varying(20) | not null
 presentation_options   | character varying(50) |
 presentation_alignment | character varying(15) |

ADD

 grading_points_p       | boolean               | default is set by survey_sections
      (this tells the system whether or not this is one of the questions
       to be graded - if it is, it will wait for input from the teacher
       on text and varchar responses, but automatically fill in values
       from the grading_points field in survey_question_choices)

 grading_comments_p     | boolean               |
      (this tells the system whether or not comments are permitted
       - it would then look in the newly created survey_grading_comments
       table and find the comment to use - or give the Admin a
       selection)

 grading_comments_one_response_p | boolean      |
      (this says whether or not there are multiple pre-defined comments
       which can be selected by the teacher)


=============

create table survey_grading_comments
     comment_id          -integer (sequence)
     question_id         -integer (references survey_questions)
     comment_sort_order  -integer
     comment_text        -text

  (This table holds the comments on survey questions for graded surveys.
   Whether there is one possible response (i.e. in a multiple choice
   test, if somebody gets a question wrong there is a standard response
   giving justification for the answer) is defined by the
   grading_comments_one_response_p field in the table survey_questions.
   If the teacher is grading a short essay test, he/she may want to have
   5 standard responses... these would be defined here - and something
   like a radio button list would let them choose a standard response,
   define a new standard response, or give a response specific to that
   user/question combo)




=============

d survey_question_choices
    Column     |          Type          | Modifiers
---------------+------------------------+-----------
 choice_id     | integer                | not null
 question_id   | integer                | not null
 label         | character varying(500) | not null
 numeric_value | numeric                |
 sort_order    | integer             

ADD

 grading_points| integer                | default null
     (this would be the number of points a certain selection
      automatically gets - i.e. if there is only one right answer for a
      question, that one would get a "1" and the rest would get "0")
 

==============

d survey_question_responses
      Column       |           Type           | Modifiers
-------------------+--------------------------+-----------
 response_id       | integer                  | not null
 question_id       | integer                  | not null
 choice_id         | integer                  |
 boolean_answer    | boolean                  |
 clob_answer       | text                     |
 number_answer     | numeric                  |
 varchar_answer    | text                     |
 date_answer       | timestamptz              |
 attachment_answer | integer                  |

ADD

 grading_points    | integer                  | default null
      (this gets filled in from the survey_question_choices
       grading_points field, but on text and varchar responses it can be
       manually entered by the admin - i.e. when grading an essay
       response to a test, etc.)

 grading_comment_id      | integer            | references survey_grading_comments, default null
      (this gets filled with pre-defined admin comments - i.e. when
       responding to a short essay - from the survey)

 grading_custom_comment  | text               | default null
      (if the admin wrote a custom comment applicable only to this one
       user's response, the comment goes here)


Since survey_sections already exists in the data model, all we need to do is activate this feature (at least in PostgreSQL it is not currently possible to create multiple sections for a survey). So categories seem to be taken care of. We will need to create a number of views like the following in order to create graphs, reports and custom responses to surveys:
Create view survey_grading_section_totals
   SUM of "grading_points" from survey_question_responses per user per
   question_id, where question_id is part of a survey_section.
   (this would be a pretty complicated view - I hope it can be done)
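
(A first, untested stab at this view, assuming the grading_points column proposed above for survey_question_responses, and grouping by response_id - i.e. by one participant's submission:)

   create view survey_grading_section_totals as
   select sqr.response_id,
          sq.section_id,
          sum(sqr.grading_points) as total_score
     from survey_question_responses sqr,
          survey_questions sq
    where sqr.question_id = sq.question_id
    group by sqr.response_id, sq.section_id;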

=============

Create view survey_grading_total_score
   SUM of "grading_points" for all subsections of a survey; the info
   will be pulled off of the survey_grading_section_totals view
   mentioned above.
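
(Again an untested sketch, built on the view sketched above:)

   create view survey_grading_total_score as
   select sgst.response_id,
          sum(sgst.total_score) as total_score
     from survey_grading_section_totals sgst
    group by sgst.response_id;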
 
============

Sophisticated Survey Templates

Given the various ways in which Survey can be used, it would be a good idea to have a number of new templates to deal with this new functionality, and to make it easier to integrate Survey into each developer's applications. Something like edit-this-page's templates, each of which can be individually selected based on the use of a particular survey, is what I had been thinking about.

Posted by Matthew Geddert on
I forgot to add another three views that would be needed to create graphs:
Create view survey_statistics_by_question
    select count(*) of entries grouped by question from
      survey_question_responses, as well as the SUM of the values as
      grouped by question

Create view survey_statistics_by_section
    select count(*) of entries grouped by section from the view
      survey_grading_section_totals, as well as the sum of all
      the values per section

Create view survey_statistics
    select count(*) of entries grouped by survey from the view
      survey_grading_total_score, as well as the sum of all the
      values per survey
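
(As an untested sketch, the first of these might look like the following, again assuming the proposed grading_points column:)

   create view survey_statistics_by_question as
   select sqr.question_id,
          count(*) as n_answers,
          sum(sqr.grading_points) as total_points
     from survey_question_responses sqr
    group by sqr.question_id;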

From these views it should be incredibly easy to create meaningful simple graphs. If you want to graph something more sophisticated, you would need to run queries against the existing tables...

Posted by Dave Bauer on
Regarding templates for surveys:

My plan is to fix the question rendering code to be fully templated using the form builder or ad_form.

Then create form templates that can be selected by the admin. Currently there are 3 display styles. Those three will be rewritten as templates and more can be easily added.

I'll take a look at the data model changes soon.

Matthew: did you look at simple-survey? It has unused code and tables for scored surveys which might be of assistance.

The supporting code was all commented out, and it wasn't needed for Sloan, so we did not add it to Survey.

Posted by Stan Kaufman on
Matthew, congratulations on an excellent start! I'd like to add some comments directed at the Big Picture, not at specifics of data modeling or implementation.

In addition to the two applications (graded tests for education; health surveys), there is a third large application (to which I'm directing much of my effort currently) -- generic data capture tools for clinical trials or other domains where the "survey" is intended to guide users through potentially complex data entry steps and generate "clean" data.

I'm not sure whether this third application is qualitatively different or just quantitatively different from the first. While there are "correct" answers in the education setting, in clinical trials, data needs range checking (for numeric data) or string matching (for textual data); this is rather different than simply designating a specific response to be the correct one for the question.

In addition, in clinical trials, a given "case report form" (CRF) may be composed of one or more atomic surveys that may be reused in other CRFs. This isn't exactly the same as defining different "sections" but rather amounts to creating another containing object into which to add individual surveys.

Also, in clinical trials it is essential that the user be able to annotate every response to every question if they choose. Furthermore, any change to the response to any question needs to be saved in a user id/timestamped audit record.

Another common situation is the need to add one or more responses to a question like this: "Enter the following for all hospitalizations since some date: date of admission, primary diagnosis, discharge hematocrit, etc." I.e., instead of a question with a single response to be made, what is required is a (usually) dated collection of data, needing a UI that allows the user to enter one or more collections of these data. This is similar to entering a line item in an invoice and needs similar UIs and data modeling.

Next, "skip logic" is a key requirement that probably educational applications don't face (well, traditional ones anyway; all "adaptive testing" like the current SATs etc use it). This is complex since typically the entire questionnaire is already presented to the user, so something like Javascript would be needed to navigate the user through the survey. Otherwise the survey would have to be presented to the user one question at a time. However you implement it, there must be provision for determining which question should be delivered in response to any given user response.

There are probably other important requirements that aren't coming to mind immediately, but you get the drift. The question is: given the differences in these three applications, does it make sense to try to meet all the requirements in a single package, or instead create three (er, two, since I agree that the first two function very similarly)? To add a bunch of machinery and columns for range checking and auditing would complicate the educational app, while failing to have that in the survey package would make it useless for the more complex applications.

Another set of requirements that probably applies to all these settings is robust scheduling mechanisms, such that an admin can specify how frequently (just once, twice, n times) and on what interval (daily, monthly, yearly, etc) a user can respond to the survey. Also, there should be optional mechanisms that email invitations/assignments to the user, reminders when they haven't done the survey as expected, thanks when they have, and copies to the admin/teacher/researcher for each of these events.

There also are often two types of users -- "true" responders who are entering their own data/responses, and surrogate responders who are transcribing data into the system from paper forms (yes, this is far, far more common than you'd think). For the latter, more telegraphic, keyboard entry UIs are far preferable, while for the former, point&click UIs work better.

Finally, there are numerous scoring algorithms that should be accommodated; there certainly are in the health survey setting. There needs to be a way to map one or more questions to a given scale and to set how the responses to those questions are to be calculated into the scale's score. This might involve simple addition or multiplication, or may involve normalization into a 0-100 range (Likert scales). Some scales are calculated from other scales (means). Other scales are simply table look-ups (each response is mapped to an arbitrary value that needs to be specified, and a scale score involves adding/multiplying/finding the mean/etc. of all these values) -- the SF12 is like this, eg.
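
(The arithmetic itself is trivial; e.g., a 0-100 normalization could be something like this untested sketch against Matthew's proposed survey_grading_section_totals view, where :scale_min and :scale_max are hypothetical per-scale parameters holding the raw score's possible extremes:)

   -- map a raw section score onto a 0-100 scale
   select 100.0 * (sgst.total_score - :scale_min)
                / (:scale_max - :scale_min) as normalized_score
     from survey_grading_section_totals sgst;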

Also complicating health surveys (and presumably other non-health surveys that get scored) is how to handle missing responses. If all questions must be answered, then there's no problem -- you reject the response until it's complete. But lots of surveys permit users to omit responses; then you impute values (eg the mean of the responses the user *did* provide), but you also specify a minimum number of responses that must be provided for a given score, or else the scale is not scored.

Anyway, these are some additional requirements to be considered as we expand the survey package.

Posted by Dave Bauer on
Stan and Matthew,

This is a great start! Thanks for all the effort.

Stan,

Regarding the line-item entry, this can be modeled with a section that can be answered multiple times. The missing part is the logic that determines how many times it can be answered and when you are done.

There is a general idea that we want branched surveys that offer additional sections based on the answer to one or more questions.

It does look like we might need additional tools to provide validation rules for input data. I think we can use ad_page_contract and form validation filters, many of which are already written. The hard part is presenting a usable UI.

Luke and I discussed much of this when we did the initial design. It will still take me a few days to digest all of this and examine the data model.

Thanks for all the ideas, keep them coming.

Posted by Stan Kaufman on
It occurs to me that the ecommerce package may have a solution for the line-item requirement, too. I actually haven't ever used it, but I'll have a look.
Posted by Matthew Geddert on
Dave and Stan, thanks for your responses. This is great information.

Dave, the hard part about using ad_page_contract and validation filters is that they would need to be entirely dynamically generated for each survey, subsection and subsurvey, and possibly dynamic given what Stan has suggested. I.e., if they answered yes to question 13 they also need to have answered question 15 with a value greater than "x"... I agree that this will be tricky (though by all means possible). The trick will be getting a data model that isn't completely bloated... I don't think the unused code from simple-survey will be useful with the additions Stan has suggested... this goes way beyond simple-survey's grading... though you obviously didn't know any of this before Stan made his comments, so at the time you made the suggestion it was a good one. I'm glad to hear that you have already thought about templating, and what you intend to do seems good to me.

The rest is in response to Stan's concerns.

I am not certain whether your third application is qualitatively different or not, and even if it is, I think this should be part of Survey and not a separate package. At least in my view, it would be nice to have a survey package that can be as simple or as complex as you want it at the click of a button. I think the site owner should be able to hide much of the complexity from survey admins (i.e. health administrators, teachers, etc.) through simple parameters - that way, if all you want to do is ask "what time do you go to bed?" or "how many children do you have?" you can shield the plethora of options that could be available, so as not to confuse people who only need a "simple" tool. But in the end it seems to me that the things you talk about could certainly have applications in education.

I am not certain what you are asking for in terms of "range checking" and "string matching"; could you please elaborate on this (I could take a guess but don't want to clutter this bboard with useless info)? I would like to note that I am not saying that only one answer would have a certain value... if there are 5 possible responses you could assign 3 points to answer A, 5 points to answer B, 1 point to answer C, etc. It isn't just going to be 1 point for the correct one and 0 for the others.

As far as including separate CRF forms based on defined responses goes, I think that the data model I proposed could handle this with few modifications. All that would be needed is that once a section is completed (and a section could in fact be just one question), we could route the survey through information contained in survey_point_translation, and have a boolean field in that table - say "include_subsurvey" - that, when activated, routes them to that subsurvey and then, once it is finished, returns them to the primary survey... there would be a little bit of complexity here in terms of routing, but I am certain it wouldn't be too hard. A bit more work would be needed if we wanted to relate many subsurveys to a special question (like your "Enter the following for all hospitalizations since some date: date of admission, primary diagnosis, discharge hematocrit, etc" example). A simpler example, if I understand you correctly, would be if you asked "how many children do you have" and the user entered 3; the system would then automatically come back with 3 separate surveys with questions about those kids... skip logic could be done in JavaScript and would essentially be a different way of using sub-surveys - though I would prefer for it to be done in HTML. You wouldn't need a new page per question, just a new page each time you want to use skip logic... overall I think this would be cleaner. If we intend to use "skip logic" on a long survey, it would be better to have multiple pages anyway instead of one incredibly long form; that way, each time they continue, the information they have previously entered is stored in the database, and they don't submit a 300-question form, accidentally press a wrong button, and lose all the information they had painstakingly entered.

Annotating every response just means adding another field to survey_question_responses that is text and titled user_comment - it could also be the same as grading_custom_comment, unless we want the admins to be able to annotate a response in a way that is hidden from the user, alongside the user being able to annotate their own responses. The user ID is already entered into that table; all we would need to do is add a timestamp - also just a simple column addition. The audit record is already in place, as has been mentioned (though it would be nice, when editing a survey response, not to re-enter all the info as a new response, but only those values that were changed - again, that isn't too hard to do).
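
(Concretely, something like the following untested sketch; user_comment comes from the discussion above, and submission_date is just a hypothetical name for the timestamp column:)

   alter table survey_question_responses add user_comment text;
   alter table survey_question_responses add submission_date timestamptz;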

Scheduling shouldn't be hard. The various thank-you and reminder email messages are certainly useful for the scenarios I had talked about, so I certainly think they should be added. I haven't worked with acs-mail or whatever other package does this, but I am under the impression that many of the mechanisms needed to do this are already in place. In terms of UIs, I don't know how we could get a "telegraphic" keyboard entry UI other than the normal press-tab-to-move-forward and enter-to-go-to-the-next-page. We can start them out on the entry field, much like when you go to google.com it starts you out in the search box, and we should be able to eliminate useless links between entry fields (which would be hit if you press tab), but I don't know how else to simplify this short of running a Java application within the browser (which is a can of worms I don't want to open).

I will have to think about how scoring algorithms can be accommodated in the data model, and am certainly open to suggestions. Do you think this would be limited to addition, subtraction, multiplication and division, or would more complex forms of math be needed (okay, not complex, but I mean anything beyond 3rd grade math, like derivatives, tangents, arcsine, etc.)? Comparing to a scale table should be at least close to taken care of by my proposed survey_point_translation table... what we might want to do is abstract scoring into a separate entity, and have a table choose to relate to another scoring table although it could have its own - i.e. these tables would not be bound directly to a survey. The best way I can think of to deal with missing responses is to relate a percentage score to survey_point_translation, because that would allow certain questions to have been omitted...

That's it for tonight.

Posted by Matthew Geddert on
Another thing:

I don't know if this will help others, but since I don't understand the specifics of the health industry Stan is talking about (and I am guessing many others here don't either), I thought of an example that may be useful for others to think about, and which I think is pretty much what Stan is talking about (am I right, Stan?). Basically, as I understand it, Stan has suggested adding the ability to dynamically enter the entirety of a 1040 tax form (the US individual income tax form) online (with all its schedules, tables per schedule, multiplications, etc.)... in thinking about these additions it helps me to translate this complexity to the tax form I fill out each year, because I am familiar with it - and it may help you as well.

Posted by Dave Bauer on
Service contracts would be limited to interfacing with other openacs packages, so it would not affect the internal workings of survey itself.

I will attempt to write up a summary of the way the branching system would work for multiple sections. If it is designed correctly, it should handle most cases. I think, again, that the most complex part will be creating a usable UI for multiple-section surveys with logic attached to the questions.

Posted by Caroline Meeks on
Feedback based on IMS Question and Test Interoperability Glossary of Terms

I'm also doing my homework and carefully reading the IMS standards to see what we might learn from them for this problem. Here is my first pass. I think a little care in choosing standard terminology now will likely pay off for us when we move towards full IMS compliance. Plus I think it will be useful for all of us to have a glossary to refer to.

  1. Users and Students should be changed to Participant
    this will be helpful, as in OACS terms both Admins and Participants are users.

  2. Grade should be changed to Score
    It makes more sense for something like a personality test to be "scored" than "graded".

  3. Can Point Translation be done in a simpler way using "Scoring Formula" and "Weighted Scoring"?

  4. Survey_grading_total_score: is this the same as the IMS Raw Score, or does it use the concept of Cut Scores? These need to be addressed separately.

  5. grading_comments looks like it should be "Feedback".
    This will help keep it from being confused with our "General Comments" package. On the other hand, it might be worth considering using general comments to manage item and assessment feedback rather than creating a new table.

  6. grading_points seems to serve the same function as the IMS answer_key, but an answer_key seems like it would be its own table. We should think over the two approaches. Even if we keep it in the same table, answer_key might be a better name for this column.

  7. Branching should be changed to Sequencing
    This makes sense when you consider that sometimes the same survey section might be presented multiple times. Use Case: Evaluating each instructor in a multi-instructor class.
Posted by Matthew Geddert on
Caroline, I agree with your naming convention changes. I have changed my original document to reflect the changes mentioned in points 1 and 2. As for point 3 about point_translation, I don't think it would be easier to use both "Scoring Formula" and "Weighted Scoring", because the method I proposed takes care of both of these options and, at least in my opinion, is a much simpler solution for the data model - I am already planning on adding those two grading models you mentioned to the user interface.

Point 4: survey_grading_total_score is an IMS raw score. The concept of "cut scores" is addressed by the survey_point_translation table... if I understand the IMS document correctly, cut scores basically means translating a raw point total into something non-numeric, and this is what survey_point_translation does. It can accommodate as many levels of cuts as you want (well, it is limited by the number of possible raw score integers in the survey).

Point 5: I am fine with changing the name "grading_comments" to "scoring_feedback"... I would be inclined not to use the general comments package, although comments will be made on the various acs_objects created by the survey package:

survey
survey_section
survey_question (we wouldn't need to use this for feedback)
survey_response (this would work with general comments)
They won't be "general comments" on the first two of these objects, because these comments will only pertain to one participant and thus must be bound not only to an object (as all general comments are) but also to a single participant (which general comments are not). I need to change the data model I proposed a bit to take this into account; it should be ready and posted on the website mentioned at the end of this post by noon PST on Oct 24th (thanks for pointing this out). My father, who is a professor, has said he would consider it very important to have a "canned response" system, where the prof can pre-define a number of answers and easily select the one he/she wants... instead of having to type a unique entry (this is where the survey_grading_comments table - which will be renamed the survey_scoring_feedback table - comes in, and where general comments would certainly not work).

Point 6: I think grading_points (which I have now renamed scoring_points) does in fact replace the IMS Answer Key, but as I said before it is more versatile - and it keeps the data model as simple as possible (which having a separate table for one type of scoring and a different table for another type would not do). I do not think it should be called "answer_key", because I am hoping that Survey is generic enough to work with things like personality tests or the like (in which case scoring_points seems like a much better column name; "answer_key" would, at least to me, be confusing if there were no "right answer", as in a non-academic scored survey where the various multiple choice answers provide differing points, none of which is "better" than another).

Point 7: I have no preference between calling it branching or sequencing - this is something Dave suggested, so he would have to comment on it.

I have posted the page I wrote up with the naming convention changes I have adopted at a website I still have from when I was in school... I don't know when they will disable my account, but for now it will work: http://www.ocf.berkeley.edu/~geddert/openacs/survey/

Posted by Stan Kaufman on
Closing what looks like an open ordered list tag from a couple posts ago...
Posted by Stan Kaufman on
Matthew, thx for your excellent observations over the last couple of days. Sorry for the delay in responding, but I spent a big chunk of yesterday testing the existing survey package for 4.6, and I think we all need to pitch in to help get its current set of capabilities working well before we go too far toward new functionality.

Anyway, let me respond to several things. By range checking, I mean a text entry field into which the user is supposed to enter a number (not letters), where this number is a continuous variable that should be less/greater than, or less/greater than or equal to, some values. In clinical settings this comes up all the time: a hematocrit, for instance, should be greater than 10 and less than 60 and shouldn't be "35abc!@". However, the value could be anything in between, so a select or radio button format that includes just a few values won't work.

Similarly, by string matching I mean a text field into which the user is supposed to enter a string of a particular form, like a telephone number or social security number - i.e. only numbers of the form 999-999-9999, e.g. This kind of UI "filter" is well worked out in commercial apps like Access and 4th Dimension.
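
(If it helps make this concrete, in PostgreSQL terms the two kinds of filters amount to something like this untested sketch - example_crf_data is just a made-up table, and in practice the checking would live in the form layer rather than the database:)

   create table example_crf_data (
       -- range check: a continuous value that must fall within limits
       hematocrit numeric
           check (hematocrit > 10 and hematocrit < 60),
       -- string match: a string that must fit a fixed pattern
       phone      varchar(12)
           check (phone ~ '^[0-9]{3}-[0-9]{3}-[0-9]{4}$')
   );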

Regarding breaking a form into smaller chunks -- I agree that this is also very important for the reasons you mention, notably the issue of "saving" a user's work periodically. I like your idea of creating a survey with skip logic such that any question that branches is presented in a form by itself, and any multiple-question form would be an atomic unit that the user needs to complete. The data model for this seems fairly straightforward, but the UI for authoring such a beast is not.

The annotation issue is indeed just as simple as you indicate. However, the audit issue is one I didn't make clear enough. If there are n revisions to a given value, each modification needs to be stored and retrievable, along with a timestamp of when the revision was made and who made it (for clinical trial purposes anyway; maybe there would be a situation where you'd want to know how many times a student changed an answer on a test and what those changes were, but I can't quite imagine why). So therein arises the need for audit tables and triggers that insert a new row for each modification of the primary question_response table.
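
(A bare-bones, untested sketch of what I mean, against the existing survey_question_responses table; the audit table and function names are made up, and the modifying user would have to be recorded by the application:)

   create table survey_question_responses_audit (
       response_id     integer,
       question_id     integer,
       old_clob_answer text,
       modified_date   timestamptz default now()
   );

   -- copy the prior value into the audit table on every update
   create function survey_qr_audit_f () returns opaque as '
   begin
       insert into survey_question_responses_audit
           (response_id, question_id, old_clob_answer)
       values (old.response_id, old.question_id, old.clob_answer);
       return new;
   end;' language 'plpgsql';

   create trigger survey_qr_audit_tr
   before update on survey_question_responses
   for each row execute procedure survey_qr_audit_f();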

Scheduling is indeed not too hard as you say; I wrote such stuff in my 3.2.5 questionnaire module so it *can't* be very hard 😉 However, what is easy to do in the simple, transparent world of 3.2.5 seems very complicated and arcane in the world of 4.x, from what I've seen so far.

As to a keyboard entry UI for trained staff, I don't think that anything like a Java applet is needed. What I meant was simply something like replacing a set of radio buttons with choices 1,2,3,4 with a text field that accepts a typed number 1,2,3 or 4 (but not "a" and not "5").

I've also implemented scoring mechanisms in my 3.2.5 module, and the data models for that are fairly simple (a table for each algorithm and a mapping table that connects a scale to an algorithm). The tricky bits are the procs that know what an algorithm means and what to do with question responses passed to it, and the admin UIs through which you map the questions in a survey to the scale. The handling of missing values is something that goes into the definition of the algorithm. The lookup table algorithm is one that I haven't actually worked out using this platform (it was easy to come up with an elegant Java solution in a prior generation of tools, but that's a different story 😉)

As to the analogy to tax forms, I think it is a fair one, though tax forms are actually much more complicated. However, much is the same -- the need to filter user input, implement skip logic based on user responses, dynamically populate forms based on user information as well as "library" information, etc. all match.

Caroline, "branching" and "sequencing" are not the same, as best I can understand what you mean. The former refers to a situation in which a question asks "Was your instructor understandable?" and, if the response is "yes", skips over all the questions about "why wasn't she understandable". In clinical trials, lots of questions are gender-specific, so the user should be jumped over questions that don't pertain.

Several people have commented on the importance of creating intuitive UIs for admins to set this stuff up; this is critical if the target users are non-geeks -- which is the whole point IMHO.

Posted by defunct defunct on
Yes, well said, Stan.

Maybe we should be making sure the existing package works properly before any more new-survey stuff bounces around!

Counting the length of this thread, my estimator's head tells me that the effort spent in chat could have *easily* got the survey package up to scratch if it had been spent on bug-fixing and improvement.

As my mum used to say, clean up the toys you've been playing with before you get any more out!

Posted by Matthew Geddert on
I have added two new sections to my website on survey expansion titled:
Feedback Within Survey (Technical) 
   and
Binding Surveys to Objects (Technical) 
Please take a look when you get the chance.

Simon, I agree in part that some of the bugs need to be worked out in regular Survey before talking about new-survey... my rationale for talking about it first was to determine how much of the current data model could be kept intact, and how much had to go... (and it looks, at least from what I have proposed, like much of it can stay).

Dave, are you the one to ask about getting CVS access to make updates to Survey? Or should I do my own development and send it to you as a tarball when it is done, at which point you would decide what to do with it? My main concern is "bug fixes", as Simon says, and the risk of me working on the same thing you are working on. How would I present bug fixes to you? (BTW, I will be developing only on PostgreSQL initially - in the long term I might be able to port to Oracle as well, but at least initially that wouldn't be the case.)

Posted by Stan Kaufman on
Good question, Matthew. There are several of us who are able and willing to help with bug fixes here, but it doesn't make sense for each of us to fix the same bugs. Dave, do you have things in hand, or do you want to partition out the bugs to others?
Posted by Dave Bauer on
Stan,

I think I have covered all the existing bugs. If there is anything open in the SDM, feel free to submit a patch. I will review them and apply them if necessary.

Thanks for all the help with this.

I will prepare some comments on the proposals this weekend.

Posted by Stan Kaufman on
Great! I'll get the new update from cvs and retest tomorrow morning.
Posted by Peter Marklund on
Matthew, Stan, Caroline et al, I just wanted to say that it's wonderful to see such an intense and productive discussion for one of the more important apps in OpenACS!

Simon, you were right in saying that if we want OpenACS to be more than a hacker's toolkit we need the kind of QA that you are driving, and we are all very grateful for that effort. However, let's not forget that to take the toolkit to the next level in terms of usability, functionality, and standards compliance we also need discussions such as this one. Coding is great, but a good requirements analysis can be just as valuable, and this is where we need to invite programmers as well as non-programmers.

Posted by defunct defunct on
Peter,

If I might skim over your 'slightly' patronising tone for a moment.

I do not need to be reminded about how to take this toolkit to the next level. I am merely reminding (insisting) that we put as much effort into clearing up previous messes before we move too far ahead with generating new ones.

You'll find the QA effort dry up pretty f***in quickly if this isn't the case.

No one's saying don't discuss, but I'm well aware that playing with new stuff is more fun than fixing the old, and hence I have to play nagging-old-bastard. So no more chastisement please, I just don't take kindly to it... :o)

(and can I also point out that I have many non-tech volunteers doing excellent work on testing)

Posted by Peter Marklund on
Simon, I understand your point better now, and it's a good one. It was not my intention to be patronizing; please excuse me if I was! I think we understand each other now and are in agreement. Let's not start an argument unnecessarily.
Posted by Stan Kaufman on
Dave, I'm at a loss to explain this, but every bug I reported on the acceptance test server http://213.107.207.131:8000/accept/report-view?accept_package_report_id=2811 is still there. I dropped the database, did a complete, clean re-checkout from cvs at openacs.org to my local cvsroot, reinstalled oacs, and I still find the same problems. I don't understand why you and I are finding such different results. Since this is an entirely fresh install from the canonical cvs branch, this can't be a problem with my use of cvs (I presume).

Did you in fact commit to the 4.6 branch of cvs? Did you perhaps instead commit to HEAD? Any other ideas??

I'm away for the next week, so I can't help think this through for a while. I'll check back in after 11/3 and see how things look then.

Posted by defunct defunct on
Dave,

If we can get to the bottom of this asap I would be really grateful.
Stan's done a lot of work on this, and I could really use his efforts
elsewhere as well.

It certainly sounds like there may be some CVS issue here;
perhaps you could double-check where you've committed it?

thanks

Posted by Jeff Davis on
You can easily see what's been changed on the HEAD and oacs-4-6 branches at http://dev.openacs.org:8000/cvs/openacs-4/packages/survey/?sortby=date&only_with_tag=HEAD and http://dev.openacs.org:8000/cvs/openacs-4/packages/survey/?sortby=date&only_with_tag=oacs-4-6

I looked at bug 1790 specifically, and Dave had fixed part of it but there was a small mistake in survey_section__remove (which I think I fixed but did not test -- I did submit and commit a patch for it).

Posted by Dave Bauer on
Sorry guys.

Looks like there are a few things left in Survey. I will be going through it this weekend along with 2 other big projects. News of progress on Monday.

Posted by Dave Bauer on
See this thread https://openacs.org/bboard/q-and-a-fetch-msg.tcl?msg_id=0002Qz

which refers to a Survey Builder package for ACS 3.4. It is available for download from http://surveys.crump.ucla.edu/download.

I am looking at the code right now.
It has documentation!

Posted by Dave Bauer on
Ok, here is my initial response.

These are the sections I think need to be changed most, but I really want to think about all the data model changes some more.

1) Binding surveys to acs_objects

  This definitely needs to be done with service contracts. We do not want to expose the internals of the survey package to the other packages. I have discovered more applications that might want to use surveys, so we need to carefully decide how to expose the survey functions through service contracts. We do not want the other packages to depend on the data model of Survey.

2) Survey Templates

  This is going to be handled by first fully templating the question display using ad_form or the form builder. After that a form template for each display type will be created. These will somehow be registered with the survey package and offered to the admin in a drop-down box.

Besides these things, I want to address the sequencing of survey sections. This seems like an important feature, so we want to get it right. There are examples of this in the Survey Builder package built for ACS 3.4 and in ACES, which implemented a branching system.