Forum OpenACS Development: 5.2: survey-library requires survey 5.1.4d2, survey is 5.0.1


I hypothesize that many .LRN users who upgrade or install fresh from the 5.2 codebase will be unhappy.

Whoever modified survey-library to require a non-existent version of survey should probably fix this.

Hi Don.

Survey-library uses the contrib version of survey; it hasn't been fully updated and tested to replace the survey in the main CVS. Survey-library also shouldn't be part of a .LRN install. It's an unfortunate artifact of putting new packages in the main CVS. The work needs to be done to get the latest survey code back into the main CVS too.

Survey and “Complex Survey”. The survey in contrib is a useful package; I refer to it as “complex survey” vs. “simple survey”. It allows branching and named variables. It is compatible with survey-reports. But should it replace the survey in the main CVS? Would an organization using .LRN or the current survey be happy if they upgraded and got all these new features, or would they suddenly have software that is more complex than they need?

What is the long term vision? I am imagining that both survey packages will eventually be phased out in favor of a series of packages that all depend on the assessment package but provide different interfaces. This is one of the reasons we are building extensions to survey (e.g. survey-library, survey-reports) as separate packages. We assume that someday we will want to refactor them to use assessment.

Survey-library: The user story behind survey-library is that expert survey builders would create sample surveys and sections of surveys for use in program evaluation. Organizations could then go to the library, copy the sample closest to their needs, modify it, and distribute it. Unfortunately the site never made it to production, for reasons unrelated to this code. This is a cool idea, but this package is definitely maturity level 0.

Survey-reports: I sometimes call this a “Mad Libs” generator. The admin creates a survey and a report that uses variables from the survey. Each user fills out the survey and then gets a customized report. It was developed for The Compass for the Kennedy School of Government at Harvard. We also developed some custom includes to do statistics to show students their results vs the rest of the class. You can see screenshots and learn more here: http://www.solutiongrove.com/products/Kennedy%20School%20of%20Government

We have integrated it with OpenOffice so that the user can download the report in RTF and edit it in Word. PDF would also be an option. We have just used survey-reports on a second project to create a simple resume builder.

Assessment - a good idea, but not a package I would recommend touching. It's not scalable, it's not understandable, it forces you to redirect to the assessment package (thereby destroying your UI), and the admin pages are very hard to understand. It has literally cost about 100 hours of hair-pulling just to make it work on a client project.

It seems that some are promoting using assessment for everything - i.e., that it should replace the content repository and also be our generic form-building tool. It's just not there yet to even think about stuff like that.

>I am imagining that both survey packages will eventually be phased out in favor of a series of packages that all depend on the assessment package but provide different interfaces.

The existing survey packages are a far sounder base than assessment as far as data model, basic APIs, and code. Assessment presents the vision of the features we really want. Perhaps the best approach is to evolve the existing survey as a base and add the assessment features.

I've had to run projects with the assessment package because it was forced upon me as a requirement. I made a prototype of evolving assessment into embeddable forms so you can actually do what we really want it to do for user registration (that is, take a standard form and allow the user to extend it with custom fields via an admin UI, instead of redirecting the user to an assessment module and having to put all the core registration code in an assessment trigger). After all this, if someone asked me to do a project based on assessment, I'd absolutely refuse (and have :).

I keep on hearing that "assessment is our evolution" and that we have to go with this. Again, it's just not there yet. Invest your time in polishing the dynamic data types for the content repository.

As a way to give online tests, perhaps (though there are scalability problems as you add many questions - it does 4 queries per question). But to evolve it into our web-based form builder/report tool would need serious work.

"The existing survey packages are a far sounder base than assessment as far as data model, basic APIs, and code."

Could you elaborate why?

"I made a prototype of evolving assessment into embeddable forms so you can actually do what we really want it to do for user registration"

Could you commit this work so we can see how to make assessment work this way? Furthermore, why didn't you use a callback that will hook into contacts if you want to have additional user data? Nothing easier than that, probably a day of work to get it clean.

"Invest your time in polishing the dynamic data types for the content repository."

Two different intentions. Assessment is supposed to be a general tool for doing assessments, not for arbitrarily extending the CR. Obviously if such a method existed, we could slim down the code in Assessment, but it is not there.

"As a way to give online tests, perhaps (though there are scalability problems as you add many questions - it does 4 queries per question)."

This is due to the fact that you have a lot of flexibility when it comes down to the presentation of questions and a lot of ways to reuse them. That's why we usually do heavy caching :). But improvements and feedback would be highly appreciated.

"Would an organization using .LRN or the current survey be happy if they upgraded and got all these new features, or would they suddenly have software that is more complex than they need?"

Provide a simple and an "advanced" interface for it. That should not be too hard; after all, assessment already has multiple interfaces to work with.

Hi Malte,

>Could you elaborate why?

It works, it's stable, and there are APIs to "show survey" and "process survey", so it was easy to embed in other packages.

>Could you commit this work so we can see how to make assessment work this way?

I couldn't commit it because the assessment procs call ad_conn directly, so they didn't work when called outside the assessment pages. I did a write-up (Carl asked me to) and sent it to Don (who might be looking at it, depending on a big list of priorities). I'd be happy to send it to you.
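To illustrate the coupling problem (with hypothetical proc names - this is not the real assessment API): an OpenACS proc that reads its package context implicitly from ad_conn only works inside a request to that package. The usual fix is to accept the context as an explicit parameter and fall back to ad_conn only when nothing is passed, so other packages can embed the proc by supplying their own package_id. A minimal plain-Tcl sketch, with ad_conn stubbed out to stand in for a missing connection:

```tcl
# Stand-in for OpenACS's ad_conn: outside a request there is no
# connection context, so calling it fails.
proc ad_conn {what} {
    error "no connection available"
}

# Hypothetical, illustrative proc (not the real assessment API):
# package_id is an explicit optional argument instead of being read
# implicitly from the connection.
proc assessment_url {assessment_id {package_id ""}} {
    if {$package_id eq ""} {
        # Fall back to the connection context only when no id was passed.
        set package_id [ad_conn package_id]
    }
    return "/assessment-$package_id/one?assessment_id=$assessment_id"
}

# An embedding package passes its own package_id explicitly, so the
# proc works even though ad_conn would fail here.
puts [assessment_url 42 7]
```

With the explicit argument the proc is callable from any package (or from a scheduled proc with no connection at all), which is the embeddability Tracy's prototype was after.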

>Furthermore, why didn't you use a callback that will hook into contacts if you want to have additional user data?

I didn't design the use of assessments as a way to extend registration. I think, though, that they don't want any coding... and with the callback approach you'd have to extend the database, etc. They wanted the end user to be able to add fields whenever they wanted. Also the branching, etc.

>Two different intentions. Assessment is supposed to be a general tool for doing assessments, not for arbitrarily extending the CR. Obviously if such a method would exist, we could slim down the code in Assessment, but it is not there.

Right, so we agree. People are now trying to use assessment as a way of arbitrarily extending the CR.

>This is due to the fact that you have a lot of flexibility when it comes down to the presentation of questions and a lot of ways to reuse them. That's why we usually do heavy caching :). But improvements and feedback would be highly appreciated.

I had some ideas in the doc I'll send you.

"Would an organization using .LRN or the current survey be happy if they upgraded and got all these new features, or would they suddenly have software that is more complex than they need?"

It would probably be too complex, especially with the current user interface in assessment, as it takes 4 steps per question. The features should definitely be parameterizable and/or have simple and complex views.

Posted by Andrew Piskorski on
Tracy, at least to me, code review write-ups of that sort sound like something that should preferably be published on openacs.org, rather than just sent to one or two members of the OCT. IMO, the more solid technical review that gets done in the open, the better off OpenACS will be. If there are, say, proprietary client-specific details embedded in the report, then that's good reason to keep it private until you can find the time to do a 2nd draft. But if it's fear of stepping on other developers' toes (aka politics), well, I think some toe bruising is a cost worth bearing.
Posted by Carl Robert Blesius on
I posted a copy here (agree with you Andrew):

http://openacs.org/storage/view/miscellaneous/Assessment-Review.doc

I asked Tracy to do it, because she has looked at the code with fresh eyes for a client project she worked on recently and I am trying to get an idea of what we can do to improve it.

I am working with Don on another project that might allow for some improvements to assessment (if it actually fits into the requirements AND I can convince Don to touch it), but for now any sort of feedback and open discussion on possible improvements/direction would be appreciated.

I can see either way....

It's not client specific.

It's not "final quality" - in other words, I made it work for a basic case and jotted some other notes because Carl asked me to. It's also not something I'm working on... I just passed it off.

I don't know how things like that get decided, it doesn't matter to me either way as long as people know it's not intended as finished code. Malte, Carl and Don have it and they are welcome to post it.

Prior to writing any code, several of us tried to compile a reasonable spec for Assessment. This effort extended over a year (looking at the version history), from May 2003 until August 2004. During that time, we hoped that the community would contribute ideas about requirements, data models, use of core packages like the CR, etc. There was a conspicuous paucity of comment.

Eventually the docs, such as they are, were implemented by a group that needed the package for .LRN. Lots of .LRN-specific stuff was added, and the UI was tailored for .LRN's needs. The datamodel wasn't implemented very closely to the specs, and some things (like Workflow) didn't make it in at all. The implementation really wasn't what I was hoping for, but it did meet the needs of the group that stepped up to the plate and Made It Happen. And all this work was contributed back to CVS where everyone else can view, use, and hopefully improve it.

The above withering evaluation of the package

The existing survey packages are a far sounder base than assessment as far as data model, basic APIs, and code.

It's not "final quality"...

is probably (well, definitely) true. But that's as much the result of the drought of thought, suggestions, and code review at the point it would have been useful -- during initial development -- as of anything else. I don't see any submitted bugs or forum threads about any of these issues over the recent months. How else does code ever reach "final quality"?

More aggravating is hearing about fixes/enhancements that then don't make it back into CVS. This doesn't seem like the way a community development process is supposed to work.

Since I wrote none of the Assessment code, I guess I don't really have a dog in this fight. But if the whole concept was really so ill-advised, the time to have discovered that was prior to a lot of effort on the part of the folks who have written it.

Stan,

People are trying to use assessment for purposes it wasn't intended for: as an all-purpose form builder and a substitute for the CR. That is the major debate. Yeah, it sucks that people take a piece of code, use it for something way outside its intention, and then criticize it when it doesn't work that way.

>More aggravating is hearing about fixes/enhancements that then don't make it back into CVS. This doesn't seem like the way a community development process is supposed to work.

The code I did was a prototype and also required some changes to assessment that it's not clear are ready to be made. The code is in the document Carl posted. It's not in CVS because it is not clear that it should be part of the package, not because anything is being "kept back".

Tracy