Before writing any code, several of us tried to compile a reasonable spec for Assessment.
This effort extended over a year (looking at the version history) from May 2003 until August 2004. During that time, we hoped that the community would contribute ideas about requirements, datamodels, use of core packages like the CR, etc. There was a conspicuous paucity of comment.
Eventually the docs, such as they are, were implemented by a group that needed the package for .LRN. Lots of .LRN-specific stuff was added, and the UI was tailored to .LRN's needs. The datamodel didn't follow the specs very closely, and some things (like Workflow) didn't make it in at all. The implementation really wasn't what I was hoping for, but it did meet the needs of the group that stepped up to the plate and Made It Happen. And all this work was contributed back to CVS, where everyone else can view, use, and hopefully improve it.
This withering evaluation of the package:

"The existing survey packages are a far sounder base than Assessment as far as datamodel, basic APIs, and code. It's not 'final quality'..."

is probably (well, definitely) true. But that's largely the result of the drought of thought, suggestions, and code review at the point when it would have been useful -- during initial development. I don't see any submitted bugs or forum threads about any of these issues in recent months. How else does code ever reach "final quality"?
More aggravating is hearing about fixes and enhancements that never make it back into CVS. That doesn't seem like the way a community development process is supposed to work.
Since I wrote none of the Assessment code, I guess I don't really have a dog in this fight. But if the whole concept was really so ill-advised, the time to discover that was before the folks who wrote it had put in so much effort.