Hi folks. As many of you will know, we've recently had a discussion regarding the use and integration of the acs-automated-testing package with the rest of the OpenACS system. This package has been available for at least six months, but has seen little or no interest from the community as a whole. I'm sure this is largely due to the distinct lack of documentation and the much-needed integration with the bootstrapper.
There does, however, seem to be a consensus amongst most that automated testing (regression testing) is a good thing, and that it should be an integral part of the project as we move forward. I think we all agree that, done well, this could elevate the project and product to another level in terms of quality (in particular, provable quality). Looking long term, this will free up resources from the integration testing effort, allowing us to concentrate on testing new features whilst maintaining quality and confidence throughout existing components.
The fact that nobody is using the current package actually presents us with an opportunity. Before I put lots of effort into documenting what we already have (see this thread), I would like to open up a discussion to develop any thoughts you might have about what the automated testing package should provide. I can then take these on board and make the required changes before we get a body of people using it. Some specific things I'd like to know:
- What, specifically, would make you use such a package?
- Conversely, what would make you avoid such a package?
- What tools and functions would you like to see in the package? This could be anything, but remember that the package
might be used to test database APIs, Tcl functions, web interfaces, and so on.
- How would you like to see automated testing integrated with the general development methodology? This might relate to the
current use of the SDM, integration testing, release management, etc.
Of course, any other thoughts, opinions, experiences, etc. would be welcome. I'd be very interested to hear about good and bad experiences from previous projects: what's worked, what hasn't, and so on.
I know I've said this many times before, but I'm really keen to get the ball rolling on this. I think the project is at a juncture in terms of testing and release management. Without wishing to reignite the recent discussions relating to this (a specific request from Janine), I would definitely like to see the automated testing effort be an integral part of any new process.
Cheers,
Pete.