Forum OpenACS Development: A way forward for regression testing.
There does, however, seem to be a consensus amongst most that automated testing (regression testing) is a good thing, and that it should be an integral part of the project as we move forward. I think we all agree that, done well, this could elevate the project and product to another level in terms of quality (in particular, provable quality). Looking long term, this will free up resources from the integration testing effort, allowing us to concentrate on testing new features, whilst maintaining quality and confidence throughout existing components.
The fact that nobody is using the current package actually presents us with an opportunity. Before I put lots of effort into documenting what we already have (see this thread), I would like to open up a discussion to develop any thoughts you might have about what the automated package should provide. I can then take these on board and make the required changes before we get a body of people using it. Some specific things I'd like to know:
- What specifically would make you use such a package?
- Conversely, what would make you avoid such a package?
- What tools and functions would you like to see in the package? This could be anything, but remember that the package might be used to test database APIs, Tcl functions, web interfaces....
- How would you like to see automated testing integrated with the general development methodology? This might relate to the current use of the SDM, integration testing, release management etc...
I know I've said this many times before, but I'm really keen to get the ball rolling on this. I think the project is at a juncture in terms of testing and release management. Without wishing to reignite the recent discussions relating to this (a specific request from Janine), I would definitely like to see the automated testing effort be an integral part of any new process.
One thing I've improved so far is to display the whole stack trace when a test fails. It's funny; I remember doing the same thing with JUnit when I started using that tool. I'll let you know when I have more results and feedback.
Thanks a lot for this cool package!
I am *really* pleased you've picked this up and started to use it and I'm *even more pleased* you like the package.
Peter (Harper) put a lot of thought and effort into it, and it's been a really useful tool for us at OpenMSG. I'm delighted to see someone else help take it further.
Thanks for posting up, Pete! Hope it helps you now and in the future.
Thanks for making improvements to the package. Just so you know, I'm in the middle of making the discussed changes to the bootstrapper. I'll keep you posted through this thread when those changes are available. Do you have commit rights on the acs-automated-testing package? If so, feel free to commit your improvements. Otherwise email me direct and I'll work them into the package.
How have you found using the package? Was it easy to pick up? Which features are you using?
There are a number of features like stubbing and init classes and components (not sure what the latter are) that I haven't used yet. I'll be committing my tests to acs-lang soon so people will have another example to look at.
Pete - seems you are beating me to the bootstrap job. I took a stab at it myself but I kind of got stuck. I tried adding tcl_test as a file type (for tcl files with a parent dir of test) to apm_guess_file_type in 30-apm-load-procs.tcl, and would then source the tcl files of type tcl_test if acs-automated-testing is installed. My problem: the db procs are not sourced at that stage. Should we maybe source the test tcl files from an acs-automated-testing-init.tcl file instead? I'm a little confused here; I'm not sure I understand the order in which things are sourced.
Since I am going to start working on some as well, a good example would be great.
Have you got any you've done for a package? Actually, we could do some for the Acceptance Test Package Pro and publish both package and tests?
Pete - one issue to consider with sourcing the test procs is that the bootstrapper currently sources all tcl files ending in (procs|init).tcl even if they are under tcl/test. We should be careful here not to source files twice. I am naming my test files *-test.tcl, but I don't know if we want to rely on that convention.
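To illustrate the kind of guard I mean, here's a minimal Tcl sketch of a filename check that would keep the two conventions apart. The proc name is my invention, not anything in the bootstrapper; it just assumes the convention above (test files live under tcl/test/ and end in -test.tcl, so the normal (procs|init).tcl glob never matches them):

```tcl
# Hypothetical helper: decide whether a path should be sourced as a
# test file. Relies on the *-test.tcl naming convention under tcl/test/
# so that ordinary -procs.tcl / -init.tcl files are never sourced twice.
proc is_test_file_p { path } {
    # Only consider files under a tcl/test/ directory...
    if { ![string match "*/tcl/test/*" $path] } {
        return 0
    }
    # ...and only those following the *-test.tcl naming convention.
    return [string match "*-test.tcl" $path]
}
```

With that, a file like packages/acs-lang/tcl/test/acs-lang-test.tcl matches, while packages/acs-lang/tcl/acs-lang-procs.tcl does not, even though both end in .tcl.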
However, these are almost a year old now, and I don't know whether they will still work. They should be a good reference for how to use the extra functionality though. They make use of stubs, init classes and components.
Respectively, these are:
- Stubs are for temporarily providing your own implementation of a procedure, thus allowing you to drive the flow of control through the function under test.
- Init classes. These are chunks of code that a number of test cases may share. You register a test case against an init class. When a set of tests is run, the init class constructor is called *once* before running all the tests, and the destructor is called *once* after all the tests have run. The idea is that you could share an init class that mounts the package under test and then unmounts it, and it only happens once for all tests.
- Components are meant as reusable chunks of test code. Much like test subroutines. I'm not particularly happy with the current implementation, and I think this'll be one area that'll get tidied up as time goes on.
Hmmmm. Hope that's answered all the questions......
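To make the first two concrete, here's a rough sketch of how an init class and a stub fit together. The proc names and switches are from memory of the acs-automated-testing API and may not match the current package exactly, and my_package::* are made-up example procs:

```tcl
# Sketch only: signatures from memory, my_package::* procs are invented.

# Init class: the constructor body runs once before the registered
# tests, the destructor body runs once after all of them.
aa_register_init_class mount_package \
    "Mount the package under test once for the whole run" \
    { # ... mount the package under test here ... } \
    { # ... unmount it again here ... }

# A test case registered against that init class. Inside the body we
# stub out a collaborator so we can drive the flow of control through
# the proc under test.
aa_register_case -init_classes { mount_package } my_first_case {
    Checks that do_work uses the lookup result.
} {
    # Temporarily replace my_package::expensive_lookup with a canned
    # implementation for the duration of this test case.
    aa_stub my_package::expensive_lookup {
        return "canned-result"
    }
    aa_equals "lookup result is used" \
        [my_package::do_work] "canned-result"
}
```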
The first is to be able to indicate explicitly which procedures a test case covers. I am currently listing tested procedures in the description for the test case. However, I would like to be able to see what the test coverage of the acs-lang Tcl API is - which procs are untested? This can certainly be done manually, but still, a covered_procs or tested_procs argument to aa_register_case may make sense.
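For what it's worth, the registration could then look something like this. The -procs switch is purely hypothetical at this point, and the lang::message calls are only meant as plausible examples of acs-lang procs under test:

```tcl
# Hypothetical: a -procs switch naming the procs this case covers,
# so coverage of the acs-lang Tcl API could be computed by diffing
# these lists against the full list of public procs in the package.
aa_register_case \
    -procs { lang::message::register lang::message::lookup } \
    message_register_and_lookup {
    Checks that a registered message can be looked up again.
} {
    # Invented key/values for illustration; exact lang::message
    # signatures may differ from this sketch.
    lang::message::register en_US test_pkg test_key "Hello"
    aa_equals "lookup returns the registered text" \
        [lang::message::lookup en_US test_pkg.test_key] "Hello"
}
```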
The other, and more important, idea that occurred to me is that it would be nice if acs-automated-testing would somehow integrate with TclWebTest, so that aa_register_case could be used to register TclWebTest tests. acs-automated-testing would then serve as the interface to all our tests: HTTP-level tests as well as Tcl- and SQL-level tests. Maybe Tilman could comment on how viable that is? I can't see any reason why this wouldn't be straightforward to do.
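In principle the wrapping could be as thin as the sketch below. The TclWebTest command names here are from memory and may differ, and it assumes TclWebTest can be loaded into the server's Tcl interpreter at all, which is exactly the part Tilman would need to confirm:

```tcl
# Sketch: registering an HTTP-level TclWebTest script as an ordinary
# acs-automated-testing case. Command names are assumptions, not a
# confirmed API; the URL is a placeholder for the test server.
aa_register_case front_page_loads {
    Fetches the front page over HTTP and checks its content.
} {
    package require tclwebtest
    tclwebtest::do_request "http://localhost:8000/"
    aa_true "front page mentions OpenACS" \
        [string match "*OpenACS*" [tclwebtest::response body]]
}
```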
I think it was more a question of emphasis. The aa-test package was focussed on being a tool for developers who were building from the bottom up; the other, in some senses, is more top down.
But I agree, the two-as-one would be desirable.