Forum OpenACS Q&A: New Initiative: Automated Testing

Posted by defunct defunct on
Premise: OpenACS Needs Automated Testing

As you probably all know, a new Acceptance Test Cycle is coming up for release 4.6.
This has prompted a discussion on how best to do testing, what the current issues are, and what kind of long term scheme will help us do this in future and improve the overall quality of the toolkit, whilst retaining manageability.

I'm proposing that the underlying solution to all our issues in this area is to begin using automated testing in all new development.

I'm pretty keen to get feedback from the community on this, but past postings and discussions have made it obvious that very few people in the community properly understand the concept, know how to implement it or have used it before.

Most seem to think that it's about writing scripts and testing web pages in an automated fashion. It isn't! (Or at least it's not just that.)

Therefore, I'll set out what I mean by it, and then put forward what I think should happen next.

Automated Testing Description
Automated Testing is the only way in which a developer can demonstrate his code functions as intended. Testing cannot prove the absence of all bugs, but it can demonstrate the correct behaviour of code.
Code without automated tests is meaningless! This is an absolute truism. If you can't prove what you claim about your code, then no-one can have any faith it will work. The only way to demonstrate that it does is to provide automated tests that can be run locally.
I know that comment is going to set the cat amongst the pigeons, but whether you like it or not, it is the case. Developers often say things like 'I've tested it and it's really stable'. But what does that mean? How can I verify that? How does the developer know that? Guesswork?
So you can see the issue.

Ok, so the solution needs to be a method whereby we can do the following:

  • Create automated tests that are unbiased
  • Verify the functioning of code in an automated way
  • Test all elements individually and collectively
  • Prevent code from introducing errors
  • Build-in demonstrable quality
  • Save time during Acceptance/Delivery testing. For example, it's estimated that a bug fixed at development time is nearly 4 times less expensive, in terms of effort, than fixing it after acceptance testing.

The Steps in Automated Testing
The following steps describe how to create/perform automated testing. It's important to note that it's not a pick-and-choose approach. This works only if *all* of the following are done:

  • An automated test is created for every distinct function/operation of significance.
  • Tests are created 'bottom-up', i.e. you write a test for a function before that function is used in a higher-level function.
  • Tests are always written before the code. Developers must create tests before they write their function. The test therefore defines the operational requirements of the function.
  • Tests should be written for every significant operation/possibility the function has; this includes failure cases and weird stuff.
  • Tests must be capable of running in isolation, i.e. a test must not depend on the existence of another. Essentially this means any test should create the 'system state' required, perform the test, record the result and then put the system back into its original state. This means no test can affect/corrupt the result of another (see the sketch after this list).
  • Tests are evolutionary. Whenever you think of a new thing you could test for, an automated test should be added.
  • A function/code segment cannot be accepted, and cannot be used or developed further, until it passes all its tests. In this way we avoid bug-overrun, where a bug in one function causes side effects in another.
  • Changes to code must pass all existing tests as well as new ones. If the changes imply a change is needed in the tests, this must be made before the code change.
  • Tests must be deliverable as part of the product, to allow end users to examine them and to run them themselves. In this way we also provide regression tests as a by-product.
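
To make the isolation point concrete, here's the shape such a test takes. This is an illustrative sketch only: the note::* procs and the test_assert helper are made-up names (ns_log is just AOLserver's logging command); the point is the set-up / exercise / tear-down structure.

    # Illustrative sketch: note::* and test_assert are made-up names, used to
    # show the set-up / exercise / tear-down shape of an isolated test.
    proc test_assert {label ok_p} {
        # Record a pass or a fail; a real harness would collect these results.
        if {$ok_p} {
            ns_log Notice "PASS: $label"
        } else {
            ns_log Error "FAIL: $label"
        }
    }

    proc test_note_create {} {
        # 1. Create the system state this test needs; assume nothing pre-exists.
        set note_id [note::create -title "test title" -body "test body"]

        # 2. Exercise the code and verify it, including a failure case.
        test_assert "note exists after create" [note::exists_p $note_id]
        test_assert "empty title is rejected" \
            [catch {note::create -title "" -body "x"}]

        # 3. Put the system back as it was, so no other test is affected.
        note::delete $note_id
    }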

This is the process I am strongly recommending become part of OpenACS governance and future development. But..... I suspect you're about to ask........

Applying A-Testing to Existing Code
Before everyone panics, I am not suggesting we retro-fit automated tests onto all the existing code. The method for handling existing code is...

  • We accept a cut-off point from which all new development must conform to A-Testing.
  • We only create a-tests when we want to make changes to existing code, creating a test for each new function we create.
  • When a bug is discovered in existing code, then we create a-tests for it, and fix the bug.
In this way we only create the retro-a-tests as they are required. There is little point in creating them if we don't think things are failing.
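
For example (all names are made up, reusing the style of the sketch above): suppose a bug report says that titles containing an apostrophe get mangled. The test is written first and fails against the current code; then the bug is fixed; and the test stays in the suite as a regression test from then on.

    # Hypothetical regression test: written before the fix, kept forever after.
    proc test_note_title_with_apostrophe {} {
        set note_id [note::create -title "O'Reilly" -body "test body"]
        test_assert "apostrophe survives the round trip" \
            [string equal "O'Reilly" [note::title $note_id]]
        note::delete $note_id
    }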

The Automated Test Package
I would never recommend anything I wouldn't do myself. As a company we use this method. It works. We have fewer bugs, take less time to develop and create better code. When we've strayed and not done it... we have suffered. Once you start you will NEVER want to go back to non-a-testing!

Why does this work?
There are many reasons, but primarily....

  • It works 'with' human nature, not against it.
  • It turns testing into a development exercise, thus appealing to coders and hackers.
  • It saves huge amounts of time.
  • It makes it easier to Acceptance Test releases.
  • It encourages developers to write good code, and immediately demonstrates when they haven't.
  • It builds self and product confidence.
  • It makes it much easier for disjoint communities such as this to collaborate AND improve quality.
  • It's a LOT more fun than traditional approaches.

The Automated Test Package has been part of OpenACS for a while now (how many of you have used it I wonder?).
It provides a neat tool to manage, organise, run and view the results of automated tests. It also helps with organisation and delivery. It even makes a great 'customer confidence builder'.
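
For anyone who hasn't looked at it, here's roughly what a test case looks like once it's registered with the package. Treat this as a sketch: aa_register_case, aa_true and aa_equals are the API names as I recall them (check the package's own documentation for the exact signatures), and the note::* procs are made up as before.

    # Sketch only -- verify the exact aa_* signatures against the package docs;
    # the note::* procs are hypothetical.
    aa_register_case note_create_basic {
        Creating a note and reading it back should round-trip the title.
    } {
        set note_id [note::create -title "test title" -body "test body"]
        aa_true "note exists after create" [note::exists_p $note_id]
        aa_equals "title round-trips" [note::title $note_id] "test title"

        # Clean up so this case leaves no state behind for other tests.
        note::delete $note_id
    }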

Therefore we have the tools, the code, and the method (and the proof it works), so my recommendation is that we adopt this ASAP.

Comments welcome :o) (but please don't bother telling me that 'testing can't be done in the OpenACS community because.. blah blah blah..')

It can be done, we just need to find the right method. So let's figure it out.

Cheers

PS: one of the things I've found is that most people immediately equate this with more work!.. It isn't. You've just been fooling yourself in the past! ;).... when you adopt this method, the effort savings on the really boring stuff... like Acceptance Testing, Release Management, Bug Fixing, live updates, migration scripts etc... are massive. And I for one would rather be coding than messing about.... and here you have it. The coder's solution to QA!

Posted by Lars Pind on
Applause, applause, applause.

Yes, automated testing is sorely needed. I started an effort towards this more than two years ago, but stumbled. Maybe we can make it happen this time.

I'll make sure we have automated tests for the internationalization work that we're about to do.

/Lars

Posted by defunct defunct on
If you can do that for internationalisation, that would be fantastic, as a good initial demonstrator to people!

It would be doubly good if you can use the automated-test package. It may or may not be right, but until someone (other than OM) uses it, it's hard for us to know.

Posted by Janine Ohmer on
My experience with automated testing hasn't been quite as good as Simon's.  It was brought in at a big software company I used to work for, and ultimately the quality of our QA went down, not up.  The reasons why could fill a novel;  the main ones were a) once an automated test exists, most QA testers will assume the feature has already been tested and give it a cursory glance at best, and b) the quality and coverage of an automated test is only as good as the person who wrote it and the amount of time they had to spend on it.  So we quite often had a skimpy automated test hitting the high points, and very little else being tested.  We in development found that there were a lot of bugs being found at the last minute, or even worse being found by customers, because they weren't caught by the automated tests.

However, my reason for posting is not actually to argue against automated testing.  It obviously works better for Simon than it did for us, and I have no reason to think he can't make a useful initiative out of this.  What he is proposing is somewhat different from my experience anyway, since in our case the QA testers were writing the automated tests themselves.  Whether or not that is better than having the developers do it, from a testing methodology point of view, is an exercise left to the reader. :)

My point is that this community project is somewhat different from what Open-Msg does as a company.  You guys can decide on a strategy and require everyone to abide by it.  IMHO the community can't, or at least shouldn't, do that.  We can make suggestions of what we think best practices would be, and we can make sure that packages that come with automated tests are marked in some way as having been especially well-tested.  But I don't think it's right to say you can't contribute a package unless it includes an automated test, just as I also don't feel we can require all contributed packages to follow the 4.x programming paradigm perfectly.  So I don't think that the idea that this should be *required* is a good one - just make it attractive and hope that people will participate.

Now, Simon, you gave us a slightly breathless :) description of all the benefits of automated testing.  There must be some drawbacks, no matter how small - how about describing those, too?  Right now your story sounds too good to be true, and people aren't likely to buy into it until they know the costs as well as the benefits.

Posted by defunct defunct on
Ahh Janine..... :o)

The world is full of doubt!

Ok, it is, in my experience, better that developers create automated tests... providing they create them before the code ;).

I understand your initial paragraph, but forgive me for saying it: you appear to be describing a company that has piss-poor management and lack-lustre employees....
If people want to deliberately avoid doing the job properly then there's not really much anyone can do about it. Methodology or not!

But..... it does emphasise why it would work better in an OS community... Peer Review. There's more of it, more people and with differing goals.. i.e. not all the same guys in the same company, equally as bored ;)

Ok, I also disagree that it's an OpenMSG-specific benefit. We're just developers like you. We're not very formal, we don't produce piles of pointless paper... and we don't deliver stuff we're not proud of/motivated by.... I think in that respect we have a *lot* in common with the community.

I definitely do not think things like this work in a 'do it if you like' fashion. It has to be mandatory for there to be any net gain.

After all, you must remember that you can't work against human nature. Nobody wants to do more than they have to. If you make it voluntary, anyone who has made the effort will be hampered by those who haven't... and then no-one bothers...

Hence my earlier posting about why a Governance/Structure needs to be in place... but...

I understand your point and have an alternative solution.

The core of the OpenACS is common to all; we all need it, and it's critical that this be quality stuff we can all rely on.

There's also a branch of the community that wants to experiment, try new things, get radical. That's fine, but I don't want it in the commercial stuff I have to deliver....

So perhaps we have graded submissions... i.e. if you want to add certain stuff, you must provide tests for someone else to run..

If you're more interested in running up 'bleeding-edge' stuff, then that can be got from an 'untested submission' pool...

In reality the main objection to this kind of thing (i.e. automated testing) is usually more to do with the fact that people can't be arsed!!

I really sympathise with that as I'm about the laziest man on the planet, but even I accept you have to sacrifice to make gains (and believe me, OpenACS Acceptance Testing teaches you that ;).

Posted by defunct defunct on
Oh sorry... you asked about problems with our approach.

Well yes there are some...

  • Automated testing means it's very difficult for developers to 'ignore' problems that they don't feel like solving or can't solve. This can be frustrating.
  • People who can't code lose their jobs very quickly!
  • It does mean more code is generated, and of course tests themselves have bugs. However TCL/ACS is a fairly forgiving environment in this regard as that dreaded compilation step is missing.
  • It can be difficult to create 'state machines' for certain applications. In the case of OACS it's not so bad, as a database is always a really good way of preparing a test state (see the sketch after this list).
  • A-Testing tends to keep highlighting things that are wrong/spurious but also things that are just not well implemented. It becomes very easy to spend too much time solving problems that weren't that critical (although let's not discourage excellent code).
  • It can be really difficult to get people to adopt it, because traditional western thinking works along the lines of Socratic criticism.. i.e. the way to the right answer is constant criticism until what you're left with is perfect.... a better (more eastern) way to look at things is to Accept, Apply and only then Adapt..
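
To illustrate the database point above: the db_* calls are the standard OpenACS database API, but everything else here (the notes table, note::search, the aa_* names) is made up or assumed, as in the earlier sketches.

    # Sketch only: prepare database state, exercise the code, restore the state.
    aa_register_case note_search_finds_fixture {
        Search should find a row we know is there, and only that row.
    } {
        # Put the database into a known state for this test alone.
        set note_id [db_nextval acs_object_id_seq]
        db_dml insert_fixture {
            insert into notes (note_id, title)
            values (:note_id, 'aardvark fixture')
        }

        # Exercise the code under test against that known state.
        aa_equals "search finds exactly one fixture row" \
            [llength [note::search "aardvark"]] 1

        # Restore the original state so no other test is affected.
        db_dml delete_fixture {delete from notes where note_id = :note_id}
    }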

Hope that helps

Simon

Posted by Peter Harper on
On the subject, I plan to make some changes (with Don's permission) to the bootstrapper sometime over the next few weeks to better integrate the optional sourcing of automated test scripts at system startup. This is something that's been discussed a number of times over the last 6-12 months, but I've never quite got around to doing it. It should allow us to start committing automated scripts to CVS without impacting general OpenACS development.

Once the changes are done, I'll post up the details, and will also commit some more comprehensive documentation and examples of real automated test scripts (I wrote some for the news package a long time ago).

Posted by defunct defunct on
6-12 months and you've still not done it.... tish tish... ;)

Seriously, thanks Pete. If there's any chance you can look at this before the 15th Sept. (my start line for Acceptance Testing), that'd be good.

Incidentally Pete, is the version in OpenACS the latest or do we have a more recent one in our CVS?

Posted by Peter Harper on
Ahhh, yes..... Apologies about the slight delay. Other more important things seem to keep getting in the way 😉.

The acs-automated-testing package in the OpenACS CVS is now the master.

The current problem is that we don't have any standard mechanism in place for committing test scripts to individual packages. If we commit them in the standard tcl/xxx-procs.tcl format, then we assume that everyone has the automated testing package installed, which isn't the case. This would result in sourcing errors during bootstrapping, etc. What I propose is that we create a "test" directory under the "tcl" directory of the package, and then the bootstrapper will optionally source test scripts if the testing package is installed.
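
Roughly what I have in mind is something like the following. Treat it as pseudocode for the real patch rather than the patch itself; apm_package_installed_p, apm_source and acs_root_dir are simply the procs I'd expect to lean on.

    # Sketch of the bootstrapper idea, not the actual patch: only source a
    # package's tcl/test/*.tcl files when acs-automated-testing is installed,
    # so sites without the testing package never load the test scripts.
    # ($package_key stands for whichever package is being loaded.)
    if {[apm_package_installed_p acs-automated-testing]} {
        set test_dir "[acs_root_dir]/packages/$package_key/tcl/test"
        foreach file [glob -nocomplain "$test_dir/*-procs.tcl"] {
            apm_source $file
        }
    }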

What's the current timeline for the 4.6 and 4.7 releases? It'll depend on the dates as to which release we can get the changes in place for.

Posted by defunct defunct on
Currently I'm expecting to begin release testing for OACS 4.6 on 15th Sept... with about two weeks till full release... dunno about 4.7.

Posted by Lars Pind on
I'd suggest a /test directory at the top level in the package, that is, alongside tcl, www, sql, etc.

My reasoning is that I'd expect the tests to exercise pages and other things besides just the Tcl proc libraries.

If that's not the case, then /tcl/test is fine.

/Lars

Posted by defunct defunct on
Lars,

The original intention was that the automated test package would be somewhat integrated with the tclWebTest stuff Tilman did, and that the latter would be used for automated 'page hit' checking.. I don't know what the status of that is, but perhaps Pete and Tils can provide the answer...

In either case, I actually prefer the /test approach... it just seems a bit neater.

Posted by Peter Harper on
Yeah, you're probably right, Lars. Long term, these test scripts may well actually start testing the web interface in addition to the Tcl/SQL. I'd still like to have a go at getting the tclwebtest stuff integrated with acs-automated-testing. The main issue there is the age-old "package require" problem with AOLserver.

Posted by Peter Harper on
Great minds think alike 😉

Posted by defunct defunct on
Yes Peter, they do..... but what's that got to do with you then?

;)