Forum OpenACS Development: Revitalizing our testing efforts

Posted by Joel Aufrecht on
Here are some things we could do to improve the quality of our releases:
  1. Fund a dedicated test server. Since Furfly already hosts several OpenACS machines, and since the test server uses a lot of bandwidth between itself and the cvs server, the ideal situation would be for the community to pay Furfly to add a new computer to the cluster.
  2. Maintain a dedicated test server. Aside from minimal uptime and OS support, I wouldn't expect Furfly to do this. Instead, I would rather see another person from the community help me or take over from me. One thing I learned from trying, semi-successfully, to maintain a test server at Collaboraid is that it is a daily job. Despite the automation, various things creep into the daily automated rebuild that bring sites down for different reasons and sneak past whatever alarm detection you might have. Peter, Lars, and I built a framework for distributed, automated checking, so that we can have a simple central dashboard even if some of the sites are hosted on different machines, and we need to get that working again. So this task is to manually check all test servers every day (over time this could be changed to alert-based checking), troubleshoot any problems, assign tasks to get problems fixed, and restore the test servers to functionality. Every day.
  3. Separate demo and test. We shouldn't use the test servers as demo servers.
  4. Add more API tests. We need to start adding tests to every package. At a minimum, we should be testing all API functions. Some developers are adding tests for some API calls, but we can do a lot more (see the first sketch after this list).
  5. Add web tests. Whether with tclwebtest or with another tool, we need automated scripts that navigate the default website and make sure things really work (see the second sketch after this list).
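To make item 4 concrete, here is a minimal sketch of what a package API test could look like with acs-automated-testing. The case name and the assertions are purely illustrative, and the -cats flag and exact aa_* signatures may vary a bit between OpenACS versions:

# Illustrative test case; the name and assertions are examples only.
aa_register_case -cats {api} example_api_smoke {
    Minimal smoke test of a couple of core Tcl API calls.
} {
    # ad_decode maps a value through key/result pairs, falling back to a default.
    aa_equals "ad_decode returns the matching result" \
        [ad_decode "red" "red" "stop" "green" "go" "unknown"] \
        "stop"

    aa_equals "ad_decode falls back to the default" \
        [ad_decode "blue" "red" "stop" "green" "go" "unknown"] \
        "unknown"
}

Cases like this live in a package's tcl/test/ directory and can then be run from the acs-automated-testing web UI.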
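And for item 5, a rough tclwebtest-style sketch. It assumes the script is run under the tclwebtest interpreter, that the server URL is adjusted to the instance being tested, and that the asserted text actually appears on the default site; the exact assert syntax may also vary with the tclwebtest version:

# Hypothetical smoke walk over the default site; adjust the URL to taste.
set server_url "http://test.openacs.org:8000"

# The front page should come up and contain the site name.
::tclwebtest::do_request "$server_url/"
::tclwebtest::assert text "OpenACS"

# The registration/login page should render its form.
::tclwebtest::do_request "$server_url/register/"
::tclwebtest::assert text "Email"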
Posted by Malte Sussdorff on
I totally agree and would like to add one point.

- Create an easy way for anyone to record test scenarios on the test server.

As for the rest, I will comment / ask questions at a later time.

Posted by Matthias Melcher on
A few more points in support of Joel's suggestions.
On the translation server we have observed that the need to see certain pages is apparently so great that many people registered for translator accounts even though they did not want to contribute to any locale.

On the other hand, it is important for translators to see ALL parts of the UI, even the site-wide admin pages (unless we decide to exempt them entirely from i18n). For such SWA tests, the former test server setup was great for translators, who could "log in" anonymously as SWA, see a specific message string in context, switch to the translation server to edit it in the list view, and be confident that nothing could be seriously spoiled because everything was wiped before the next day.

The convenience of anonymously logging in as SWA, however, does not, IMO, justify the risk that this possibility is abused in a way that breaks the server very early each day. I would not mind applying for an admin account on this server for translation and testing purposes, and I would not insist on setting my personal favorite password there. Just make the user administration as convenient for the host as possible.

Another argument for setting up such a server again is that bug reporting and bug triage would be simplified if "how to reproduce this bug" could once more be answered with "log in to test.openacs.org as ..., then ...".

Posted by Eduardo Pérez on
Add web tests. Whether with tclwebtest or with another tool,
we need automated scripts that navigate the default website
and make sure things really work.

I started working on web regression testing using Perl's WWW::Mechanize:
https://openacs.org/forums/message-view?message_id=242120
It works quite well, and I sometimes find a regression because it runs almost every day. There aren't many tests in any package except assessment; it just installs OpenACS and assessment from scratch.

Please continue the discussion of web testing in this thread:
https://openacs.org/forums/message-view?message_id=242120

Posted by Eduardo Pérez on
We are setting up a test server at UC3M. We'd like to know about the configuration requirements of the test server. We have a P4 3.2 GHz, 2 GB RAM box ready to be dedicated to tests. It's running Debian unstable with AOLserver 4.0.10 and PostgreSQL 7.4.7. We were thinking about running automated tests continuously and bothering someone if there's a regression. There would also be a demo instance there for people to try to break things (updated every day, or at whatever interval we agree on). Adding shell accounts would be complicated, as this is on a university network, so all the administration would have to be done by us.

I just need to know how you suggest running the tests continuously and how to notify people when there's a regression.

Posted by Jeff Davis on
The hardware is perfectly fine. Are you going to host Oracle as well? I think we want four instances: stable and HEAD for PostgreSQL and Oracle. They won't get a lot of traffic, so mostly it's just a matter of having enough memory to run PostgreSQL, Oracle, and four servers on one box without it falling over.

I think it would be hard to attribute regression failures to individual committers, but something like a daily or weekly summary that people could subscribe to might be useful. I think that should be a job for the test server which aggregates test results, not for the test instances themselves.
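To sketch what such a subscribable summary might look like on the aggregating server, in plain OpenACS Tcl: test_summary::recent_failures is a hypothetical helper that would read the collected results, and while ad_schedule_proc and acs_mail_lite::send are standard OpenACS APIs, the exact switches may differ by version.

namespace eval test_summary {}

ad_proc test_summary::mail_daily_summary {} {
    Mail a one-line-per-failure digest of the last day's automated test runs.
} {
    # Hypothetical helper: returns a list of human-readable failure lines
    # gathered from the aggregated results.
    set failures [test_summary::recent_failures -days 1]
    if { [llength $failures] == 0 } {
        return
    }
    # The list address is an assumption; point it at whatever people subscribe to.
    acs_mail_lite::send \
        -to_addr "test-summary@openacs.org" \
        -from_addr "test-server@openacs.org" \
        -subject "[llength $failures] automated test failure(s) in the last 24h" \
        -body [join $failures "\n"]
}

# Run once a day (86400 seconds) in its own thread.
ad_schedule_proc -thread t 86400 test_summary::mail_daily_summary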

Posted by Eduardo Pérez on
Are you going to host Oracle as well?

Yes, but we have to install it first.
What Oracle version do you recommend?

I think we want four instances: stable and HEAD for PostgreSQL and Oracle. They won't get a lot of traffic, so mostly it's just a matter of having enough memory to run PostgreSQL, Oracle, and four servers on one box without it falling over.

OK, let's have those four demo instances.

I think it would be hard to attribute regression failures to individual committers, but something like a daily or weekly summary that people could subscribe to might be useful. I think that should be a job for the test server which aggregates test results, not for the test instances themselves.

I think regressions are best fixed as fast as possible.

I'll try to work out how to integrate everything needed, but if you could send me the scripts that managed the old test servers, or any other test scripts, it would make things easier for me.

Let's start by changing the alias test.openacs.org to strauss.gast.it.uc3m.es.
(If you think it's a good idea; at least I do.)

Posted by Jeff Davis on
On Oracle, I think we should be testing against Oracle 9i.

I think regressions are best fixed as fast as possible.

Of course I agree, and the implication that I think otherwise is insulting given how much time I spend doing QA and bug fixing. I just don't think blasting out a tremendous amount of mail indiscriminately is really a great way to get there. Certainly if it were generating a lot of spurious email for me, I would simply bounce it, as would most others, I imagine.

Let's start by changing the alias test.openacs.org to strauss.gast.it.uc3m.es. (If you think it's a good idea; at least I do.)

I think test.openacs.org is going to stay at Furfly, since we want to run multiple test instances and aggregate results centrally (and it needs to be somewhere we can give people shell access).

Posted by Eduardo Pérez on
On Oracle, I think we should be testing against Oracle 9i.
What would be the problem if we started using Oracle 10g instead?
(We are having many problems installing Oracle 9i.)
Posted by Nick Carroll on
Why not have an RSS feed that publishes the regressions? Those willing to subscribe can do so. That way you don't have to bother anyone in particular.
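A sketch of that idea, reusing the hypothetical test_summary::recent_failures helper from the earlier sketch; it just writes a static RSS 2.0 file somewhere the web server can serve it, and the output path is only an example.

namespace eval test_summary {}

ad_proc test_summary::write_failures_rss {} {
    Publish recent automated test failures as a static RSS 2.0 feed.
} {
    set items ""
    foreach failure [test_summary::recent_failures -days 1] {
        append items "    <item><title>[ns_quotehtml $failure]</title></item>\n"
    }
    set feed "<?xml version=\"1.0\"?>
<rss version=\"2.0\">
  <channel>
    <title>OpenACS automated test failures</title>
    <link>http://test.openacs.org/</link>
    <description>Regressions detected by the automated test servers</description>
$items  </channel>
</rss>"
    # Assumed output location; aggregators would point at /test-failures.rss.
    set fd [open "[acs_root_dir]/www/test-failures.rss" w]
    puts $fd $feed
    close $fd
}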
Posted by Joel Aufrecht on
To set up a test server, see the "For each server that will be monitored" section of the setup docs. Step 4 of that section describes how to get the results XML into a file. What we need to do instead is get that XML to the new test.openacs.org, once it exists. I am thinking that a scheduled rsync or scp would work.
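As a sketch of the scheduled-copy idea in Tcl: it assumes passwordless ssh from each test instance to the central box and made-up source and target paths; a plain cron job running the same rsync command would work just as well.

namespace eval test_results {}

ad_proc test_results::push_to_central {} {
    Copy the locally generated results XML to the central test.openacs.org server.
} {
    # Both paths are assumptions; one results file per host, named after the host.
    if { [catch {
        exec rsync -az /var/log/openacs-test/results.xml \
            test.openacs.org:/web/test-central/results/[ns_info hostname].xml
    } errmsg] } {
        ns_log Error "push_to_central: rsync failed: $errmsg"
    }
}

# Push once an hour.
ad_schedule_proc -thread t 3600 test_results::push_to_central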

Regarding Perl web testing, please see my response in that thread. Regression testing is nice; regression testing that uses the already-working scheme for reporting and aggregating results through the central test.openacs.org is nicer.

Posted by Eduardo Pérez on
We already have a working installation of Oracle 10g.
I'm also reviewing the current openacs.org test/install scripts to automate the installation and tests.
Posted by Eduardo Pérez on
We have a testing server at:
http://strauss.gast.it.uc3m.es/

The automatic reinstallation isn't fully automated yet, but you can already try it to see the latest features or bugs in oacs-5-1 and HEAD, both on PostgreSQL and Oracle.

Any suggestions welcome!

Posted by Ryan Gallimore on
Are you using tclwebtest? If so, how did you get past this error:

----- START: tcl-api-test.test at [15/Jun/2006:23:20:12] -----

##############################
#
# tcl-api-test.test: Login the site wide admin
#
##############################

--- do_request for http://test.viscousmedia.com:8001/register/logout
tcl-api-test.test: Failed to connect to server with error "can't read "http_status": no such variable" - giving up
tcl-api-test.test: can't read "http_status": no such variable
tcl-api-test.test: *** Tcl TRACE ***
tcl-api-test.test: can't read "http_status": no such variable
    while executing
"log "http status: >>$http_status<<""
    (procedure "::tclwebtest::do_request" line 82)
    invoked from within
"::tclwebtest::do_request $page_url"
    (procedure "::twt::do_request" line 1)
    invoked from within
"::twt::do_request "[::twt::config::server_url]/register/logout""
    (procedure "::twt::user::logout" line 2)
    invoked from within
"::twt::user::logout"
    (procedure "::twt::user::login" line 3)
    invoked from within
"::twt::user::login [::twt::config::admin_email]"
    (procedure "::twt::user::login_site_wide_admin" line 3)
    invoked from within
"::twt::user::login_site_wide_admin"
tcl-api-test.test: The response body is:

Test failed: can't read "http_status": no such variable
    while executing
"error "Test failed: $result""
    invoked from within
"if { [catch {
    # Source procedures
    source tcl/test-procs.tcl

    # Test Execution START

    ::twt::log_section "Login the site wide admin"
  ..."
    (file "tcl-api-test.test" line 1)
    invoked from within
"source tcl-api-test.test"
    ("uplevel" body line 3)
    invoked from within
"uplevel $uplevel $to_eval "
in "tcl-api-test.test" line 1:
if { [catch {

-----  FAILED: tcl-api-test.test (took 0s)               -----

DURATION: 0
1 of 1 tests FAILED:
tcl-api-test.test

I am not running daemontools, and testing oacs-5-2. Thanks.
Posted by Ryan Gallimore on
The above error was resolved when the server was run on port 80.
Posted by Ryan Gallimore on
The server comes up and an XML report file is generated, but it does not include the output from the tests on the packages. I think the script is not importing the acs-automated-testing parameters. I have output, but I am unable to paste the XML here. How do I do this? And why isn't the install script picking up the parameters? Cheers.