
Weblog Page


How to package and release an OpenACS Package

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

In this example, we are packaging and releasing myfirstpackage as version 1.0.0, which is compatible with OpenACS 5.0.x.

  1. Update the version number, release date, and package maturity of your package in the APM.

  2. Make sure all changes are committed.

  3. Tag the updated work:

    cd /var/lib/aolserver/$OPENACS_SERVICE_NAME/packages/myfirstpackage
    cvs tag myfirstpackages-1-0-0-final
    cvs tag -F openacs-5-0-compat
    

Done. The package will be added to the repository automatically. If the correct version does not show up within 24 hours, ask for help on the OpenACS.org development forum.

Release Version Numbering

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Ron Henderson, Revised by Joel Aufrecht

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.

OpenACS version numbers help identify at a high-level what is in a particular release and what has changed since the last release.

A "version number" is really just a string of the form:

major.minor.dot[ milestone ]

  • A major number change indicates a fundamental change in the architecture of the system, e.g. OpenACS 3 to ACS 4. A major change is required if core backwards compatibility is broken, if upgrade is non-trivial, or if the platform changes substantially.

  • A minor change represents the addition of new functionality or changed UI.

  • A dot holds only bug fixes and security patches. Dot releases are always recommended and safe.

  • A milestone marker indicates the state of the release:

    • d, for development, means the release is in active development and is not in its intended released form.

    • a, for alpha, means new development is complete and code checkins are frozen. Alpha builds should work well enough to be testable.

    • b, for beta, means most severe bugs are fixed and end users can start trying the release.

    • Release Candidate builds (rc) are believed to meet all of the criteria for release and can be installed on test instances of production systems.

    • Final releases have no milestone marker. (Exception: In CVS, they are tagged with -final to differentiate them from branch tags.)

    Milestone markers are numbered: d1, d2, ..., a1, b1, rc1, etc.

A complete sequence of milestones between two releases:

5.0.0
5.0.0rc2
5.0.0rc1
5.0.0b4
5.0.0b1
5.0.0a4
5.0.0a3
5.0.0a1
5.0.0d1
4.6.3

Version numbers are also recorded in the CVS repository so that the code tree can be restored to the exact state it was in for a particular release. To translate between a distribution tar file (acs-3.2.2.tar.gz) and a CVS tag, just swap '.' for '-'. The entire release history of the toolkit is recorded in the tags for the top-level readme.txt file:

> cvs log readme.txt
RCS file: /usr/local/cvsroot/acs/readme.txt,v
Working file: readme.txt
head: 3.1
branch:
locks: strict
access list:
symbolic names:
	acs-4-0: 3.1.0.8
	acs-3-2-2-R20000412: 3.1
	acs-3-2-1-R20000327: 3.1
	acs-3-2-0-R20000317: 3.1
	acs-3-2-beta: 3.1
	acs-3-2: 3.1.0.4
	acs-3-1-5-R20000304: 1.7.2.2
	acs-3-1-4-R20000228: 1.7.2.2
	acs-3-1-3-R20000220: 1.7.2.2
	acs-3-1-2-R20000213: 1.7.2.1
	acs-3-1-1-R20000205: 1.7.2.1
	acs-3-1-0-R20000204: 1.7
	acs-3-1-beta: 1.7
	acs-3-1-alpha: 1.7
	acs-3-1: 1.7.0.2
	v24: 1.5
	v23: 1.4
	start: 1.1.1.1
	arsdigita: 1.1.1
keyword substitution: kv
total revisions: 13;	selected revisions: 13
description:
...

In the future, OpenACS packages should follow this same convention on version numbers.

So what distinguishes an alpha release from a beta release? Or from a production release? We follow a specific set of rules for how OpenACS makes the transition from one state of maturity to the next. These rules are fine-tuned with each release; an example is the set of 5.0.0 Milestones and Milestone Criteria.

Each package has a maturity level. Maturity level is recorded in the .info file for each major-minor release of OpenACS, and is set to the appropriate value for that release of the package.

    <version ...>
        <provides .../>
        <requires .../>
        <maturity>1</maturity>
        <callbacks>
            ...
        </callbacks>
    </version>
  • Level -1: Incompatible. This package is not supported for this platform and should not be expected to work.

  • Level 0: New Submission. This is the default for packages that do not have maturity explicitly set, and for new contributions. The only criterion for level 0 is that at least one person asserts that it works on a given platform.

  • Level 1: Immature. Has no open priority 1 or priority 2 bugs. Has been installed by at least 10? different people, including 1 core developer. Has been available in a stable release for at least 1 month. Has API documentation.

  • Level 2: Mature. Same as Level 1, plus has install guide and user documentation; no serious deviations from general coding practices; no namespace conflicts with existing level 2 packages.

  • Level 3: Mature and Standard. Same as level 2, plus meets published coding standards; is fully internationalized; available on both supported databases.

Database upgrade scripts must be named very precisely in order for the Package Manager to run the correct script at the correct time.

  1. Upgrade scripts should be named /packages/myfirstpackage/sql/postgresql/upgrade/upgrade-OLDVERSION-NEWVERSION.sql

  2. If the version you are working on is a later version than the current released version, OLDVERSION should be the current version. The current version is the package version recorded in the APM and in /packages/myfirstpackage/myfirstpackage.info. So if forums is at 2.0.1, OLDVERSION should be 2.0.1d1. Note that this means that new version development that includes an upgrade must start at d2, not d1.

  3. If you are working on a pre-release version of a package, use the current package version as OLDVERSION. Increment the package version as appropriate (see above) and use the new version as NEWVERSION. For example, if you are working on 2.0.1d3, make it 2.0.1d4 and use upgrade-2.0.1d3-2.0.1d4.sql.

  4. Database upgrades should be confined to development releases, not alpha or beta releases.

  5. Never use a final release number as a NEWVERSION. If you do, then it is impossible to add any more database upgrades without incrementing the overall package version.

  6. Use only the d, a, and b letters in OLDVERSION and NEWVERSION. rc is not supported by OpenACS APM.

  7. The distance from OLDVERSION to NEWVERSION should never span a release. For example if we had a bug fix in acs-kernel on 5.1.0 you wouldn't want a file upgrade-5.0.4-5.1.0d1.sql since if you subsequently need to provide a 5.0.4-5.0.5 upgrade you will have to rename the 5.0.4-5.1.0 upgrade since you can't have upgrades which overlap like that. Instead, use upgrade-5.1.0d1-5.1.0d2.sql

Set up Log Analysis Reports

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

Analog is a program which processes webserver access logs, performs DNS lookups, and outputs HTML reports. Analog should already be installed. A modified configuration file is included in the OpenACS tarball.

  1. [root src]# su - $OPENACS_SERVICE_NAME
    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cd /var/lib/aolserver/$OPENACS_SERVICE_NAME
    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cp /var/lib/aolserver/$OPENACS_SERVICE_NAME/packages/acs-core-docs/www/files/analog.cfg.txt etc/analog.cfg
    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ mkdir www/log
    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cp -r /usr/share/analog-5.32/images www/log/
    

    Edit /var/lib/aolserver/$OPENACS_SERVICE_NAME/etc/analog.cfg and change the HOSTNAME setting "[my organisation]" to reflect your website title. If you don't want the traffic log to be publicly visible, change OUTFILE /var/lib/aolserver/$OPENACS_SERVICE_NAME/www/log/traffic.html to use a private directory. You'll also need to edit all instances of service0 to your $OPENACS_SERVICE_NAME.

  2. Run it.

    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ /usr/share/analog-5.32/analog -G -g/var/lib/aolserver/$OPENACS_SERVICE_NAME/etc/analog.cfg
    /usr/share/analog-5.32/analog: analog version 5.32/Unix
    /usr/share/analog-5.32/analog: Warning F: Failed to open DNS input file
      /home/$OPENACS_SERVICE_NAME/dnscache: ignoring it
      (For help on all errors and warnings, see docs/errors.html)
    /usr/share/analog-5.32/analog: Warning R: Turning off empty Search Word Report
    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$

    Verify that it works by browsing to http://yourserver.test:8000/log/traffic.html

  3. Automate this by creating a file in /etc/cron.daily.

    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ exit
    logout
    
    [root root]# emacs /etc/cron.daily/analog
    

    Put this into the file:

    #!/bin/sh

    /usr/share/analog-5.32/analog -G -g/var/lib/aolserver/$OPENACS_SERVICE_NAME/etc/analog.cfg

    Make the script executable:

    [root root]# chmod 755 /etc/cron.daily/analog
    

    Test it by running the script.

    [root root]# sh /etc/cron.daily/analog
    

    Browse to http://yourserver.test/log/traffic.html

Internationalization

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Peter Marklund and Lars Pind

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.

Webtest

Created by Anett Szabo, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

API testing is only part of testing your package - it doesn't test the code in your adp/tcl pairs. For this, we can use TclWebtest (see its SourceForge project page).

TclWebtest is primarily for user interface and acceptance testing. It is a tool for writing automated tests for web applications. It provides a simple API for issuing HTTP requests, dealing with the result, and asserting specific response values, while taking care of details such as redirects and cookies.
It has some basic HTML parsing functionality, to provide access to the elements of the result page that are needed for testing (mainly links and forms).

  • TclWebtest provides a library of functions (see the command reference) that make it easy to call a page through HTTP, examine the results, and drive forms. TclWebtest's functions overlap slightly with acs-automated-testing; see the example provided for one approach to integrating them.
  • TclWebtest tries to minimize the effort of writing tests by implicitly assuming specific conditions whenever that makes sense. For example, it always expects the server to return HTTP codes other than 404 or 500, unless otherwise specified.
  • The assertion procedures are targeted at test writers who want to make sure the behaviour of their web applications stays the same, without caring for style or minor wording changes. In the example below, it is just assumed that there is a link with the text "login" on the first page, that clicking on it results in a page with at least one form with at least two text-entry fields on it, and that submitting the form with the specified values results in a page that contains the "logged in" text.
  • TclWebtest is suitable for testing longer chains of user interaction on a web application, for example a full ecommerce ordering session. tclwebtest could visit an ecommerce site as an anonymous user, add some products to its shopping cart, check out the cart, register itself as a user, and enter a test address. The test script could also cover the administration part of the interaction by explicitly logging in as site admin, reviewing and processing the order, nuking the test user, and so on.
  • TclWebtest must be installed for this to work. Since automated testing uses it, it should be part of every OpenACS installation. Note that TclWebtest is installed automatically by Malte's install script.

Hint:

To simplify the generation of tclwebtest scripts, the Webtest-Recorder extension (TwtR) for Firefox is available; see http://www.km.co.at/km/twtr. This plugin is used to generate and edit a tclwebtest script, which can later be used for regression testing without the need for a browser. There is some overlap in scope between Selenium and TwtR. The plugin was developed by Åsmund Realfsen for regression and load testing of the assessment module.

 


A typical script for tclwebtest looks like this:

set SERVER "testserver"
do_request "http://$SERVER/sometesturl/"
assert text "some text"

link follow "login"

field fill "testuser"
field fill "testpassword"
form submit

assert text "you are logged in as testuser"
    
This script can be saved in a file, e.g. login.test, and executed with ./tclwebtest login.test. The script itself is Tcl, so you can do powerful things with only a few commands.
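Because the script is plain Tcl, you can mix ordinary control structures with the tclwebtest commands. A minimal sketch (the URLs and the asserted text are made up for illustration):

set SERVER "testserver"

# Visit a few pages and check that each one renders the site name
foreach page {/about /contact /faq} {
    do_request "http://$SERVER$page"
    assert text "My Site Name"
}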

 

The TclWebtest helper procedures used by the OpenACS install and regression tests (for example twt::do_request and twt::user::create, both used in the example below) can be found in http://cvs.openacs.org/cvs/openacs-4/etc/install/tcl/twt-procs.tcl?rev=1.18. See also the TclWebtest command reference.

Here are some guidelines on how to write automated tests with TclWebtest. It is a joy to work with automated testing once you get the hang of it. We will use "myfirstpackage" as an example.

Create the directory that will contain the test script and edit the script file. The directory location and file name are standards which are recognized by the automated testing package:

[$OPENACS_SERVICE_NAME www]$ mkdir /var/lib/aolserver/$OPENACS_SERVICE_NAME/packages/myfirstpackage/tcl/test
[$OPENACS_SERVICE_NAME www]$ cd /var/lib/aolserver/$OPENACS_SERVICE_NAME/packages/myfirstpackage/tcl/test

[$OPENACS_SERVICE_NAME test]$ emacs myfirstpackages-procs.tcl

Write the tests. This is obviously the big step :) The script should first call ad_library like any normal -procs.tcl file:

ad_library {
    ...
}

To create a test case you call aa_register_case test_case_name. Once you've created the test case you start writing the needed logic. We'll use the tutorial package, "myfirstpackage", as an example. Let's say you just wrote an API for adding and deleting notes in the notes package and wanted to test it. You'd probably want to write a test that first creates a note, then verifies that it was inserted, then perhaps deletes it again, and finally verifies that it is gone.

Naturally this means you'll be adding a lot of bogus data to the database, which you're not really interested in having there. To avoid this I usually do two things. I always put all my test code inside a call to aa_run_with_teardown, which basically means that all the inserts, deletes, and updates will be rolled back once the test has been executed. A very useful feature. Instead of inserting bogus data like set name "Simon", I tend to generate a random string, in order to avoid inserting a value that's already in the database:

set name [ad_generate_random_string]

Here's how the test case looks so far:

aa_register_case mfp_basic_test {
    My test
} {
    aa_run_with_teardown  -rollback  -test_code  {

       }
}

Now look at the actual test code. That's the code that goes inside -test_code {}. We want to implement test case API-001, "Given an object id from API-001, invoke mfp::note::get. Proc should return the specific word in the title."

      set name [ad_generate_random_string]
      set new_id [mfp::note::add -title $name]
      aa_true "Note add succeeded" [exists_and_not_null new_id]

To test our simple case, we must load the test file into the system (just as with the /tcl file in the basic tutorial: since the file didn't exist when the system started, the system doesn't know about it). To make this file take effect, go to the APM and choose "Reload changed" for "MyFirstPackage". Since we'll be changing it frequently, select "watch this file" on the next page. This will cause the system to check this file every time any page is requested, which is bad for production systems but convenient for development. We can also add some aa_register_case flags to make it easier to run the test. The -procs flag, which indicates which procs are tested by this test case, makes it easier to find procs in your package that aren't tested at all. The -cats flag, setting categories, makes it easier to control which tests to run. The smoke test category means that this is a basic test case that can and should be run any time you are doing any testing (see a definition of "smoke test").

Once the file is loaded, go to ACS Automated Testing and click on myfirstpackage. You should see your test case. Run it and examine the results.

 

Example

Now we can add the rest of the API tests, including a test with deliberately bad data. The complete test looks like:

ad_library {
    Test cases for my first package.
}

 


    
aa_register_case -cats {smoke api} \
    -procs {mfp::note::add mfp::note::get mfp::note::delete} \
    mfp_basic_test {
    A simple test that adds, retrieves, and deletes a record.
} {
    aa_run_with_teardown -rollback -test_code {
        set name [ad_generate_random_string]
        set new_id [mfp::note::add -title $name]
        aa_true "Note add succeeded" [exists_and_not_null new_id]

        # Now check that the item exists and that its title matches $name
        mfp::note::get -item_id $new_id -array note_array
        aa_true "Note contains correct title" [string equal $note_array(title) $name]

        # Delete the note, then verify that retrieving it fails
        mfp::note::delete -item_id $new_id
        set get_again [catch {mfp::note::get -item_id $new_id -array note_array}]
        aa_false "After deleting a note, retrieving it fails" [expr {$get_again == 0}]
    }
}
            
aa_register_case  -cats {api}  -procs {mfp::note::add mfp::note::get mfp::note::delete}  mfp_bad_data_test  {
        A simple test that adds, retrieves, and deletes a record, using some tricky data.
    } {
        aa_run_with_teardown  -rollback  -test_code  {
                set name {-Bad [BAD] \077 { $Bad}}    
                #Now name becomes this very unusual value: -Bad [BAD] \077 { $Bad}
                append name [ad_generate_random_string]
                set new_id [mfp::note::add -title $name]
                # new_id now holds the id returned by mfp::note::add for the new note
                aa_true "Note add succeeded" [exists_and_not_null new_id]
                # Verify that the new note can be retrieved and has the correct title
                mfp::note::get -item_id $new_id -array note_array
                aa_true "Note contains correct title" [string equal $note_array(title) $name]
                aa_log "Title is $name"
                mfp::note::delete -item_id $new_id

                set get_again [catch {mfp::note::get -item_id $new_id -array note_array}]
                aa_false "After deleting a note, retrieving it fails" [expr $get_again == 0]
            }
    }  


aa_register_case \
    -cats {web smoke} \
    -libraries tclwebtest \
    mfp_web_basic_test {
    A simple tclwebtest test case for the tutorial demo package.

    @author Peter Marklund
} {
# we need to get a user_id here so that it's available throughout
# this proc
set user_id [db_nextval acs_object_id_seq]

set note_title [ad_generate_random_string]

# NOTE: Never use the aa_run_with_teardown with the rollback switch
# when running Tclwebtest tests since this will put the test code in
# a transaction and changes won't be visible across HTTP requests.

aa_run_with_teardown -test_code {

#-------------------------------------------------------------
# Login
#-------------------------------------------------------------

# Make a site-wide admin user for this test
# We use an admin to avoid permission issues
array set user_info [twt::user::create -admin -user_id $user_id]

# Login the user
twt::user::login $user_info(email) $user_info(password)

#-------------------------------------------------------------
# New Note
#-------------------------------------------------------------

# Request note-edit page
set package_uri [apm_package_url_from_key myfirstpackage]
set edit_uri "${package_uri}note-edit"
aa_log "[twt::server_url]$edit_uri"
twt::do_request "[twt::server_url]$edit_uri"

# Submit a new note

tclwebtest::form find ~n note
tclwebtest::field find ~n title
tclwebtest::field fill $note_title
tclwebtest::form submit

#-------------------------------------------------------------
# Retrieve note
#-------------------------------------------------------------

# Request index page and verify that note is in listing
tclwebtest::do_request $package_uri
aa_true "New note with title \"$note_title\" is found in index page"
[string match "*${note_title}*" [tclwebtest::response body]]

#-------------------------------------------------------------
# Delete Note
#-------------------------------------------------------------
# Delete all notes

# Three options to delete the note
# 1) go directly to the database to get the id
# 2) require an API function that takes name and returns ID
# 3) screen-scrape for the ID
# all options are problematic. We'll do #1 in this example:

set note_id [db_string get_note_id_from_name "
select item_id
from cr_items
where name = :note_title
and content_type = 'mfp_note'
" -default 0]

aa_log "Deleting note with id $note_id"

set delete_uri "${package_uri}note-delete?item_id=${note_id}"
twt::do_request $delete_uri

# Request index page and verify that the note no longer appears in the listing
tclwebtest::do_request $package_uri
aa_true "Note with title \"$note_title\" is not found in index page after deletion." \
    ![string match "*${note_title}*" [tclwebtest::response body]]

} -teardown_code {

twt::user::delete -user_id $user_id
}
}


 

Distributing upgrades of your package

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

by Jade Rubick

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.

The OpenACS Package Repository builds a list of packages that can be installed on OpenACS installations, and can be used by administrators to update their packages. If you are a package developer, there are a couple of steps you need to take in order to release a new version of your package.

For the sake of this example, let's assume you are the package owner of the notes package. It is currently at version 1.5, and you are planning on releasing version 1.6. It is also located in OpenACS's CVS.

To release your package:

cd /path/to/notes
cvs commit -m "Update package to version 1.6."
cvs tag notes-1-6-final
cvs tag -F openacs-5-1-compat

Of course, make sure you write upgrade scripts (the section called “Writing upgrade scripts”)

Programming with AOLserver

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Michael Yoon, Jon Salz and Lars Pind.

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.

When using AOLserver, remember that there are effectively two types of global namespace, not one:

  1. Server-global: As you'd expect, there is only one server-global namespace per server, and variables set within it can be accessed by any Tcl code running subsequently, in any of the server's threads. To set/get server-global variables, use AOLserver 3's nsv API (which supersedes ns_share from the pre-3.0 API).

  2. Script-global: Each Tcl script (ADP, Tcl page, registered proc, filter, etc.) executing within an AOLserver thread has its own global namespace. Any variable set in the top level of a script is, by definition, script-global, meaning that it is accessible only by subsequent code in the same script and only for the duration of the current script execution.

The Tcl built-in command global accesses script-global, not server-global, variables from within a procedure. This distinction is important to understand in order to use global correctly when programming AOLserver.

Also, AOLserver purges all script-global variables in a thread (i.e., Tcl interpreter) between HTTP requests. If it didn't, that would affect (and complicate) our use of script-global variables dramatically, which would then be better described as thread-global variables. Given AOLserver's behaviour, however, "script-global" is a more appropriate term.
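A minimal sketch of the distinction (the nsv array name, the variable, and the proc are made up for illustration):

# Server-global: one value per server, visible from any thread and any request
nsv_set my_counters hits 0
nsv_incr my_counters hits
ns_log Notice "hits so far: [nsv_get my_counters hits]"

# Script-global: set at the top level of this script, discarded after the request
set request_start [clock clicks -milliseconds]

proc log_elapsed {} {
    # 'global' reaches the script-global variable above, not a server-global one
    global request_start
    ns_log Notice "elapsed: [expr {[clock clicks -milliseconds] - $request_start}] ms"
}
log_elapsed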

ns_schedule_proc and ad_schedule_proc each take a -thread flag to cause a scheduled procedure to run asynchronously, in its own thread. It almost always seems like a good idea to specify this switch, but there's a problem.

It turns out that whenever a task scheduled with ns_schedule_proc -thread or ad_schedule_proc -thread t is run, AOLserver creates a brand new thread and a brand new interpreter, and reinitializes the procedure table (essentially, loads all procedures that were created during server initialization into the new interpreter). This happens every time the task is executed - and it is a very expensive process that should not be taken lightly!

The moral: if you have a lightweight scheduled procedure which runs frequently, don't use the -thread switch.

Note also that the thread is initialized with a copy of what was installed during server startup, so if the procedure table has changed since startup (e.g. using the APM watch facility), that will not be reflected in the scheduled thread.
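For example (a minimal sketch; the proc names and intervals are made up, and it assumes ad_schedule_proc's usual interval-then-proc argument order with -thread taking a t/f value):

# Lightweight sweep every 5 minutes: keep it in the shared scheduler thread
ad_schedule_proc 300 my_lightweight_sweep

# Heavyweight nightly job: give it its own thread, accepting the per-run cost
# of creating the thread and copying the procedure table
ad_schedule_proc -thread t 86400 my_expensive_nightly_job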

The return command in Tcl returns control to the calling procedure. This definition allows nested procedures to work properly. However, it also means that nested procedures cannot use return to end an entire thread. This situation is most common in exception conditions that can be triggered from inside a procedure, e.g. a permission-denied exception. At this point, the procedure that detects the invalid permission wants to write an error message to the user and completely abort execution of the calling thread. return doesn't work, because the procedure may be nested several levels deep. We therefore use ad_script_abort to abort the remainder of the thread. Note that using return instead of ad_script_abort may raise security issues: an attacker could call a page that performs some DML statement, pass in some arguments, and get a permission-denied error -- but the DML statement would still be executed because the thread was not stopped. Note that return -code return can be used in circumstances where the procedure will only be called from two levels deep.
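A minimal sketch of the pattern (the object_id variable and the message text are made up for illustration):

# Permission check in a page or in a proc nested several levels deep.
# ad_script_abort stops the whole request, so any DML further down the
# page is never reached when the check fails.
if { ![permission::permission_p -object_id $object_id -privilege write] } {
    ad_return_forbidden "Permission Denied" \
        "You do not have permission to modify this object."
    ad_script_abort
}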

Many functions have a single return value. For instance, empty_string_p returns a number: 1 or 0. Other functions need to return a composite value. For instance, consider a function that looks up a user's name and email address, given an ID. One way to implement this is to return a three-element list and document that the first element contains the name, and the second contains the email address. The problem with this technique is that, because Tcl does not support constants, calling procedures that returns lists in this way necessitates the use of magic numbers, e.g.:

set user_info [ad_get_user_info $user_id]
set first_name [lindex $user_info 0]
set email [lindex $user_info 1]

AOLserver/Tcl offers three mechanisms that we like for returning more than one value from a function. When to use which depends on the circumstances.

Using Arrays and Pass-By-Value

The one we generally prefer is returning an array get-formatted list. It has all the nice properties of pass-by-value, and it uses Tcl arrays, which have good native support.

ad_proc ad_get_user_info { user_id } {
    db_1row user_info { select first_names, last_name, email from users where user_id = :user_id }
    return [list \
        name "$first_names $last_name" \
        email $email \
        namelink "<a href=\"/shared/community-member?user_id=[ns_urlencode $user_id]\">$first_names $last_name</a>" \
        emaillink "<a href=\"mailto:$email\">$email</a>"]
}

array set user_info [ad_get_user_info $user_id]

doc_body_append "$user_info(namelink) ($user_info(emaillink))"

You could also have done this by using an array internally and using array get:

ad_proc ad_get_user_info { user_id } {
    db_1row user_info { select first_names, last_name, email from users where user_id = :user_id }
    set user_info(name) "$first_names $last_name"
    set user_info(email) $email
    set user_info(namelink) "<a href=\"/shared/community-member?user_id=[ns_urlencode $user_id]\">$first_names $last_name</a>"
    set user_info(emaillink) "<a href=\"mailto:$email\">$email</a>"
    return [array get user_info]
}

Using Arrays and Pass-By-Reference

Sometimes pass-by-value incurs too much overhead, and you'd rather pass by reference. Specifically: you're writing a proc that uses an array internally to build up some value, the array has many entries, and you plan to call the proc many times. In this case, pass-by-value is expensive, and you'd use pass-by-reference.

The transformation of the array into a list and back to an array takes, in our test environment, approximately 10 microseconds per entry of 100 characters' length. Thus you can process about 100 entries per millisecond. The time depends almost entirely on the number of entries, and almost not at all on the size of the entries.

You implement pass-by-reference in Tcl by taking the name of an array as an argument and using upvar on it.

ad_proc ad_get_user_info {
    -array:required
    user_id
} {
    upvar $array user_info
    db_1row user_info { select first_names, last_name, email from users where user_id = :user_id }
    set user_info(name) "$first_names $last_name"
    set user_info(email) $email
    set user_info(namelink) "<a href=\"/shared/community-member?user_id=[ns_urlencode $user_id]\">$first_names $last_name</a>"
    set user_info(emaillink) "<a href=\"mailto:$email\">$email</a>"
}

ad_get_user_info -array user_info $user_id

doc_body_append "$user_info(namelink) ($user_info(emaillink))"

We prefer pass-by-value over pass-by-reference. Pass-by-reference makes the code harder to read and debug, because changing a value in one place has side effects in other places. Especially if you have a chain of upvars through several layers of the call stack, you'll have a hard time debugging.

Multisets: Using ns_sets and Pass-By-Reference

An array is a kind of set, which means you can't have multiple entries with the same key. Data structures that can have multiple entries for the same key are known as multisets or bags.

If your data can have multiple entries with the same key, you should use the AOLserver built-in ns_set. You can also do a case-insensitive lookup on an ns_set, something you can't easily do on an array. This is especially useful for things like HTTP headers, which happen to have these exact properties.

You always use pass-by-reference with ns_sets, since they don't have any built-in way of generating and reconstructing themselves from a string representation. Instead, you pass the handle to the set.

ad_proc ad_get_user_info {
    -set:required
    user_id
} {
    db_1row user_info { select first_names, last_name, email from users where user_id = :user_id }
    ns_set put $set name "$first_names $last_name"
    ns_set put $set email $email
    ns_set put $set namelink "<a href=\"/shared/community-member?user_id=[ns_urlencode $user_id]\">$first_names $last_name</a>"
    ns_set put $set emaillink "<a href=\"mailto:$email\">$email</a>"
}

set user_info [ns_set create]
ad_get_user_info -set $user_info $user_id

doc_body_append "[ns_set get $user_info namelink] ([ns_set get $user_info emaillink])"

We don't recommend ns_set as a general mechanism for passing sets (as opposed to multisets) of data. Not only do they inherently use pass-by-reference, which we dislike, they're also somewhat clumsy to use, since Tcl doesn't have built-in syntactic support for them.

Consider, for example, a loop over the entries in an ns_set as compared to an array:

# ns_set variant
set size [ns_set size $myset]
for { set i 0 } { $i < $size } { incr i } {
    puts "[ns_set key $myset $i] = [ns_set value $myset $i]"
}

# array variant
foreach name [array names myarray] {
    puts "$name = $myarray($name)"
}

And this example of constructing a value:

# ns_set variant
set myset [ns_set create]
ns_set put $myset foo $foo
ns_set put $myset baz $baz
return $myset

# array variant
return [list \
    foo $foo \
    baz $baz]

ns_sets are designed to be lightweight, so memory consumption should not be a problem. However, when using ns_set get to perform lookup by name, they perform a linear lookup, whereas arrays use a hash table, so ns_sets are slower than arrays when the number of entries is large.

Using CVS for backup-recovery

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

CVS-only backup is often appropriate for development sites. If you are already using CVS and your data is not important, you probably don't need to do anything to back up your files. Just make sure that your current work is checked into the system. You can then roll back based on date - note the current system time, down to the minute. For maximum safety, you can apply a tag to your current files. You will still need to back up your database.

Note that, if you followed the CVS instructions in this document, the /var/lib/aolserver/$OPENACS_SERVICE_NAME/etc directory is not included in cvs, and you may want to add it.

[root root]# su - $OPENACS_SERVICE_NAME
[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cd /var/lib/aolserver/$OPENACS_SERVICE_NAME

[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cvs commit -m "last-minute commits before upgrade to 4.6"
cvs commit: Examining .
cvs commit: Examining bin
(many lines omitted)
[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cvs tag before_upgrade_to_4_6
cvs server: Tagging bin
T bin/acs-4-0-publish.sh
T bin/ad-context-server.pl
(many lines omitted)
[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ exit
[root root]#

To restore files from a cvs tag such as the one used above:

[root root]# su - $OPENACS_SERVICE_NAME
[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cd /var/lib/aolserver/$OPENACS_SERVICE_NAME

[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cvs up -r current
[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ exit

Basic Caching

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

Based on a post by Dave Bauer.

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.
  1. Implement your proc as my_proc_not_cached

  2. Create a version of your proc called my_proc which wraps the non-cached version in the caching mechanism. In this example, my_proc_not_cached takes one argument, -foo, so the wrapper passes that on. The wrapper also uses the list command, to ensure that the arguments get passed correctly and to prevent commands passed in as arguments from being executed.

    ad_proc my_proc {-foo} {
            Get a cached version of my_proc.
    } {
        return [util_memoize [list my_proc_not_cached -foo $foo]]
    }
  3. In your code, always call my_proc. There will be a separate cache item for each unique call to my_proc_not_cached, so that calls with different arguments are cached separately. You can flush the cache for each cache key by calling util_memoize_flush my_proc_not_cached args.

  4. The cached material will of course become obsolete over time. There are two ways to handle this.

    • Timed Expiration: pass in max_age to util_memoize. If the content is older than max_age, it will be re-generated.

    • Direct Flushing. In any proc which invalidates the cached content, call util_memoize_flush my_proc_not_cached args.

  5. If you are correctly flushing the cached value, then it will need to be reloaded. You may wish to pre-load it, so that the loading delay does not impact users. If you have a sequence of pages, you could call the cached proc in advance, to increase the chances that it's loaded and current when the user reaches it. Or, you can call (and discard) it immediately after flushing it, as sketched below.
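A minimal sketch of flushing and immediately re-priming the cache, using the my_proc_not_cached example from above:

# Invalidate the cached value for this particular argument ...
util_memoize_flush [list my_proc_not_cached -foo $foo]

# ... and re-prime it right away, so the next caller gets a warm cache.
# util_memoize re-runs my_proc_not_cached and stores the fresh result.
util_memoize [list my_proc_not_cached -foo $foo]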

Where did this document come from?

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

This document was created by Vinod Kurup, but it's really just plagiarism from a number of documents that came before it. If I've used something that you've written without proper credit, let me know and I'll fix it right away.

Versions 4.6.2 to present were edited by Joel Aufrecht.

These are a few of my sources:

Please also see the Credits section for more acknowledgements.
