
How to Update the translations

  1. Identify any new locales that have been created. For each new locale, check the parameters, especially that the locale is in the format [two-letter language code, lower-case]_[TWO-LETTER COUNTRY CODE, UPPER-CASE], and create a SQL command. An example SQL command for creating a locale is:

    insert into ad_locales
           (locale, label, language, country, nls_language, nls_territory,
            nls_charset, mime_charset, default_p, enabled_p)
           values ('fa_IR', 'Farsi (IR)', 'fa', 'IR', 'FARSI', 'IRAN', 'AL24UTFFSS',
            'windows-1256', 't', 'f');

    Put this command into the following four files. For the upgrade files, the correct file name will depend on the exact version.

    • /packages/acs-lang/sql/postgresql/ad-locales.sql

    • /packages/acs-lang/sql/postgresql/upgrade/upgrade-current-version.sql

    • /packages/acs-lang/sql/oracle/ad-locales.sql

    • /packages/acs-lang/sql/oracle/upgrade/upgrade-current-version.sql

  2. Make a backup of the production database. Restore it as a new database. For example, if upgrading from OpenACS 5.1.1, and the site name/database name is translate-511, create translate-512b1.

  3. Check out the latest code on the release branch (e.g., oacs-5-1) as a new site, using the new site name (e.g., /var/lib/aolserver/translate-512b1). Copy over any local settings (usually /etc/config.tcl and /etc/daemontools/run) and modify them appropriately. Also, copy over several translation-server-only files:

    ...TBD
              

  4. Shut down the production site and put up a notice (there is no documented procedure for this yet).

  5. Start the new site, and upgrade it.

  6. Go to the ACS Lang admin page and click "Import All Messages".

  7. Resolve conflicts, if any, on the provided page.

  8. Back on the admin page, click the export link. If there are conflicts, the messages will be exported anyway and any errors will be shown in the web interface.

  9. Commit the message catalogs to CVS.

  10. From the packages directory, run the acs-lang/bin/check-catalog.sh script. (This checks for keys that are no longer in use, among other things. Until it is rolled into the UI, run it manually, check the results, and take whatever corrective steps are needed.)

  11. Commit the catalog files to CVS. Done.

  12. If everything went well, reconfigure the new site to take over the role of the old site (/etc/config.tcl and /etc/daemontools/run). Otherwise, bring the old site back up while investigating problems, and then repeat.

Write the Requirements and Design Specs

Before you get started you should make yourself familiar with the tags that are used to write your documentation. For tips on editing SGML files in emacs, see the section called “OpenACS Documentation Guide”.

It's time to document. For the tutorial we'll use pre-written documentation. When creating a package from scratch, start by copying the documentation template from /var/lib/aolserver/openacs-dev/packages/acs-core-docs/xml/docs/xml/package-documentation-template.xml to myfirstpackage/www/doc/xml/index.xml.

You then edit that file with emacs to write the requirements and design sections, generate the html, and start coding. Store any supporting files, like page maps or schema diagrams, in the www/doc/xml directory, and store png or jpg versions of supporting files in the www/doc directory.

For this tutorial, you should instead install the pre-written documentation files for the tutorial app. Log in as $OPENACS_SERVICE_NAME, create the standard directories, and copy the prepared documentation:

[$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cd /var/lib/aolserver/$OPENACS_SERVICE_NAME/packages/myfirstpackage/
[$OPENACS_SERVICE_NAME myfirstpackage]$ mkdir -p www/doc/xml
[$OPENACS_SERVICE_NAME myfirstpackage]$ cd www/doc/xml
[$OPENACS_SERVICE_NAME xml]$ cp /var/lib/aolserver/$OPENACS_SERVICE_NAME/packages/acs-core-docs/www/files/myfirstpackage/* .
[$OPENACS_SERVICE_NAME xml]$

OpenACS uses DocBook for documentation. DocBook is an XML standard for semantic markup of documentation. That means that the tags you use indicate meaning, not intended appearance. The style sheet will determine appearance. You will edit the text in an xml file, and then process the file into html for reading.

Open the file index.xml in emacs. Examine the file. Find the version history (look for the tag <revhistory>). Add a new record to the document version history. Look for the <authorgroup> tag and add yourself as a second author. Save and exit.

Process the xml file to create html documentation. The html documentation, including supporting files such as pictures, is stored in the www/doc/ directory. A Makefile is provided to generate html from the xml and copy all of the supporting files. If DocBook is set up correctly, all you need to do is:

[$OPENACS_SERVICE_NAME xml]$ make
cd .. ; /usr/bin/xsltproc ../../../acs-core-docs/www/xml/openacs.xsl xml/index.xml
Writing requirements-introduction.html for chapter(requirements-introduction)
Writing requirements-overview.html for chapter(requirements-overview)
Writing requirements-cases.html for chapter(requirements-cases)
Writing sample-data.html for chapter(sample-data)
Writing requirements.html for chapter(requirements)
Writing design-data-model.html for chapter(design-data-model)
Writing design-ui.html for chapter(design-ui)
Writing design-config.html for chapter(design-config)
Writing design-future.html for chapter(design-future)
Writing filename.html for chapter(filename)
Writing user-guide.html for chapter(user-guide)
Writing admin-guide.html for chapter(admin-guide)
Writing bi01.html for bibliography
Writing index.html for book
[$OPENACS_SERVICE_NAME xml]$

Verify that the documentation was generated and reflects your changes by browsing to http://yoursite:8000/myfirstpackage/doc

Install LDAP for use as external authentication

By Malte Sussdorff

This is a step-by-step guide to using LDAP for external authentication via the LDAP bind command, which differs from the approach usually taken by auth-ldap. Both approaches are covered in this section.

  1. Install OpenLDAP. Download and install OpenLDAP:

    [root aolserver]# cd /usr/local/src/
    [root src]# wget ftp://ftp.openldap.org/pub/OpenLDAP/openldap-release/openldap-2.2.17.tgz
    [root src]# tar xvfz openldap-2.2.17.tgz
    [root src]# cd openldap-2.2.17
    [root openldap-2.2.17]# ./configure --prefix=/usr/local/openldap --disable-slapd
    [root openldap-2.2.17]# make install
    [root openldap-2.2.17]#

    cd /usr/local/src/
    wget ftp://ftp.openldap.org/pub/OpenLDAP/openldap-release/openldap-2.2.17.tgz
    tar xvfz openldap-2.2.17.tgz
    cd openldap-2.2.17
    ./configure --prefix=/usr/local/openldap --disable-slapd
    make install
    
  2. Install ns_ldap. Download and install ns_ldap:

    [root aolserver]# cd /usr/local/src/aolserver/
    [root aolserver]# wget http://www.sussdorff.de/ressources/nsldap.tgz
    [root aolserver]# tar xfz nsldap.tgz
    [root aolserver]# cd nsldap
    [root nsldap]# make install LDAP=/usr/local/openldap INST=/usr/local/aolserver
    [root nsldap]#

    cd /usr/local/src/aolserver/
    wget http://www.sussdorff.de/ressources/nsldap.tgz
    tar xfz nsldap.tgz
    cd nsldap
    make install LDAP=/usr/local/openldap INST=/usr/local/aolserver
    
  3. Configure ns_ldap for traditional use. Traditionally, OpenACS has supported ns_ldap for authentication by storing the OpenACS password in an encrypted field within the LDAP server called "userPassword". Furthermore, a CN field was used for searching for the username, usually userID or something similar. This field is identical to the username stored in OpenACS, so login will only work if you change the login method to make use of the username instead.

    • Change config.tcl. Remove the # in front of ns_param nsldap ${bindir}/nsldap.so to enable the loading of the ns_ldap module.

  4. Configure ns_ldap for use with LDAP bind. LDAP authentication is usually done by trying to bind (i.e., log in) as the user against the LDAP server. The user's password is not stored in any field of the LDAP server, but kept internally. The latest version of ns_ldap supports this method with the ns_ldap bind command. All you have to do to enable this is configure auth_ldap to make use of BIND authentication instead. Alternatively, you can write a small script that calculates the LDAP distinguished name from the given username; e.g., if the OpenACS username is malte.fb03.tu, the LDAP request can be translated into "ou=malte,ou=fb03,o=tu" (this example is encoded in auth_ldap; you just have to uncomment it to make use of it). A sketch of such a mapping follows.
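
    A minimal sketch of such a mapping in Tcl, assuming the dotted-username scheme above (the proc name is hypothetical and not part of auth-ldap):

    proc username_to_dn { username } {
        # Hypothetical helper: "malte.fb03.tu" -> "ou=malte,ou=fb03,o=tu"
        set parts [split $username "."]
        set rdns [list]
        # Every component but the last becomes an "ou=" entry ...
        foreach part [lrange $parts 0 end-1] {
            lappend rdns "ou=$part"
        }
        # ... and the last component becomes the organization.
        lappend rdns "o=[lindex $parts end]"
        return [join $rdns ","]
    }

    # username_to_dn "malte.fb03.tu" returns "ou=malte,ou=fb03,o=tu"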

Documenting Tcl Files: Page Contracts and Libraries

By Jon Salz on 3 July 2000

  • Tcl procedures: /packages/acs-kernel/tcl-documentation-procs.tcl

In versions of OpenACS prior to 3.4, the standard place to document Tcl files (both Tcl pages and Tcl library files) was in a comment at the top of the file:

#
# path from server home/filename
#
# Brief description of the file's purpose
#
# author's email address, file creation date
#
# $Id: tcl-doc.html,v 1.43 2006/07/17 05:38:32 torbenb Exp $
#

In addition, the inputs expected by a Tcl page (i.e., form variables) would be enumerated in a call to ad_page_variables, in effect, documenting the page's argument list.

The problem with these practices is that the documentation is only accessible by reading the source file itself. For this reason, ACS 3.4 introduces a new API for documenting Tcl files and, on top of that, a web-based user interface for browsing the documentation:

  • ad_page_contract: Every Tcl page has a contract that explicitly defines what inputs the page expects (with more precision than ad_page_variables) and incorporates metadata about the page (what used to live in the top-of-page comment). Like ad_page_variables, ad_page_contract also sets the specified variables in the context of the Tcl page.

  • ad_library: To be called at the top of every library file (i.e., all files in the /tcl/ directory under the server root and *-procs.tcl files under /packages/).

This has the following benefits:

  • Facilitates automatic generation of human-readable documentation.

  • Promotes security, by introducing a standard and automated way to check inputs to scripts for correctness.

  • Allows graphical designers to determine easily how to customize sites' UIs, e.g., what properties are available in templates.

  • Allows the request processor to be intelligent: a script can specify in its contract which type of abstract document it returns, and the request processor can transform it automatically into something useful to a particular user agent. (Don't worry about this for now - it's not complete for ACS 3.4.)

Currently ad_page_contract serves mostly as a replacement for ad_page_variables. Eventually, it will be integrated closely with the documents API so that each script's contract will document precisely the set of properties available to graphical designers in templates. (Document API integration is subject to change, so we don't describe it here yet; for now, you can just consider ad_page_contract a newer, better-documented ad_page_variables.)

Let's look at an example usage of ad_page_contract:

# /packages/acs-kernel/api-doc/www/package-view.tcl
ad_page_contract {
    version_id:integer
    public_p:optional
    kind
    { format "html" }
} {
    Shows APIs for a particular package.

    @param version_id the ID of the version whose API to view.
    @param public_p view only public APIs?
    @param kind view the type of API to view. One of <code>procs_files</code>,
        <code>procs</code>, <code>content</code>, <code>types</code>, or
        <code>gd</code>.
    @param format the format for the documentation. One of <code>html</code> or <code>xml</code>.

    @author Jon Salz (jsalz@mit.edu)
    @creation-date 3 Jul 2000
    @cvs-id $Id: tcl-doc.html,v 1.43 2006/07/17 05:38:32 torbenb Exp $
}

Note that:

  • By convention, ad_page_contract should be preceded by a comment line containing the file's path. The comment is on line 1, and the contract starts on line 2.

  • ad_page_contract's first argument is the list of expected arguments from the HTTP query (version_id, public_p, kind, and format). Like ad_page_variables, ad_page_contract sets the corresponding Tcl variables when the page is executed.

  • Arguments can have defaults, specified using the same syntax as in a Tcl proc (a two-element list where the first element is the parameter name and the second element is the default value).

  • Arguments can have flags, specified by following the name of the query argument with a colon and one or more of the following strings (separated by commas):

    • optional: the query argument doesn't need to be provided; if it's not, the variable for that argument simply won't be set. For instance, if I call the script above without a public_p in the query, then in the page body [info exists public_p] will return 0.

    • integer: the argument must be an integer (ad_page_contract will fail and display an error if not). This flag, like the next, is intended to prevent clients from fudging query arguments to trick scripts into executing arbitrary SQL.

    • sql_identifier: the argument must be a SQL identifier (i.e., [string is wordchar $the_query_var] must return true).

    • trim: the argument will be [string trim]'ed.

    • multiple: the argument may be specified arbitrarily many times in the query string, and the variable will be set to a list of all those values (or an empty list if it's unspecified). This is analogous to the -multiple-list flag to ad_page_variables, and is useful for handling form input generated by <SELECT MULTIPLE> tags and checkboxes.

      For instance, if dest_user_id:multiple is specified in the contract, and the query string is

      ?dest_user_id=913&dest_user_id=891&dest_user_id=9
      
      

      then $dest_user_id is set to [list 913 891 9].

    • array: the argument may be specified arbitrarily many times in the query string, using parameter names with suffixes _0, _1, _2, etc. The variable is set to a list of all those values (or an empty list if none are specified).

      For instance, if dest_user_id:array is specified in the contract, and the query string is

      ?dest_user_id_0=913&dest_user_id_1=891&dest_user_id_2=9
      
      

      then $dest_user_id is set to [list 913 891 9].

  • You can provide structured, HTML-formatted documentation for your contract. Note that format is derived heavily from Javadoc: a general description of the script's functionality, followed optionally by a series of named attributes tagged by at symbols (@). You are encouraged to provide:

    • A description of the functionality of the page. If the description contains more than one sentence, the first sentence should be a brief summary.

    • A @param tag for each allowable query argument. The format is

      @param parameter-name description...
      
    • An @author tag for each author. Specify the author's name, followed by his or her email address in parentheses.

    • A @creation-date tag indicating when the script was first created.

    • A @cvs-id tag containing the page's CVS identification string. Just use $Id: tcl-documentation.html,v 1.2 2000/09/19 07:22:35 ron Exp $ when creating the file, and CVS will substitute an appropriate string when you check the file in.

    These @ tags are optional, but highly recommended!

ad_library provides a replacement for the informal documentation (described above) found at the beginning of every Tcl library file. Instead of:

# /packages/acs-kernel/00-proc-procs.tcl
#
# Routines for defining procedures and libraries of procedures (-procs.tcl files).
#
# jsalz@mit.edu, 7 Jun 2000
#
# $Id: tcl-doc.html,v 1.43 2006/07/17 05:38:32 torbenb Exp $

you'll now write:

# /packages/acs-kernel/00-proc-procs.tcl
ad_library {

    Routines for defining procedures and libraries of procedures (<code>-procs.tcl</code>
    files).

    @creation-date 7 Jun 2000
    @author Jon Salz (jsalz@mit.edu)
    @cvs-id $Id: tcl-doc.html,v 1.43 2006/07/17 05:38:32 torbenb Exp $

}

Note that format is derived heavily from Javadoc: a general description of the script's functionality, followed optionally by a series of named attributes tagged by at symbols (@). HTML formatting is allowed. You are encouraged to provide:

  • An @author tag for each author. Specify the author's name, followed by his or her email address in parentheses.

  • A @creation-date tag indicating when the script was first created.

  • A @cvs-id tag containing the page's CVS identification string. Just use $Id: tcl-documentation.html,v 1.2 2000/09/19 07:22:35 ron Exp $ when creating the file, and CVS will substitute an appropriate string when you check the file in.

Database Access API

By Jon Salz. Revised and expanded by Roberto Mello (rmello at fslc dot usu dot edu), July 2002.

  • Tcl procedures: /packages/acs-kernel/10-database-procs.tcl

  • Tcl initialization: /packages/acs-kernel/database-init.tcl

One of OpenACS's great strengths is that code written for it is very close to the database. It is very easy to interact with the database from anywhere within OpenACS. Our goal is to develop a coherent API for database access which makes this even easier.

There were four significant problems with the way OpenACS previously used the database (i.e., directly through the ns_db interface):

  1. Handle management. We required code to pass database handles around, and for routines which needed to perform database access but didn't receive a database handle as input, it was difficult to know from which of the three "magic pools" (main, subquery, and log) to allocate a new handle.

  2. Nested transactions. In our Oracle driver, begin transaction really means "turn auto-commit mode off" and end transaction means "commit the current transaction and turn auto-commit mode on." Thus if transactional code needed to call a routine which needed to operate transactionally, the semantics were non-obvious. Consider:

    proc foo { db args } {
        db_transaction {
            ...
        }
    }

    db_transaction {
        db_dml unused "insert into greeble(bork) values(33)"
        foo $db
        db_dml unused "insert into greeble(bork) values(50)"
    }
    
    

    This would insert greeble #33 and do all the stuff in foo transactionally, but the end transaction in foo would actually cause a commit, and greeble #50 would later be inserted in auto-commit mode. This could cause subtle bugs: e.g., if the insert for greeble #50 failed, part of the "transaction" would already have been committed! This is not a good thing.

  3. Unorthodox use of variables. The standard mechanism for mapping column values into variables involved the use of the set_variables_after_query routine, which relies on an uplevel variable named selection (likewise for set_variables_after_subquery and subselection).

  4. Hard-coded reliance on Oracle. It's difficult to write code supporting various different databases (dynamically using the appropriate dialect based on the type of database being used, e.g., using DECODE on Oracle and CASE ... WHEN on Postgres).

The Database Access API addresses the first three problems by:

  1. making use of database handles transparent

  2. wrapping common database operations (including transaction management) in Tcl control structures (this is, after all, what Tcl is good at!)

It lays the groundwork for addressing the fourth problem by assigning each SQL statement a logical name. In a future version of the OpenACS Core, this API will translate logical statement names into actual SQL, based on the type of database in use. (To smooth the learning curve, we provide a facility for writing SQL inline for a "default SQL dialect", which we assume to be Oracle for now.)

To be clear, SQL abstraction is not fully implemented in OpenACS 3.3.1. The statement names supplied to each call are not used by the API at all. The API's design for SQL abstraction is in fact incomplete; unresolved issues include:

  • how to add WHERE clause criteria dynamically

  • how to build a dynamic ORDER BY clause (Ben Adida has a proposed solution for this)

  • how to define a statement's formal interface (i.e., what bind variables it expects, what columns its SELECT clause must contain if it's a query) without actually implementing the statement in a specific SQL dialect

So why is the incremental change of adding statement naming to the API worth the effort? Because we know that giving each SQL statement a logical name will be required by the complete SQL abstraction design, the effort will not be wasted. Moreover, taking advantage of the new support for bind variables will already require code that uses the 3.3.0 version of the API to be updated.

set_variables_after_query is gone! (Well, it's still there, but you'll never need to use it.) The new API routines set local variables automatically. For instance:

db_1row select_names "select first_names, last_name from users where user_id = [ad_get_user_id]"
doc_body_append "Hello, $first_names $last_name!"

Like ns_db 1row, this will bomb if the query doesn't return any rows (no such user exists). If this isn't what you want, you can write:

if { [db_0or1row select_names "select first_names, last_name from users where user_id = [ad_get_user_id]"] } {
    doc_body_append "Hello, $first_names $last_name!"
} else {
    # Executed if the query returns no rows.
    doc_body_append "There's no such user!"
}

Selecting a bunch of rows is a lot prettier now:

db_foreach select_names "select first_names, last_name from users" {
     doc_body_append "Say hi to $first_names $last_name for me!<br>"
}

That's right, db_foreach is now like ns_db select plus a while loop plus set_variables_after_query plus an if statement (containing code to be executed if no rows are returned).

db_foreach select_names "select first_names, last_name from users where last_name like 'S%'" {
     doc_body_append "Say hi to $first_names $last_name for me!<br>"
} if_no_rows {
     doc_body_append "There aren't any users with last names beginnings with S!"
}

The new API keeps track of which handles are in use, and automatically allocates new handles when they are necessary (e.g., to perform subqueries while a select is active). For example:

doc_body_append "<ul>"
db_foreach select_names "select first_names, last_name, user_id from users" {
    # Automatically allocated a database handle from the main pool.
    doc_body_append "<li>User $first_names $last_name\n<ul>"

    db_foreach select_groups "select group_id from user_group_map where user_id = $user_id" {
        # There's a selection in progress, so we allocated a database handle
        # from the subquery pool for this selection.
        doc_body_append "<li>Member of group #$group_id.\n"
    } if_no_rows {
        # Not a member of any groups.
        doc_body_append "<li>Not a member of any group.\n"
    }
}
doc_body_append "</ul>"
db_release_unused_handles

A new handle isn't actually allocated and released for every selection, of course - as a performance optimization, the API keeps old handles around until db_release_unused_handles is invoked (or the script terminates).

Note that there is no analogue to ns_db gethandle - the handle is always automatically allocated the first time it's needed.

Introduction

Most SQL statements require that the code invoking the statement pass along data associated with that statement, usually obtained from the user. For instance, in order to delete a WimpyPoint presentation, a Tcl script might use the SQL statement

delete from wp_presentations where presentation_id = some_presentation_id

where some_presentation_id is a number which is a valid presentation ID of the presentation I want to delete. It's easy to write code handling situations like this since SQL statements can include bind variables, which represent placeholders for actual data. A bind variable is specified as a colon followed by an identifier, so the statement above can be coded as:

db_dml presentation_delete {
    delete from wp_presentations where presentation_id = :some_presentation_id
}

When this SQL statement is invoked, the value for the bind variable :some_presentation_id is pulled from the Tcl variable $some_presentation_id (in the caller's environment). Note that bind variables are not limited to one per statement; you can use an arbitrary number, and each will pull from the correspondingly named Tcl variable. (Alternatively, you can also specify a list or ns_set providing bind variables' values; see Usage.)

The value of a bind variable is taken literally by the database driver, so there is never any need to put single-quotes around the value for a bind variable, or to use db_quote to escape single-quotes contained in the value. The following works fine, despite the apostrophe:

set exclamation "That's all, folks!"
db_dml exclamation_insert { insert into exclamations(exclamation) values(:exclamation) }

Note that you can use a bind variable in a SQL statement only where you could use a literal (a number or single-quoted string). Bind variables cannot be placeholders for things like SQL keywords, table names, or column names, so the following will not work, even if $table_name is set properly:

select * from :table_name

Why Bind Variables Are Useful

Why bother with bind variables at all - why not just write the Tcl statement above like this:

db_dml presentation_delete "
    delete from wp_presentations where presentation_id = $some_presentation_id
"

(Note the use of double-quotes to allow the variable reference to $some_presentation_id to be interpolated in.) This will work, but consider the case where some devious user causes some_presentation_id to be set to something like '3 or 1 = 1', which would result in the following statement being executed:

delete from wp_presentations where presentation_id = 3 or 1 = 1

This deletes every presentation in the database! Using bind variables eliminates this gaping security hole: since bind variable values are taken literally, Oracle will attempt to delete presentations whose presentation ID is literally '3 or 1 = 1' (i.e., no presentations, since '3 or 1 = 1' can't possibly be a valid integer primary key for wp_presentations). In general, since Oracle always considers the values of bind variables to be literals, it becomes more difficult for users to perform URL surgery to trick scripts into running dangerous queries and DML.

Usage

Every db_* command accepting a SQL command as an argument supports bind variables. You can either

  • specify the -bind switch to provide an ns_set with bind variable values, or

  • specify the -bind switch to explicitly provide a list of bind variable names and values, or

  • not specify a bind variable list at all, in which case Tcl variables are used as bind variables.

The default behavior (i.e., if the -bind switch is omitted) is that these procedures expect to find local variables that correspond in name to the referenced bind variables, e.g.:

set user_id 123456
set role "administrator"

db_foreach user_group_memberships_by_role {
    select g.group_id, g.group_name
    from user_groups g, user_group_map map
    where g.group_id = map.group_id
    and map.user_id = :user_id
    and map.role = :role
} {
    # do something for each group of which user 123456 is in the role
    # of "administrator"
}

The value of the local Tcl variable user_id (123456) is bound to the user_id bind variable.

The -bind switch can take the name of an ns_set containing keys for each bind variable named in the query, e.g.:

set bind_vars [ns_set create]
ns_set put $bind_vars user_id 123456
ns_set put $bind_vars role "administrator"

db_foreach user_group_memberships_by_role {
    select g.group_id, g.group_name
    from user_groups g, user_group_map map
    where g.group_id = map.group_id
    and map.user_id = :user_id
    and map.role = :role
} -bind $bind_vars {
    # do something for each group in which user 123456 has the role
    # of "administrator"
}

Alternatively, as an argument to -bind you can specify a list of alternating name/value pairs for bind variables:

db_foreach user_group_memberships_by_role {
    select g.group_id, g.group_name
    from user_groups g, user_group_map map
    where g.group_id = map.group_id
    and map.user_id = :user_id
    and map.role = :role
} -bind [list user_id 123456 role "administrator"] {
    # do something for each group in which user 123456 has the role
    # of "administrator"
}

Nulls and Bind Variables

When processing a DML statement, Oracle coerces empty strings into null. (This coercion does not occur in the WHERE clause of a query, i.e. col = '' and col is null are not equivalent.)

As a result, when using bind variables, the only way to make Oracle set a column value to null is to set the corresponding bind variable to the empty string, since a bind variable whose value is the string "null" will be interpreted as the literal string "null".

These Oracle quirks make the process of writing clear and abstract DML more difficult. Here is an example that illustrates why:

#
# Given the table:
#
#   create table foo (
#           bar        integer,
#           baz        varchar(10)
#   );
#

set bar ""
set baz ""

db_dml foo_create "insert into foo(bar, baz) values(:bar, :baz)"
#
# the values of the "bar" and "baz" columns in the new row are both
# null, because Oracle has coerced the empty string (even for the
# numeric column "bar") into null in both cases

Since databases other than Oracle do not coerce empty strings into null, this code has different semantics depending on the underlying database (i.e., the row that gets inserted may not have null as its column values), which defeats the purpose of SQL abstraction.

Therefore, the Database Access API provides a database-independent way to represent null (instead of the Oracle-specific idiom of the empty string): db_null.

Use it instead of the empty string whenever you want to set a column value explicitly to null, e.g.:

set bar [db_null]
set baz [db_null]

db_dml foo_create "insert into foo(bar, baz) values(:bar, :baz)"
#
# sets the values for both the "bar" and "baz" columns to null

We now require that each SQL statement be assigned a logical name for the statement that is unique to the procedure or page in which it is defined. This is so that (eventually) we can implement logically named statements with alternative SQL for non-Oracle databases (e.g., Postgres). More on this later.

Normally, db_foreach, db_0or1row, and db_1row place the results of queries in Tcl variables, so you can say:

db_foreach users_select "select first_names, last_name from users" {
    doc_body_append "<li>$first_names $last_name\n"
}

However, sometimes this is not sufficient: you may need to examine the rows returned, to dynamically determine the set of columns returned by the query, or to avoid collisions with existing variables. You can use the -column_array and -column_set switches to db_foreach, db_0or1row, and db_1row to instruct the database routines to place the results in a Tcl array or ns_set, respectively, where the keys are the column names and the values are the column values. For example:

db_foreach users_select "select first_names, last_name from users" -column_set columns {
    # Now $columns is an ns_set.
    doc_body_append "<li>"
    for { set i 0 } { $i < [ns_set size $columns] } { incr i } {
        doc_body_append "[ns_set key $columns $i] is [ns_set value $columns $i]. \n"
    }
}

will write something like:

  • first_names is Jon. last_name is Salz.

  • first_names is Lars. last_name is Pind.

  • first_names is Michael. last_name is Yoon.
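
The -column_array switch works analogously, placing each row in a Tcl array instead; a minimal sketch:

db_foreach users_select "select first_names, last_name from users" -column_array row {
    # Now row is a Tcl array keyed by column name.
    doc_body_append "<li>$row(first_names) $row(last_name)\n"
}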

Note that you never have to use ns_db anymore (including ns_db gethandle)! Just start doing stuff, and (if you want) call db_release_unused_handles when you're done as a hint to release the database handle.

db_null
db_null

Returns a value which can be used in a bind variable to represent the SQL value null. See Nulls and Bind Variables above.

db_foreach
db_foreach statement-name sql [ -bind bind_set_id | -bind bind_value_list ]  [ -column_array array_name | -column_set set_name ]  code_block [ if_no_rows if_no_rows_block ]

Performs the SQL query sql, executing code_block once for each row with variables set to column values (or a set or array populated if -column_array or -column_set is specified). If the query returns no rows, executes if_no_rows_block (if provided).

Example:

db_foreach select_foo "select foo, bar from greeble" {
    doc_body_append "<li>foo=$foo; bar=$bar\n"
} if_no_rows {
    doc_body_append "<li>There are no greebles in the database.\n"
}

The code block may contain break statements (which terminate the loop and flush the database handle) and continue statements (which continue to the next row of the loop).
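
For instance (the empty-string test is just an illustrative condition):

db_foreach select_names "select first_names, last_name from users" {
    if { $last_name == "" } {
        # Skip rows without a last name.
        continue
    }
    doc_body_append "<li>$first_names $last_name\n"
}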

db_1row
db_1row statement-name sql [ -bind bind_set_id | -bind bind_value_list ]  [ -column_array array_name | -column_set set_name ]

Performs the SQL query sql, setting variables to column values. Raises an error if the query does not return exactly 1 row.

Example:

db_1row select_foo "select foo, bar from greeble where greeble_id = $greeble_id"
# Bombs if there's no such greeble!
# Now $foo and $bar are set.

db_0or1row
db_0or1row statement-name sql [ -bind bind_set_id | -bind bind_value_list ]  [ -column_array array_name | -column_set set_name ]

Performs the SQL query sql. If a row is returned, sets variables to column values and returns 1. If no rows are returned, returns 0. If more than one row is returned, throws an error.

db_string
db_string statement-name sql [ -default default ] [ -bind bind_set_id | -bind bind_value_list ]

Returns the first column of the result of SQL query sql. If sql doesn't return a row, returns default (or throws an error if default is unspecified). Analogous to database_to_tcl_string and database_to_tcl_string_or_null.
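
For example (the statement names and queries are illustrative):

set n_users [db_string users_count "select count(*) from users"]

set user_id 123456
set email [db_string email_select "select email from users where user_id = :user_id" -default ""]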

db_nextval
db_nextval sequence-name

Returns the next value for the sequence sequence-name (using a SQL statement like SELECT sequence-name.nextval FROM DUAL). If sequence pooling is enabled for the sequence, transparently uses a value from the pool if available to save a round-trip to the database.
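
For example (the sequence name is illustrative):

set user_id [db_nextval user_id_sequence]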

db_list
db_list statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

Returns a Tcl list of the values in the first column of the result of SQL query sql. If sql doesn't return any rows, returns an empty list. Analogous to database_to_tcl_list.
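
For example (illustrative):

set emails [db_list users_emails "select email from users"]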

db_list_of_lists
db_list_of_lists statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

Returns a Tcl list, each element of which is a list of all column values in a row of the result of SQL query sql. If sql doesn't return any rows, returns an empty list. (Analogous to database_to_tcl_list_list.)
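
For example (illustrative):

set names [db_list_of_lists users_names "select first_names, last_name from users"]
# $names is now something like {{Jon Salz} {Lars Pind}}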

db_list_of_ns_sets
db_list_of_ns_sets statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

Returns a list of ns_sets with the values of each column of each row returned by the sql query specified.

db_dml
db_dml statement-name sql  [ -bind bind_set_id | -bind bind_value_list ]  [ -blobs blob_list | -clobs clob_list | -blob_files blob_file_list | -clob_files clob_file_list ]

Performs the DML or DDL statement sql.

If a length-n list of blobs or clobs is provided, then the SQL should return n blobs or clobs into the bind variables :1, :2, ... :n. blobs or clobs, if specified, should be a list of individual BLOBs or CLOBs to insert; blob_files or clob_files, if specified, should be a list of paths to files containing the data to insert. Only one of -blobs, -clobs, -blob_files, and -clob_files may be provided.

Example:

db_dml insert_photos "
        insert into photos(photo_id, image, thumbnail_image)
        values(photo_id_seq.nextval, empty_blob(), empty_blob())
        returning image, thumbnail_image into :1, :2
    "  -blob_files [list "/var/tmp/the_photo" "/var/tmp/the_thumbnail"]

This inserts a new row into the photos table, with the contents of the files /var/tmp/the_photo and /var/tmp/the_thumbnail in the image and thumbnail_image columns, respectively.

db_write_clob, db_write_blob, db_blob_get_file
db_write_clob statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

db_write_blob statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

db_blob_get_file statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

Analogous to ns_ora write_clob/write_blob/blob_get_file.

db_release_unused_handles
db_release_unused_handles

Releases any allocated, unused database handles.

db_transaction
db_transaction code_block [ on_error { code_block } ]

Executes code_block transactionally. Nested transactions are supported (end transaction is transparently ns_db dml'ed when the outermost transaction completes). The db_abort_transaction command can be used to abort all levels of transactions. It is possible to specify an optional on_error code block that will be executed if some code in code_block throws an exception. The variable errmsg will be bound in that scope. If there is no on_error code, any errors will be propagated.

Example:

proc replace_the_foo { col } {
    db_transaction {
        db_dml "delete from foo"
        db_dml "insert into foo(col) values($col)"
    }
}

proc print_the_foo {} {
    doc_body_append "foo is [db_string "select col from foo"]<br>\n"
}

replace_the_foo 8
print_the_foo ; # Writes out "foo is 8"

db_transaction {
    replace_the_foo 14
    print_the_foo ; # Writes out "foo is 14"
    db_dml "insert into some_other_table(col) values(999)"
    ...
    db_abort_transaction
} on_error {
    doc_body_append "Error in transaction: $errmsg"
}


print_the_foo ; # Writes out "foo is 8"

db_abort_transaction
db_abort_transaction

Aborts all levels of a transaction. That is, if this is called within several nested transactions, all of them are terminated. Use this instead of db_dml "abort" "abort transaction".

db_multirow
db_multirow [ -local ] [ -append ] [ -extend column_list ]  var-name statement-name sql  [ -bind bind_set_id | -bind bind_value_list ]  code_block [ if_no_rows if_no_rows_block ]

Performs the SQL query sql, saving results in variables of the form var_name:1, var_name:2, etc, setting var_name:rowcount to the total number of rows, and setting var_name:columns to a list of column names.

Each row also has a column, rownum, automatically added and set to the row number, starting with 1. Note that this overrides any column named 'rownum' in the SQL statement, including the Oracle rownum pseudo-column.

If the -local switch is passed, the variables defined by db_multirow will be set locally (useful if you're compiling dynamic templates in a function or similar situations).

You may supply a code block, which will be executed for each row in the loop. This is very useful if you need to make computations that are better done in Tcl than in SQL, for example using ns_urlencode or ad_quotehtml, etc. When the Tcl code is executed, all the columns from the SQL query will be set as local variables in that code. Any changes made to these local variables will be copied back into the multirow.

You may also add additional, computed columns to the multirow, using the -extend { col_1 col_2 ... } switch. This is useful for things like constructing a URL for the object retrieved by the query.

If you're constructing your multirow through multiple queries with the same set of columns, but with different rows, you can use the -append switch. This causes the rows returned by this query to be appended to the rows already in the multirow, instead of starting a clean multirow, as is the normal behavior. The columns must match the columns in the original multirow, or an error will be thrown.

Your code block may call continue in order to skip a row and not include it in the multirow. Or you can call break to skip this row and quit looping.

Notice the nonstandard numbering (everything else in Tcl starts at 0); the reason is that the graphics designer, a non-programmer, may wish to work with row numbers.

Example:

db_multirow -extend { user_url } users users_query {
    select user_id, first_names, last_name, email from cc_users
} {
    set user_url [acs_community_member_url -user_id $user_id]
}
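
And a sketch of the -append switch described above (the country_code column on cc_users is hypothetical):

db_multirow users us_users_query {
    select first_names, last_name, email from cc_users where country_code = 'us'
}
db_multirow -append users de_users_query {
    select first_names, last_name, email from cc_users where country_code = 'de'
}
# The multirow "users" now holds the rows from both queries.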
    
db_resultrows
db_resultrows

Returns the number of rows affected or returned by the previous statement.
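
For example (the table, column, and statement names are illustrative):

db_dml prices_update "update items set price = price * 1.1 where category_id = :category_id"
doc_body_append "Updated [db_resultrows] rows."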

db_with_handle
db_with_handle var code_block

Places a database handle into the variable var and executes code_block. This is useful when you don't want to have to use the new API (db_foreach, db_1row, etc.), but need to use database handles explicitly.

Example:

proc lookup_the_foo { foo } {
    db_with_handle db {
        return [db_string unused "select ..."]
    }
}

db_with_handle db {
    # Now there's a database handle in $db.
    set selection [ns_db select $db "select foo from bar"]
    while { [ns_db getrow $db $selection] } {
        set_variables_after_query

        lookup_the_foo $foo
    }
}

db_name
db_name

Returns the name of the database, as returned by the driver.

db_type
db_type

Returns the RDBMS type (e.g., oracle or postgresql) this OpenACS installation is using. The nsv ad_database_type is set up during the bootstrap process.

db_compatible_rdbms_p
db_compatible_rdbms_p db_type

Returns 1 if the given db_type is compatible with the current RDBMS.

db_package_supports_rdbms_p
db_package_supports_rdbms_p db_type_list

Returns 1 if db_type_list contains the current RDBMS type. A package intended to run with a given RDBMS must note this in its package info file, regardless of whether or not it actually uses the database.

db_legacy_package_p
db_legacy_package_p db_type_list

Returns 1 if the package is a legacy package. We can only tell for certain if it explicitly supports Oracle 8.1.6 rather than the more general OpenACS "oracle" type.

db_version
db_version

Returns the RDBMS version (e.g., 8.1.6 for Oracle or 7.1 for PostgreSQL).

db_current_rdbms
db_current_rdbms

Returns the current rdbms type and version.

db_known_database_types
db_known_database_types

Returns a list of three-element lists describing the database engines known to OpenACS. Each sublist contains the internal database name (used in file paths, etc), the driver name, and a "pretty name" to be used in selection forms displayed to the user.

The nsv containing the list is initialized by the bootstrap script and should never be referenced directly by user code.

Install Daemontools (OPTIONAL)

Daemontools is a collection of programs for controlling other processes. We use daemontools to run and monitor AOLserver. It is installed in /package. These commands install daemontools and svgroup. svgroup is a script for granting permissions, to allow users other than root to use daemontools for specific services.

  1. Install Daemontools

    Download daemontools and install it.

    • Red Hat 8

      [root root]# mkdir -p /package
      [root root]# chmod 1755 /package/
      [root root]# cd /package/
      [root package]# tar xzf /tmp/daemontools-0.76.tar.gz
      [root package]# cd admin/daemontools-0.76/
      [root daemontools-0.76]# package/install
      Linking ./src/* into ./compile...
      
      Creating /service...
      Adding svscanboot to inittab...
      init should start svscan now.
      [root root]#
      mkdir -p /package
      chmod 1755 /package
      cd /package
      tar xzf /tmp/daemontools-0.76.tar.gz
      cd admin/daemontools-0.76
      package/install
      
    • Red Hat 9, Fedora Core 1-4

      Make sure you have the source tarball in /tmp, or download it.

      [root root]# mkdir -p /package
      [root root]# chmod 1755 /package/
      [root root]# cd /package/
      [root package]# tar xzf /tmp/daemontools-0.76.tar.gz
      [root package]# cd admin
      [root admin]# wget http://moni.csi.hu/pub/glibc-2.3.1/daemontools-0.76.errno.patch
      --14:19:24--  http://moni.csi.hu/pub/glibc-2.3.1/daemontools-0.76.errno.patch
                 => `daemontools-0.76.errno.patch'
      Resolving moni.csi.hu... done.
      Connecting to moni.csi.hu[141.225.11.87]:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 355 [text/plain]
      
      100%[====================================>] 355          346.68K/s    ETA 00:00
      
      14:19:24 (346.68 KB/s) - `daemontools-0.76.errno.patch' saved [355/355]
      
      [root admin]# cd daemontools-0.76
      [root daemontools-0.76]# patch -p1 < ../daemontools-0.76.errno.patch
      [root daemontools-0.76]# package/install
      Linking ./src/* into ./compile...(many lines omitted)
      Creating /service...
      Adding svscanboot to inittab...
      init should start svscan now.
      [root root]#
      mkdir -p /package
      chmod 1755 /package
      cd /package
      tar xzf /tmp/daemontools-0.76.tar.gz
      cd admin
      wget http://moni.csi.hu/pub/glibc-2.3.1/daemontools-0.76.errno.patch
      cd daemontools-0.76
      patch -p1 < ../daemontools-0.76.errno.patch
      package/install
      
    • FreeBSD (follow standard install)

      Make sure you have the source tarball in /tmp, or download it.

      [root root]# mkdir -p /package
      [root root]# chmod 1755 /package/
      [root root]# cd /package/
      [root package]# tar xzf /tmp/daemontools-0.76.tar.gz
      [root package]# cd admin/daemontools-0.76
      [root daemontools-0.76]# package/install
      Linking ./src/* into ./compile...(many lines omitted)
      Creating /service...
      Adding svscanboot to inittab...
      init should start svscan now.
      [root root]#
      mkdir -p /package
      chmod 1755 /package
      cd /package
      tar xzf /tmp/daemontools-0.76.tar.gz
      cd admin/daemontools-0.76
      package/install
      
    • Debian

      [root ~]# apt-get install daemontools-installer
      [root ~]# build-daemontools
      
  2. Verify that svscan is running. If it is, you should see these two processes running:

    [root root]# ps -auxw | grep service
    root     13294  0.0  0.1  1352  272 ?        S    09:51   0:00 svscan /service
    root     13295  0.0  0.0  1304  208 ?        S    09:51   0:00 readproctitle service errors: .......................................
    [root root]#
  3. Install a script to grant non-root users permission to control daemontools services.

    [root root]# cp /tmp/openacs-5.2.3rc1/packages/acs-core-docs/www/files/svgroup.txt /usr/local/bin/svgroup
    [root root]# chmod 755 /usr/local/bin/svgroup

    cp /tmp/openacs-5.2.3rc1/packages/acs-core-docs/www/files/svgroup.txt /usr/local/bin/svgroup
    chmod 755 /usr/local/bin/svgroup
    

Connect to a second database

It is possible to use the OpenACS Tcl database API with other databases. In this example, the OpenACS site uses a PostgreSQL database, and accesses another PostgreSQL database called legacy.

  1. Modify config.tcl to accommodate the legacy database, and to ensure that the legacy database is not used for standard OpenACS queries:

    ns_section ns/db/pools
    ns_param   pool1              "Pool 1"
    ns_param   pool2              "Pool 2"
    ns_param   pool3              "Pool 3"
    ns_param   legacy             "Legacy"
    
    ns_section ns/db/pool/pool1
    #Unchanged from default
    ns_param   maxidle            1000000000
    ns_param   maxopen            1000000000
    ns_param   connections        5
    ns_param   verbose            $debug
    ns_param   extendedtableinfo  true
    ns_param   logsqlerrors       $debug
    if { $database == "oracle" } {
        ns_param   driver             ora8
        ns_param   datasource         {}
        ns_param   user               $db_name
        ns_param   password           $db_password
    } else {
        ns_param   driver             postgres
        ns_param   datasource         ${db_host}:${db_port}:${db_name}
        ns_param   user               $db_user
        ns_param   password           ""
    }
    
    ns_section ns/db/pool/pool2
    #Unchanged from default, removed for clarity
    
    ns_section ns/db/pool/pool3
    #Unchanged from default, removed for clarity
    
    ns_section ns/db/pool/legacy
    ns_param   maxidle            1000000000
    ns_param   maxopen            1000000000
    ns_param   connections        5
    ns_param   verbose            $debug
    ns_param   extendedtableinfo  true
    ns_param   logsqlerrors       $debug
    ns_param   driver             postgres
    ns_param   datasource         ${db_host}:${db_port}:legacy_db
    ns_param   user               legacy_user
    ns_param   password           legacy_password
    
    
    ns_section ns/server/${server}/db
    ns_param   pools              *
    ns_param   defaultpool        pool1
    
    ns_section ns/server/${server}/acs/database
    ns_param database_names [list main legacy]
    ns_param pools_main [list pool1 pool2 pool3]
    ns_param pools_legacy [list legacy]
  2. To use the legacy database, use the -dbn flag with any of the db_* API calls. For example, suppose there is a table called "foo" in the legacy system, with a field "bar". List "bar" for all records with this Tcl file:

    db_foreach -dbn legacy get_bar_query {
      select bar from foo
      limit 10
    } {
      ns_write "<br/>$bar"
    }
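
    The -dbn flag works the same way with the other db_* calls; for example (names are illustrative):

    set first_bar [db_string -dbn legacy get_first_bar "select bar from foo limit 1"]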

Constraint naming standard

By Michael Bryzek

A constraint naming standard is important for one reason: the SYS_* name Oracle assigns to unnamed constraints is not very understandable. By correctly naming all constraints, we can quickly associate a particular constraint with our data model. This gives us two real advantages:

  • We can quickly identify and fix any errors.

  • We can reliably modify or drop constraints.

Why do we need a naming convention?

Oracle limits names, in general, to 30 characters, which is hardly enough for a human readable constraint name.

We propose the following naming convention for all constraints, with the abbreviations below taken from Oracle Docs at http://oradoc.photo.net/ora81/DOC/server.815/a67779/ch4e.htm#8953. Note that we shortened all of the constraint abbreviations to two characters to save room.

Constraint type             Abbreviation
references (foreign key)    fk
unique                      un
primary key                 pk
check                       ck
not null                    nn

<table name>_<column_name>_<constraint abbreviation>

In reality, this won't always be possible because of the character limitation on names inside Oracle. When the name is too long, we will follow these two steps, in order:

  1. Abbreviate the table name with the table's initials (e.g. users -> u and users_contact -> uc).

  2. Truncate the column name until it fits.

If the constraint name is still too long, you should consider rewriting your entire data model :)

Notes:

  • If you have to abbreviate the table name for one of the constraints, abbreviate it for all the constraints.

  • If you are defining a multi-column constraint, try to truncate the two column names evenly.

create table example_topics (
       topic_id    integer
		   constraint example_topics_topic_id_pk
		   primary key
);

create table constraint_naming_example (
       example_id		      integer
				      constraint cne_example_id_pk
				      primary key,
       one_line_description	      varchar(100)
				      constraint cne_one_line_desc_nn
				      not null,
       body			      clob,
       up_to_date_p		      char(1) default('t')
				      constraint cne_up_to_date_p_check
				      check(up_to_date_p in ('t','f')),
       topic_id			      constraint cne_topic_id_nn not null
				      constraint cne_topic_id_fk references example_topics,
       -- Define table level constraint
       constraint cne_example_id_one_line_unq unique(example_id, one_line_description)
);

Naming primary keys might not have any obvious advantages. However, here's an example where naming the primary key really helps (and this is by no means a rare case!):

SQL> set autotrace traceonly explain;


SQL> select * from constraint_naming_example, example_topics
where constraint_naming_example.topic_id = example_topics.topic_id;

Execution Plan
----------------------------------------------------------
   0	  SELECT STATEMENT Optimizer=CHOOSE
   1	0   NESTED LOOPS
   2	1     TABLE ACCESS (FULL) OF 'CONSTRAINT_NAMING_EXAMPLE'
   3	1     INDEX (UNIQUE SCAN) OF 'EXAMPLE_TOPICS_TOPIC_ID_PK' (UNIQUE)

Isn't it nice to see "EXAMPLE_TOPICS_TOPIC_ID_PK" in the trace and know exactly which table Oracle is using at each step?

People disagree on whether or not we should be naming not null constraints. So, if you want to name them, please do so and follow the above naming standard. But, naming not null constraints is not a requirement.

About Naming the not null constraints

Though naming "not null" constraints doesn't help immediately in error debugging (e.g., the error will say something like "Cannot insert null value into column"), we recommend naming not null constraints to be consistent in our naming of all constraints.

Adding Comments

You can track comments for any ACS Object. Here we'll track comments for notes. On the note-edit.tcl/adp pair, which is used to display individual notes, we want to put a link to add comments at the bottom of the screen. If there are any comments, we want to show them.

First, we need to generate a url for adding comments. In note-edit.tcl:

set comment_add_url "[general_comments_package_url]comment-add?[export_vars {
  { object_id $note_id }
  { object_name $title }
  { return_url "[ad_conn url]?[ad_conn query]"}
}]"

This calls a global, public Tcl function that the general_comments package registered, to get its URL. You then embed in that URL the id of the note and its title, and set the return_url to the current URL so that the user can return after adding a comment.

We need to create html that shows any existing comments. We do this with another general_comments function:

set comments_html [general_comments_get_comments -print_content_p 1 $note_id]

First, we pass in an optional parameter that says to actually show the contents of the comments, instead of just the fact that there are comments. Then we pass the note id, which is also the acs_object id.

We put our two new variables in the note-edit.adp page:

<a href="@comment_add_url@">Add a comment</a>
 @comments_html@

Request Processor Requirements

By Rafael H. Schloming

The following is a requirements document for the OpenACS 4.0 request processor. The major enhancements in the 4.0 version include a more sophisticated directory mapping system that allows package pageroots to be mounted at arbitrary URLs, tighter integration with the database to allow for flexible, user-controlled URL structures, and subsites.

Most web servers are designed to serve pages from exactly one static pageroot. This restriction can become cumbersome when trying to build a web toolkit full of reusable and reconfigurable components.

The request processor's functionality can be split into two main pieces.

  1. Set up the environment in which a server side script expects to run. This includes things like:

    • Initialize common variables associated with a request.

    • Authenticate the connecting party.

    • Check that the connecting party is authorized to proceed with the request.

    • Invoke any filters associated with the request URI.

  2. Determine to which entity the request URI maps, and deliver the content provided by this entity. If this entity is a proc, then it is invoked. If this entity is a file, then this step involves determining the file type and the manner in which the file must be processed to produce content appropriate for the connecting party. Eventually this may also require determining the capabilities of the connecting browser and choosing the most appropriate form for the delivered content.

It is essential that any errors that occur during the above steps be reported to developers in an easily decipherable manner.

10.0 Multiple Pageroots

10.10 Pageroots may be combined into one URL space.

10.20 Pageroots may be mounted at more than one location in the URL space.

20.0 Application Context

20.10 The request processor must be able to determine a primary context or state associated with a pageroot based on its location within the URL space.

30.0 Authentication

30.10 The request processor must be able to verify that the connecting browser actually represents the party it claims to represent.

40.0 Authorization

40.10 The request processor must be able to verify that the party the connecting browser represents is allowed to make the request.

50.0 Scalability
