
Bootstrapping OpenACS

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Jon Salz

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.
  • Tcl code: /tcl/0-acs-init.tcl and /packages/acs-kernel/bootstrap.tcl

This document describes the startup (bootstrapping) process for an AOLserver running OpenACS.

Before OpenACS 3.3, the OpenACS startup process was extremely simple: after AOLserver performed its internal initialization (reading the configuration file, loading shared libraries and module code, etc.) it scanned through the Tcl library directory (generally /var/lib/aolserver/yourservername/tcl), sourcing each file in sequence.

While this overall structure for initialization is still intact, package management has thrown a wrench into the works - there are a few extra things to do during initialization, most notably:

  • Examine the OpenACS file tree for files that should not be present in OpenACS (i.e., that were once part of the OpenACS distribution but have since been removed).

  • Scan the /packages directory for new packages.

  • Initialize enabled packages by sourcing their *-procs.tcl and *-init.tcl files.

This document examines in detail each of the steps involved in AOLserver/OpenACS startup.

As soon as the nsd daemon is executed by the init process (or otherwise), AOLserver reads its configuration file and chroots itself if necessary. It then loads shared libraries indicated in the .ini file (e.g., the Oracle driver and nssock), and sources Tcl module files (generally in /home/aol30/modules/tcl). This step is, and has always been, the same for all AOLservers, regardless of whether they are running OpenACS.

Next, AOLserver sources, in lexicographical order, each file in the /tcl directory. The first such file is 0-acs-init.tcl, which doesn't do much directly except determine the OpenACS path root (e.g., /var/lib/aolserver/yourservername) by trimming the final component from the path to the Tcl library directory (/var/lib/aolserver/yourservername/tcl). But 0-acs-init.tcl has one important function: sourcing /packages/acs-core/bootstrap.tcl, which does the following:

  1. Initialize some NSVs used by the core. These NSVs are documented in /packages/acs-core/apm-procs.tcl - no need to worry about them unless you're an OpenACS core hacker.

  2. Verify the deletion of obsolete OpenACS files. The /tcl directory has evolved quite a bit over the months and years, and a few files have come and gone. The /www/doc/removed-files.txt file contains a list of files which must be deleted from the AOLserver installation because they can cause weird conflicts, e.g., several security filters being registered at once. bootstrap.tcl scans through this list and logs an error for each of these files that still exists.

  3. Source *-procs.tcl files in the OpenACS core. We source each file matching the *-procs.tcl glob in the /packages/acs-kernel directory, in lexicographical order. These procedures are needed to perform the following steps.

  4. Ensure that the database is available by grabbing and releasing a handle. If we can't obtain a handle, we terminate initialization (since OpenACS couldn't possibly start up the server without access to the database).

  5. Register any new packages in the /packages directory. In each directory inside /packages, we look for a .info file; if we find a package that hasn't yet been registered with the package manager (i.e., it's been copied there manually), we insert information about it into the database. (The first time OpenACS starts up, no packages will have been registered in the database yet, so this step will register every single package in the /packages directory.) Note that packages discovered here are initially disabled; they must be manually enabled in the package manager before they can be used.

  6. Ensure that the acs-kernel package is enabled. If the OpenACS core isn't initialized, the server couldn't possibly be operational, so if there's no enabled version of the OpenACS core we simply mark the latest installed one as enabled.

  7. Load *-procs.tcl files for enabled packages, activating their APIs.

  8. Load *-init.tcl files for enabled packages, giving packages a chance to register filters and procedures, initialize data structures, etc.

  9. Verify that the core has been properly initialized by checking for the existence of an NSV created by the request processor initialization code. If it's not present, the server won't be operational, so we log an error.

At this point, bootstrap.tcl is done executing. AOLserver proceeds to source the remaining files in the /tcl directory (i.e., unpackaged libraries) and begins listening for connections.
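The loading order described above can be sketched in a few lines of Tcl. This is a simplified, hypothetical rendering, not the actual 0-acs-init.tcl/bootstrap.tcl code; the enabled_packages variable stands in for the list the package manager returns.

```tcl
# Derive the server root by dropping the final component of the
# private Tcl library directory (the step 0-acs-init.tcl performs).
set acs_path_root [file dirname [ns_library private]]

# Steps 7 and 8: source every package's API files before any
# package's initialization files, each group in lexicographical order.
foreach pattern { *-procs.tcl *-init.tcl } {
    foreach package $enabled_packages {
        set dir $acs_path_root/packages/$package
        foreach file [lsort [glob -nocomplain $dir/$pattern]] {
            ns_log Notice "bootstrap: sourcing $file"
            source $file
        }
    }
}
```

Sourcing all *-procs.tcl files first guarantees that every package's API is available before any package's initialization code runs.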

Groups, Context, Permissions

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Pete Su


The OpenACS 5.2.3rc1 permissions system allows developers and administrators to set access control policies at the object level; that is, any application or system object represented by a row in the acs_objects table can be access-controlled via a PL/SQL or Tcl interface. The permissions system manages a data model in which privileges are granted, and scripts then check those permissions using another API call.

Although object-level permissions seem appropriate, no developer or administrator wants to explicitly set access control rights for every user and every object on a site. Therefore, OpenACS has two auxiliary mechanisms that make this easier:

  1. the Groups system allows users to be grouped together in flexible ways.

  2. the object model defines a notion of object context, which allows applications to group objects together into larger security domains.

The rest of this document discusses each of these parts, and how they fit together with the permissions system.

OpenACS 5.2.3rc1 has an abstraction called a party. Parties have a recursive definition. We can illustrate how it works with the following simplified data model. First, we define the parties table, where each party has an email address and a URL for contact information.

create table parties (
    party_id  integer not null references acs_objects(object_id),
    email varchar(100),
    url varchar(100)
)

Now we define two subtypes of party, one for persons, and one for groups:

create table groups (
    group_id  integer not null references parties(party_id),
    group_name varchar(100) not null
)

create table persons (
    person_id integer not null references parties(party_id),
    first_names varchar(100) not null,
    last_name varchar(100) not null
)

The users table is also defined in this data model as a subtype of person.

Finally, we define two relations, one for group membership and one for group composition.

The composition relation expresses that every member of group A should also be a member of group B. This relation allows us to define a hierarchy of groups.

The membership relation maps groups to parties. Each member of a group is a party rather than just a user. That is, groups consist of members that are either a person or an entire group. This allows us to say that group A should be a member of another group B.

The groups data model is recursive. Modelling parties as either a person or a group provides a way to model complex hierarchical groupings of persons and groups.
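The two relations can be sketched with simplified, hypothetical tables in the same style as the data model above (the real data model builds both on the generic acs_rels relation):

```sql
-- Membership: a group's members are parties (persons or whole groups).
create table membership_rels (
    group_id   integer not null references groups(group_id),
    member_id  integer not null references parties(party_id)
);

-- Composition: group B contains group A, so A's members are also B's.
create table composition_rels (
    group_id      integer not null references groups(group_id),
    component_id  integer not null references groups(group_id)
);
```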

The full details of the groups data model are beyond the scope of this tutorial. See Parties in OpenACS or OpenACS 4 Groups Design for more details.

NOTE: Much more detailed information about the permissions system and how to use it is available in the OpenACS Permissions Tediously Explained document.

The permissions data model is a mapping between privileges, parties and objects. Parties and objects have already been discussed. Now we focus on privileges.

In OpenACS, a privilege describes the right to perform some operation on some object. Privileges are the basic units out of which we build access control policies. For example in the Unix filesystem, access is controlled by granting users some combination of read, write, or execute privileges on files and directories. In OpenACS 5.2.3rc1, the table of privileges is organized hierarchically so that developers can define privileges that aggregate some set of privileges together. For example, if we have read, write, create and delete privileges, it might be convenient to combine them into a new privilege called "admin". Then, when a user is granted "admin" privilege, she is automatically granted all the child privileges that the privilege contains. The OpenACS 5.2.3rc1 kernel data model defines these privileges:

begin
  acs_privilege.create_privilege('read');
  acs_privilege.create_privilege('write');
  acs_privilege.create_privilege('create');
  acs_privilege.create_privilege('delete');
  acs_privilege.create_privilege('admin');

  acs_privilege.add_child('admin', 'read');
  acs_privilege.add_child('admin', 'write');
  acs_privilege.add_child('admin', 'create');
  acs_privilege.add_child('admin', 'delete');

  commit;
end;

Note that the hierarchy works in one direction only: a user granted read, write, create and delete privileges does not thereby gain the admin privilege, because some operations explicitly require admin. Holding all of its child privileges is no substitute.

To give a user permission to perform a particular operation on a particular object you call acs_permission.grant_permission like this:

begin
  acs_permission.grant_permission (
    object_id  => some_object_id,
    grantee_id => some_party_id,
    privilege  => 'some_privilege_name'
  );
end;

Using just these mechanisms is enough for developers and administrators to effectively define access control for every object in a system.
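From Tcl, the corresponding call looks like this. This is a sketch using the permission::grant procedure found in recent OpenACS releases; the ids shown are placeholders:

```tcl
# Tcl counterpart of acs_permission.grant_permission (recent OpenACS API).
permission::grant \
    -party_id  $some_party_id \
    -object_id $some_object_id \
    -privilege "some_privilege_name"
```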

Explicitly defining permissions on every object individually would become very tedious. OpenACS provides object context as a means of controlling the permissions of a large group of objects at the same time.

In OpenACS 5.2.3rc1, object context is a scoping mechanism. "Scoping" and "scope" are terms best explained by example: consider some hypothetical rows in the address_book table:

...  scope    user_id   group_id  ...
...  user     123                 ...
...  group              456       ...
...  public                       ...

The first row represents an entry in User 123's personal address book, the second row represents an entry in User Group 456's shared address book, and the third row represents an entry in the site's public address book. In this way, the scoping columns identify the security context in which a given object belongs, where each context is either a person or a group of people or the general public (itself a group of people).

Every object lives in a single context. A context is just another object that represents the security domain to which the object belongs. By convention, if an object A does not have any permissions explicitly attached to it, then the system will look at the context_id column in acs_objects and check the context object found there for permissions. Two things control the scope of this search:

  1. the structure of the context hierarchy itself, and

  2. the value of the security_inherit_p flag in each object.

If the security_inherit_p flag is set to 't', the automatic search through the context happens; otherwise it does not. You might set this field to 'f' if you want to override the default permissions in a subtree of some context.

For an example of how to use context hierarchy, consider the forums application. With only row-level permissions it is not obvious how to reasonably initialize the access control list when creating a message. At best, we have to explicitly grant various read and write privileges whenever we create a message, which is tedious. A reasonable thing to do is to create an object representing a forum, and point the context_id field of a new message at the forum. Then, suppose we grant every user in the system read-access to this forum. By default, they will automatically have read-access to the new message we just inserted, since the system automatically checks permissions on the message's context. To allow the creator of the message to change the message after it has been posted we grant the user write-access on the message, and we are done.
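In data-model terms, the forum example comes down to two statements. The object type, bind variable names, and grantee shown here are hypothetical placeholders:

```sql
-- Point the new message's security context at its forum ...
insert into acs_objects (object_id, object_type, context_id)
values (:message_id, 'forum_message', :forum_id);

-- ... so a single forum-wide grant covers every message in it.
begin
  acs_permission.grant_permission(
    object_id  => :forum_id,
    grantee_id => :registered_users_group_id,
    privilege  => 'read'
  );
end;
```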

This mechanism allows developers and administrators to define a hierarchy that matches the structure they need for access control in their application. The following picture (not reproduced here) shows a typical context hierarchy for a hypothetical site:

The top two contexts in the diagram are called "magic" numbers, because in some sense, they are created by default by OpenACS for a specific purpose. The object default_context represents the root of the context hierarchy for the entire site. All permission searches walk up the tree to this point and then stop. If you grant permissions on this object, then by default those permissions will hold for every object in the system, regardless of which subsite they happen to live in. The object security_context_root has a slightly different role. If some object has no permissions attached to it, and its value for security_inherit_p is 'f', or context_id is null, this context is used by default.

See the package developer tutorials for examples on how to use permissions code.

OpenACS 5.2.3rc1 defines three separate mechanisms for specifying access control in applications.

  1. The Groups data model allows you to define hierarchical organizations of users and groups of users.

  2. The Permissions data model allows you to define a hierarchy of user rights.

  3. The Context hierarchy allows you to organize default permissions in a hierarchical fashion.

A PL/SQL or Tcl API is then used to check permissions in application pages.

Backup Strategy

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

The purpose of backup is to enable recovery. Backup and recovery are always risky; here are some steps that minimize the chance recovery is necessary:

  • Store everything on a fault-tolerant disk array (RAID 1 or 5 or better).

  • Use battery backup.

  • Use more reliable hardware, such as SCSI instead of IDE.

These steps improve the chances of successful recovery:

  • Store backups on a third disk on another controller

  • Store backups on a different computer on a different network in a different physical location. (Compared to off-line backup such as tapes and CDRs, on-line backup is faster and more likely to succeed, but requires maintenance of another machine.)

  • Plan and configure for recovery from the beginning.

  • Test your recovery strategy from time to time.

  • Make it easy to maintain and test your recovery strategy, so that you are more likely to do it.

OpenACS installations comprise files and database contents. If you follow the reference install and put all files, including configuration files, in /var/lib/aolserver/$OPENACS_SERVICE_NAME/, and back up the database nightly to a file in /var/lib/aolserver/$OPENACS_SERVICE_NAME/database-backup, then you can apply standard file-based backup strategies to /var/lib/aolserver/$OPENACS_SERVICE_NAME.
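A minimal nightly sketch of this strategy, assuming the reference paths and a PostgreSQL backend; the backup destination, database name, and dump format are placeholders to adapt to your installation:

```shell
#!/bin/sh
# Hypothetical backup sketch for a reference OpenACS install.
SERVICE=$OPENACS_SERVICE_NAME
ROOT=/var/lib/aolserver/$SERVICE
STAMP=$(date +%Y%m%d)

# 1. Dump the database into the service tree, so one archive holds everything.
pg_dump -f $ROOT/database-backup/$SERVICE-$STAMP.dmp $SERVICE

# 2. Archive the whole service tree to separate storage
#    (ideally another disk, controller, or machine).
tar czf /backup/$SERVICE-$STAMP.tar.gz -C /var/lib/aolserver $SERVICE
```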

The OpenACS Database Access API

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Pete Su and Jon Salz. Modified by Roberto Mello.

One of OpenACS's great strengths is that code written for it is very close to the database. It is very easy to interact with the database from anywhere within OpenACS, and we have a coherent API for database access which makes this even easier.

More detailed information about the DB api is available at Database Access API.

The OpenACS database API is meant to save developers from making common mistakes and to provide a more structured syntax for specifying database operations, including transactions. Here's an example of the API.

set count 0
set rows [list]
set tcl_var "foo"
set sql {
    select foo, bar, baz
    from some_table, some_other_table
    where some_table.id = some_other_table.id
      and some_table.condition_p = :tcl_var
}

db_transaction {
    db_foreach my_example_query_name $sql {
        lappend rows [list $foo $bar $baz]
        incr count
    }
    foreach row $rows {
        lassign $row foo bar baz
        call_some_proc $foo $bar $baz
    }
}

There are several things to note here:

  1. No explicit code for grabbing and releasing handles. Usage of the Database API implicitly deals with all handle management issues.

  2. The db_transaction command makes the scope of a transaction clear; db_transaction takes the code block argument and automatically runs it in the context of a transaction. If you use something like db_foreach inside it, though, you need to make sure that there are no calls in the code block that would grab a second database handle, since the transaction is only valid for one handle (that's why we build up a list of returned values and call a second proc outside the db_foreach loop).

  3. The command db_foreach writes our old while loop for us.

  4. Every SQL query has a name, which is used in conjunction with .XQL files to support multiple databases.

  5. Finally, and most importantly, the API implements bind variables, which we cover next.

Bind variables are placeholders for literal values in an SQL query being sent to the server. In the old way, data was generally passed directly to the DB backend via Tcl string interpolation. In the example above, the query would look like:

select foo, bar, baz
from some_table, some_other_table
where some_table.id=some_other_table.id
and some_table.condition_p = '$tcl_var'

There are a few problems with this:

  1. If the value of $tcl_var is a huge string, then we waste a lot of time in the database server doing useless parsing.

  2. Second, if the literal value contains characters like single quotes, we have to be careful to properly escape them, because not quoting them will lead to surprising errors.

  3. Third, no type checking occurs on the literal value. Finally, if the Tcl variable is passed in from a web form or is otherwise subject to external modification, there is nothing keeping malicious users from setting it to a string that changes the query textually. This type of attack, called SQL smuggling (better known today as SQL injection), can be very damaging - entire tables can be exposed or have their contents deleted, for example.

    Another very important reason for using bind variables is performance. Oracle caches previously parsed queries, keyed on the literal query text, and performs bind variable substitution after parsing the SQL statement. This means that SQL statements using bind variables will always match the cached parse (assuming all else is the same), while SQL statements that interpolate values will only match when those values are exactly the same. Bind variables therefore improve the hit rate of the query cache considerably, which can make the server much more efficient.

What the DB API (in conjunction with the database drivers implemented for AOLserver) does is send the SQL statement to the server for parsing, then bind values to the variables and send those values along separately as a second step. This separate binding step is where the term bind variable comes from.

This split has several advantages. First, type checking happens on the literal. If the column we are comparing against holds numbers, and we send a string, we get a nice error. Second, since string literals are no longer in the query, no extra quoting is required. Third, substitution of bind variables cannot change the actual text of the query, only the literal values in the placeholders. The database API makes bind variables easy to use by hooking them smoothly into the Tcl runtime so you simply provide :tclvar and the value of $tclvar is sent to the backend to actually execute the query.

The database API parses the query and pulls out all the bind variable specifications and replaces them with generic placeholders. It then automatically pulls the values of the named Tcl vars out of the runtime environment of the script, and passes them to the database.
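For example (hypothetical table and column names), the value of n_days below travels to the database in the bind step, never as query text:

```tcl
set n_days 7
# :n_days becomes a driver placeholder; the value of the local Tcl
# variable n_days is shipped separately when the query executes.
set n_recent [db_string recent_users_count {
    select count(*)
    from users
    where creation_date > sysdate - :n_days
}]
```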

Note that while this looks like a simple syntactic change, it really is very different from how interpolated text queries work. You use bind variables to replace what would otherwise be a literal value in a query, and Tcl style string interpolation does not happen. So you cannot do something like:

set table "baz"
set condition "where foo = bar"

db_foreach my_query { select :table from some_table where :condition }
    

SQL will not allow a literal to occur where we've put the bind variables, so the query is syntactically incorrect. You have to remember that while the bind variable syntax looks similar to variable interpolation in Tcl, it is not the same thing at all.

Finally, the DB API has several different styles for passing bind variable values to queries. In general, use the style presented here because it is the most convenient.

Every db_* command accepting a SQL command as an argument supports bind variables. You can either

  • Specify the -bind switch to provide a set with bind variable values, or

  • Specify the -bind switch to explicitly provide a list of bind variable names and values, or

  • Not specify a bind variable list at all, in which case Tcl variables are used as bind variables.

The default behavior (i.e., if the -bind switch is omitted) is that these procedures expect to find local variables that correspond in name to the referenced bind variables, e.g.:

set user_id 123456
set role "administrator"

db_foreach user_group_memberships_by_role {
    select g.group_id, g.group_name
    from user_groups g, user_group_map map
    where g.group_id = map.group_id
    and map.user_id = :user_id
    and map.role = :role
} {
    # do something for each group of which user 123456 is in the role
    # of "administrator"
}

      

The value of the local Tcl variable user_id (123456) is bound to the user_id bind variable.

The -bind switch can take the name of an ns_set containing keys for each bind variable named in the query, e.g.:

set bind_vars [ns_set create]
ns_set put $bind_vars user_id 123456
ns_set put $bind_vars role "administrator"

db_foreach user_group_memberships_by_role {
    select g.group_id, g.group_name
    from user_groups g, user_group_map map
    where g.group_id = map.group_id
    and map.user_id = :user_id
    and map.role = :role
} -bind $bind_vars {
    # do something for each group in which user 123456 has the role
    # of "administrator"
}

      

Alternatively, as an argument to -bind you can specify a list of alternating name/value pairs for bind variables:

	
db_foreach user_group_memberships_by_role {
    select g.group_id, g.group_name
    from user_groups g, user_group_map map
    where g.group_id = map.group_id
    and map.user_id = :user_id
    and map.role = :role
} -bind [list user_id 123456 role "administrator"] {
    # do something for each group in which user 123456 has the role
    # of "administrator"
}

      

When processing a DML statement, Oracle coerces empty strings into null. (This coercion does not occur in the WHERE clause of a query, i.e. col = '' and col is null are not equivalent.)

As a result, when using bind variables, the only way to make Oracle set a column value to null is to set the corresponding bind variable to the empty string, since a bind variable whose value is the string "null" will be interpreted as the literal string "null".

These Oracle quirks make writing clear and abstract DML difficult. Here is an example that illustrates why:

#
# Given the table:
#
#   create table foo (
#           bar        integer,
#           baz        varchar(10)
#   );
#

set bar ""
set baz ""

db_dml foo_create "insert into foo(bar, baz) values(:bar, :baz)"
#
# the values of the "bar" and "baz" columns in the new row are both
# null, because Oracle has coerced the empty string (even for the
# numeric column "bar") into null in both cases

      

Since databases other than Oracle do not coerce empty strings into null, this code has different semantics depending on the underlying database (i.e., the row that gets inserted may not have null as its column values), which defeats the purpose of SQL abstraction.

Therefore, the Database Access API provides a database-independent way to represent null (instead of the Oracle-specific idiom of the empty string): db_null.

Use it instead of the empty string whenever you want to set a column value explicitly to null, e.g.:

set bar [db_null]
set baz [db_null]

db_dml foo_create "insert into foo(bar, baz) values(:bar, :baz)"
#
# sets the values for both the "bar" and "baz" columns to null

The database library can transparently maintain pools of sequence values, so that each request for a new sequence value (using db_nextval) does not incur a roundtrip to the server. For instance, this functionality is very useful in the security/sessions library, which very frequently allocates values from the sec_id_seq sequence. To utilize this functionality for a particular sequence, register the sequence to be pooled, either using the db_register_pooled_sequence procedure at server startup time, or by including a configuration parameter of the form

PoolSequence.sequence_name_seq=count

in any configuration section in the yourservername.ini file, e.g.,

[ns/server/yourservername/acs/security]
PoolSequence.sec_id_seq=20

    

The database library will allocate this number of sequence values at server startup. It will periodically scan pools and allocate new values for sequences which are less than half-full. (This normally occurs every 60 seconds, and is configurable via the PooledSequenceUpdateInterval parameter in the [ns/server/yourservername/acs/database] configuration section.)
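The procedural equivalent of the configuration entry above is a single call at server startup (e.g., from a *-init.tcl file):

```tcl
# Pool 20 values of sec_id_seq, matching the .ini example above.
db_register_pooled_sequence sec_id_seq 20
```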

The Database API has several functions that wrap familiar parts of the AOLserver database API.

Note that you never have to use ns_db anymore (including ns_db gethandle)! Just start doing stuff, and (if you want) call db_release_unused_handles when you're done as a hint to release the database handle.

db_abort_transaction
db_abort_transaction
	  

Aborts all levels of a transaction; that is, if this is called within several nested transactions, all of them are terminated. Use this instead of db_dml "abort" "abort transaction".
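A typical use, sketched with a hypothetical inventory table and business rule:

```tcl
db_transaction {
    db_dml reserve {
        update items set quantity = quantity - :n
        where item_id = :item_id
    }
    # Hypothetical rule: abort the whole (possibly nested) transaction
    # if the reservation oversold the item.
    if { [db_string quantity_left {
        select quantity from items where item_id = :item_id
    }] < 0 } {
        db_abort_transaction
    }
}
```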

db_multirow
db_multirow [ -local ] [ -append ] [ -extend column_list ] var-name statement-name sql [ -bind bind_set_id | -bind bind_value_list ] code_block [ if_no_rows if_no_rows_block ]

Performs the SQL query sql, saving results in variables of the form var_name:1, var_name:2, etc, setting var_name:rowcount to the total number of rows, and setting var_name:columns to a list of column names.

Each row also has a column, rownum, automatically added and set to the row number, starting with 1. Note that this will override any column in the SQL statement named 'rownum', including Oracle's rownum pseudo-column.

If the -local switch is passed, the variables defined by db_multirow will be set locally (useful if you're compiling dynamic templates in a function or similar situations).

You may supply a code block, which will be executed for each row in the loop. This is very useful if you need to make computations that are better done in Tcl than in SQL, for example using ns_urlencode or ad_quotehtml, etc. When the Tcl code is executed, all the columns from the SQL query will be set as local variables in that code. Any changes made to these local variables will be copied back into the multirow.

You may also add additional, computed columns to the multirow using the -extend { col_1 col_2 ... } switch. This is useful for things like constructing a URL for the object retrieved by the query.

If you're constructing your multirow through multiple queries with the same set of columns, but with different rows, you can use the -append switch. This causes the rows returned by this query to be appended to the rows already in the multirow, instead of starting a clean multirow, as is the normal behavior. The columns must match the columns in the original multirow, or an error will be thrown.

Your code block may call continue in order to skip a row and not include it in the multirow. Or you can call break to skip this row and quit looping.

Notice the nonstandard numbering (everything else in Tcl starts at 0); the reason is that a graphics designer, a non-programmer, may wish to work with row numbers.

Example:

db_multirow -extend { user_url } users users_query {
    select user_id, first_names, last_name, email from cc_users
} {
    set user_url [acs_community_member_url -user_id $user_id]
}
    

You can also iterate over a multirow after it has been created - check the documentation for template::multirow

For example,

db_multirow assets assets {
  select asset_id
  from ...
}

..

set asset_id_l [list]
template::multirow foreach assets {
  lappend asset_id_l $asset_id
}
          

Technically it's equivalent to using a code block on the end of your db_multirow.

db_null
db_null

Returns a value which can be used in a bind variable to represent the SQL value null. See Nulls and Bind Variables above.

db_foreach
db_foreach statement-name sql [ -bind bind_set_id | -bind bind_value_list ]  [ -column_array array_name | -column_set set_name ]  code_block [ if_no_rows if_no_rows_block ]

Performs the SQL query sql, executing code_block once for each row, with variables set to column values (or an array or set populated if -column_array or -column_set is specified). If the query returns no rows, executes if_no_rows_block (if provided).

Example:

db_foreach select_foo "select foo, bar from greeble" {
    doc_body_append "<li>foo=$foo; bar=$bar\n"
} if_no_rows {
    doc_body_append "<li>There are no greebles in the database.\n"
}

	  

The code block may contain break statements (which terminate the loop and flush the database handle) and continue statements (which continue to the next row of the loop).

db_1row
db_1row statement-name sql [ -bind bind_set_id | -bind bind_value_list ]  [ -column_array array_name | -column_set set_name ]
	  

Performs the SQL query sql, setting variables to column values. Raises an error if the query does not return exactly 1 row.

Example:

db_1row select_foo "select foo, bar from greeble where greeble_id = :greeble_id"
# Bombs if there's no such greeble!
# Now $foo and $bar are set.

	  
db_0or1row
db_0or1row statement-name sql [ -bind bind_set_id | -bind bind_value_list ]  [ -column_array array_name | -column_set set_name ]
	  

Performs the SQL query sql. If a row is returned, sets variables to column values and returns 1. If no rows are returned, returns 0. If more than one row is returned, throws an error.

db_nextval
db_nextval sequence-name

Returns the next value for the sequence sequence-name (using a SQL statement like select sequence-name.nextval from dual). If sequence pooling is enabled for the sequence, transparently uses a value from the pool if available to save a round-trip to the database (see Sequence Pooling).

db_register_pooled_sequence
db_register_pooled_sequence sequence-name pool-size

Registers the sequence sequence-name to be pooled, with a pool size of pool-size sequence values (see Sequence Pooling).

db_string
db_string statement-name sql [ -default default ] [ -bind bind_set_id | -bind bind_value_list ]
	  

Returns the first column of the result of SQL query sql. If sql doesn't return a row, returns default (or throws an error if default is unspecified). Analogous to database_to_tcl_string and database_to_tcl_string_or_null.

db_list
db_list statement-name sql [ -bind bind_set_id | -bind bind_value_list ]
	  

Returns a Tcl list of the values in the first column of the result of SQL query sql. If sql doesn't return any rows, returns an empty list. Analogous to database_to_tcl_list.

db_list_of_lists
db_list_of_lists statement-name sql [ -bind bind_set_id | -bind bind_value_list ]
	  

Returns a Tcl list, each element of which is a list of all column values in a row of the result of SQL query sql. If sql doesn't return any rows, returns an empty list. (Analogous to database_to_tcl_list_list.)
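Example (illustrative names); each element of the result is itself a list of column values:

foreach row [db_list_of_lists select_rows "select greeble_id, foo from greeble"] {
    set greeble_id [lindex $row 0]
    set foo [lindex $row 1]
    # ... do something with this row ...
}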

db_dml
db_dml statement-name sql  [ -bind bind_set_id | -bind bind_value_list ]  [ -blobs blob_list | -clobs clob_list |
      -blob_files blob_file_list | -clob_files clob_file_list ]
	  

Performs the DML or DDL statement sql.

If a length-n list of blobs or clobs is provided, then the SQL should return n blobs or clobs into the bind variables :1, :2, ... :n. blobs or clobs, if specified, should be a list of individual BLOBs or CLOBs to insert; blob_files or clob_files, if specified, should be a list of paths to files containing the data to insert. Only one of -blobs, -clobs, -blob_files, and -clob_files may be provided.

Example:

db_dml insert_photos "
        insert into photos(photo_id, image, thumbnail_image)
        values(photo_id_seq.nextval, empty_blob(), empty_blob())
        returning image, thumbnail_image into :1, :2
    "  -blob_files [list "/var/tmp/the_photo" "/var/tmp/the_thumbnail"]

	  

This inserts a new row into the photos table, with the contents of the files /var/tmp/the_photo and /var/tmp/the_thumbnail in the image and thumbnail_image columns, respectively.

db_write_clob, db_write_blob, db_blob_get_file
db_write_clob statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

db_write_blob statement-name sql [ -bind bind_set_id | -bind bind_value_list ]

db_blob_get_file statement-name sql [ -bind bind_set_id | -bind bind_value_list ]
	  

Analogous to ns_ora write_clob/write_blob/blob_get_file.

db_release_unused_handles
	    db_release_unused_handles
	  

Releases any allocated, unused database handles.

db_transaction
db_transaction code_block [ on_error { code_block } ]
	  

Executes code_block transactionally. Nested transactions are supported (end transaction is transparently ns_db dml'ed when the outermost transaction completes). The db_abort_transaction command can be used to abort all levels of transactions. It is possible to specify an optional on_error code block that will be executed if some code in code_block throws an exception. The variable errmsg will be bound in that scope. If there is no on_error code, any errors will be propagated.

Example:

proc replace_the_foo { col } {
    db_transaction {
        db_dml "delete from foo"
        db_dml "insert into foo(col) values($col)"
    }
}

proc print_the_foo {} {
    doc_body_append "foo is [db_string get_foo "select col from foo"]<br>\n"
}

replace_the_foo 8
print_the_foo ; # Writes out "foo is 8"

db_transaction {
    replace_the_foo 14
    print_the_foo ; # Writes out "foo is 14"
    db_dml "insert into some_other_table(col) values(999)"
    ...
    db_abort_transaction
} on_error {
    doc_body_append "Error in transaction: $errmsg"
}


print_the_foo ; # Writes out "foo is 8"

	  
db_resultrows
db_resultrows
	  

Returns the number of rows affected or returned by the previous statement.
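For example (illustrative names), db_resultrows can distinguish an update that matched no rows:

db_dml update_foo "update greeble set foo = :foo where greeble_id = :greeble_id"
if { [db_resultrows] == 0 } {
    # no row matched; perhaps insert a new one instead
}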

db_with_handle
db_with_handle var code_block

Places a database handle into the variable var and executes code_block. This is useful when you don't want to have to use the new API (db_foreach, db_1row, etc.), but need to use database handles explicitly.

Example:

proc lookup_the_foo { foo } {
    db_with_handle db {
        return [db_string unused "select ..."]
    }
}

db_with_handle db {
    # Now there's a database handle in $db.
    set selection [ns_db select $db "select foo from bar"]
    while { [ns_db getrow $db $selection] } {
        set_variables_after_query

        lookup_the_foo $foo
    }
}

	  
db_nullify_empty_string
db_nullify_empty_string string

For true SQL purists, we provide the convenience function db_nullify_empty_string, which returns [db_null] if its string argument is the empty string and can be used to encapsulate another Oracle quirk:

set baz ""

# Clean out the foo table
#
db_dml unused "delete from foo"

db_dml unused "insert into foo(baz) values('$baz')"

set n_rows [db_string unused "select count(*) from foo where baz is null"]
#
# $n_rows is 1; in effect, the "baz is null" criterion is matching
# the empty string we just inserted (because of Oracle's coercion
# quirk)

	  

To balance out this asymmetry, you can explicitly set baz to null by writing:

set baz [db_nullify_empty_string $baz]
db_dml foo_insert "insert into foo(baz) values(:baz)"

	  

OpenACS Edit This Page Templates

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

by Nick Carroll

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.
  • Learn about the OpenACS templating system.

  • Learn about subsites and site-map administration.

The OpenACS templating system allows you to give your site a consistent look and feel. It also promotes code maintainability in the presentation layer, by allowing presentation components to be reused across multiple pages. If you need to change the layout for some reason, then you only need to make that change in one location, instead of across many files.

In this problem set you will familiarise yourself with the templating system in OpenACS. This will be achieved through customising an existing edit-this-page application template.

Before proceeding, it is strongly advised to read the templating documentation on your OpenACS installation (http://localhost:8000/doc/acs-templating). The documentation lists the special tags available for ADP files.

  • Create a subsite called pset3.

  • A subsite is simply a directory or subdirectory mounted at the end of your domain name. This can be done in one of two places:

    • http://localhost:8000/admin/site-map

    • or the subsite admin form on the main site, which is available when you login to your OpenACS installation.

  • Checkout ETP from CVS:

    cd ~/openacs/packages
                cvs -d:pserver:anonymous@openacs.org:/cvsroot login
                cvs -d:pserver:anonymous@openacs.org:/cvsroot co edit-this-page
  • Go to the package manager at http://yoursite/acs-admin/apm and install the new package: edit-this-page.

  • Or use the "Add Application" form available on the Main site.

  • Work out how to change the ETP application.

  • Investigate each of the available ETP templates:

    • Default

    • News

    • FAQ

  • Browse the files for each of the above ETP templates at:

    cd ~/openacs/packages/edit-this-page/templates
  • Use the article template as the basis of our new col2 template.

    cp article-content.adp col2-content.adp
                cp article-content.tcl col2-content.tcl
                cp article-index.adp col2-index.adp
                cp article-index.tcl col2-index.tcl
  • The template should provide us with the following ETP layout:

    Table 10.1. Table showing ETP layout

    Header
    Sidebar Main Content Pane

  • The "Main Content" pane should contain the editable content that ETP provides.

  • The "Header" should display the title of the page that you set in ETP.

  • The "Sidebar" should display the extlinks that you add as a content item in ETP.

  • You need to register your template with ETP so that it appears in the drop-down menu that you would have seen in Exercise 3.

    cd ~/openacs/packages/edit-this-page/tcl
                emacs etp-custom-init.tcl
  • Use the function etp::define_application to register your template with ETP

    • Uncomment the "asc" definition

    • Set allow_extlinks to true; the rest should be false.

  • Restart your server for the changes to take effect.

  • Configure your ETP instance at /lab4/index to use the col2 template.

  • Create external links to link to other mounted ETP instances.

  • Check that your external links show up in the sidebar when you view your ETP application using the col2 template.

This problem set was originally written by Nick Carroll in August 2004 for the University of Sydney Course EBUS5002.

This material is copyright 2004 by Nick Carroll. It may be copied, reused, and modified, provided credit is given to the original author.

Using nXML mode in Emacs

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Jeff Davis

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.

An alternative to psgml mode is nXML by James Clark, a new major mode for GNU Emacs for editing XML. It features highlighting, indentation, and on-the-fly validation against a RelaxNG schema.

Resources

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

Here are some resources that OpenACS users have found useful.

The Request Processor

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

By Pete Su

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.

This document is a brief introduction to the OpenACS 5.2.3rc1 Request Processor; more details can be found in the OpenACS 4 Request Processor Design. Here we cover the high level concepts behind the system, and implications and usage for the application developer.

The 5.2.3rc1 Request Processor is a global filter and set of Tcl procs that respond to every incoming URL reaching the server. The following diagram summarizes the stages of the request processor assuming a URL request like http://someserver.com/notes/somepage.adp.

Stage 1: Search Site Map

The first thing the RP does is to map the given URL to the appropriate physical directory in the filesystem, from which to serve content. We do this by searching the site map data model (touched on in the Packages, and further discussed in Writing OpenACS Application Pages). This data model maps URLs to objects representing content, and these objects are typically package instances.

After looking up the appropriate object, the RP stores the URL, the ID of the object it found, and the package and package instance the object belongs to into the environment of the connection. This environment can be queried using the ad_conn procedure, which is described in detail in OpenACS 4 Request Processor Design. The page development tutorial shows you how to use this interface to make your pages aware of which instance was requested.

Stage 2: Authentication

Next, the Request Processor examines the request for session information. Session information is generally sent from the client (the user's browser) to the server via cookies. The security/session handler is described in detail in its own document. It examines the client request and either extracts or sets up new session tokens for the user.

Stage 3: Authorization

Next, the Request Processor checks whether the user has appropriate access privileges to the requested part of the site. In OpenACS 5.2.3rc1, access control is dictated by the permissions system. In this case, the RP checks if the user has "read" privileges on the object in the site map specified by the URL. This object is typically a package instance, but it could easily be something more granular, such as whether the user can view a particular piece of content within a package instance. This automatic check makes it easy to set up sites with areas that are only accessible to specific groups of users.

Stage 4: URL Processing, File Search

Finally, the Request Processor finds the file we intend to serve, searching the filesystem to locate the actual file that corresponds to an abstract URL. It searches for files with predefined "magic" extensions, i.e. files that end with: .html, .tcl and .adp.

If the RP can't find any matching files with the expected extensions, it will look for virtual-url-handler files, or .vuh files. A .vuh file will be executed as if it were a Tcl file, but with the tail end of the URL removed. This allows the code in the .vuh file to act like a registered procedure for an entire subtree of the URL namespace. Thus a .vuh file can be thought of as a replacement for filters and registered procs, except that they integrate cleanly and correctly with the RP's URL mapping mechanisms. The details of how to use these files are described in OpenACS 4 Request Processor Design.
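As a minimal sketch (the /notes path, the note-view page, and the note_id parameter are hypothetical, not part of any actual package), a file /notes/view.vuh could serve URLs like /notes/view/42 by pulling the trailing path out of the request and handing off to a real page, assuming the request processor's rp_form_put and rp_internal_redirect helpers:

# /notes/view.vuh -- hypothetical virtual URL handler
# [ad_conn path_info] holds the part of the URL after "view", e.g. "42"
set note_id [ad_conn path_info]
rp_form_put note_id $note_id
rp_internal_redirect "/packages/notes/www/note-view"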

Once the appropriate file is found, it is either served directly if it's static content, or sent to the template system or the standard Tcl interpreter if it's a dynamic page.

Once the flow of control reaches a dynamic page, the Request Processor has populated the environment of the request with several pieces of useful information. The RP's environment is accessible through the ad_conn interface, and the following calls should be useful to you when developing dynamic pages:

[ad_conn user_id]

The ID of the user associated with this request. By convention this is zero if there is no user.

[ad_conn session_id]

The ID of the session associated with this request.

[ad_conn url]

The URL associated with the request.

[ad_conn urlv]

The URL associated with the request, represented as a list instead of a single string.

[ad_conn file]

The actual local filesystem path of the file that is being served.

[ad_conn object_url]

If the URL refers to a site map object, this is the URL to the root of the tree where the object is mounted.

[ad_conn package_url]

If the URL refers to a package instance, this is the URL to the root of the tree where the package is mounted.

[ad_conn extra_url]

If we found the URL in the site map, this is the tail of the URL following the part that matched a site map entry.

[ad_conn object_id]

If the URL refers to a site map object, this is the ID of that object.

[ad_conn package_id]

If the URL refers to a package instance, this is the ID of that package instance.

[ad_conn package_key]

If the URL refers to a package instance, this is the unique key name of the package.

[ad_conn path_info]

In a .vuh file, path_info is the trailing part of the URL not matched by the .vuh file.
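Putting a few of these together, a hypothetical dynamic page might begin:

set user_id [ad_conn user_id]
if { $user_id == 0 } {
    # nobody is logged in
}
# Build a link relative to wherever this package instance is mounted
set index_url "[ad_conn package_url]index"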

Install tclwebtest.

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

Download the tclwebtest source, unpack it, and put it in an appropriate place. (tclwebtest 1.0 will be required for auto-tests in OpenACS 5.1. When it exists, the cvs command here will be replaced with http://prdownloads.sourceforge.net/tclwebtest/tclwebtest-0.3.tar.gz?download.) As root:

cd /tmp
cvs -z3 -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/tclwebtest co tclwebtest
#wget http://umn.dl.sourceforge.net/sourceforge/tclwebtest/tclwebtest-1.0.tar.gz
#tar xvzf tclwebtest-1.0.tar.gz
mv tclwebtest /usr/local/tclwebtest
ln -s /usr/local/tclwebtest/tclwebtest /usr/local/bin

Installing SSL Support for an OpenACS service

Created by Gustaf Neumann, last modified by Gustaf Neumann 17 Feb 2008, at 07:08 AM

Debian Users: apt-get install openssl before proceeding.

  1. Make sure nsopenssl.so is installed for AOLserver.

  2. Uncomment this line from config.tcl.

    #ns_param   nsopenssl       ${bindir}/nsopenssl.so
    
  3. Prepare a certificate directory for the service.

    [$OPENACS_SERVICE_NAME etc]$ mkdir /var/lib/aolserver/$OPENACS_SERVICE_NAME/etc/certs
    [$OPENACS_SERVICE_NAME etc]$ chmod 700 /var/lib/aolserver/$OPENACS_SERVICE_NAME/etc/certs
    
  4. It takes two files to support an SSL connection. The certificate is the public half of the key pair; the server sends the certificate to a browser requesting SSL. The key is the private half of the key pair. In addition, the certificate must be signed by a Certificate Authority or browsers will protest. Each web browser ships with a built-in list of acceptable Certificate Authorities (CAs) and their keys. Only a site certificate signed by a known and approved CA will work smoothly. Any other certificate will cause browsers to produce warning messages or block the site. Unfortunately, getting a site certificate signed by a CA costs money. In this section, we'll generate an unsigned certificate which will work in most browsers, albeit with pop-up messages.

    Use an OpenSSL perl script to generate a certificate and key.

    Debian users: use /usr/lib/ssl/misc/CA.pl instead of /usr/share/ssl/misc/CA

    Mac OS X users: use perl /System/Library/OpenSSL/misc/CA.pl -newcert instead of perl /usr/share/ssl/misc/CA -newcert

    [$OPENACS_SERVICE_NAME $OPENACS_SERVICE_NAME]$ cd /var/lib/aolserver/$OPENACS_SERVICE_NAME/etc/certs
    [$OPENACS_SERVICE_NAME certs]$ perl /usr/share/ssl/misc/CA -newcert
    Using configuration from /usr/share/ssl/openssl.cnf
    Generating a 1024 bit RSA private key
    ...++++++
    .......++++++
    writing new private key to 'newreq.pem'
    Enter PEM pass phrase:

    Enter a pass phrase for the CA certificate. Then, answer the rest of the questions. At the end you should see this:

    Certificate (and private key) is in newreq.pem
    [$OPENACS_SERVICE_NAME certs]$

    newreq.pem contains our certificate and private key. The key is protected by a passphrase, which means that we'd have to enter the pass phrase each time the server starts. This is impractical and unnecessary, so we create an unprotected version of the key. Security implication: if anyone gets access to the file keyfile.pem, they effectively own the key as much as you do. Mitigation: don't use this key/cert combo for anything besides providing SSL for the web site.

    [$OPENACS_SERVICE_NAME certs]$ openssl rsa -in newreq.pem -out keyfile.pem
    read RSA key
    Enter PEM pass phrase:
    writing RSA key
    [$OPENACS_SERVICE_NAME certs]$ 

    To create the certificate file, we take the combined file, copy it, and strip out the key.

    [$OPENACS_SERVICE_NAME certs]$ cp newreq.pem certfile.pem
    [$OPENACS_SERVICE_NAME certs]$ emacs certfile.pem
    

    Strip out the section that looks like

    -----BEGIN RSA PRIVATE KEY-----
    Proc-Type: 4,ENCRYPTED
    DEK-Info: DES-EDE3-CBC,F3EDE7CA1B404997
    S/Sd2MYA0JVmQuIt5bYowXR1KYKDka1d3DUgtoVTiFepIRUrMkZlCli08mWVjE6T
    (11 lines omitted)
    1MU24SHLgdTfDJprEdxZOnxajnbxL420xNVc5RRXlJA8Xxhx/HBKTw==
    -----END RSA PRIVATE KEY-----
  5. If you start up using the etc/daemontools/run script, you will need to edit this script to make sure the ports are bound for SSL. Details of this are in the run script.

Next Page