
Weblog

Filtered by category Cookbook, postings 1 - 10 of 35

Fresh install of OpenACS 5.10 on Oracle 19c

Created by Gustaf Neumann, last modified by Gustaf Neumann 27 Nov 2023, at 05:48 PM

Here is a short summary of my steps to get the version of OpenACS from the branch oacs-5-10 installed on Oracle 19c. It seems there had been no fresh installations of OpenACS on Oracle for a while, so several compatibility fixes were necessary in the oacs-5-10 branch to get things running. The oacs-5-10 branch will be released in the future as OpenACS 5.10.1.

Below are the steps to install Oracle on a virtual machine and to configure it such that it works with OpenACS. There are probably many more ways to achieve similar tasks, but since the installation is not as trivial as with PostgreSQL, someone might find these notes useful. My installation was on macOS 11.6.3, but it should work very similarly on, e.g., Linux systems.

The version currently recommended by Oracle is 19c, which is the Long Term Release with a support end date of April 30, 2027.

Installing a virtual machine with Oracle Linux and the Oracle Database

Preliminaries:

  • install vagrant
  • install VirtualBox
  • git clone Oracle's vagrant-projects
  • for installing Oracle 19c, change to vagrant-projects/OracleDatabase/19.3.0 and read the instructions. These instructions tell you to place the Oracle installation zip file for Linux in this directory. Without the proper Oracle database installation file for your architecture, the "vagrant up" command below will fail. Oracle Database is free to use for private purposes.

Now, run the following command in the vagrant-projects/OracleDatabase/19.3.0 directory:

vagrant up

This command will install "Oracle Linux Server release 7.9", which is a flavor of Red Hat Enterprise Linux (RHEL) branded by Oracle. The command will run for a couple of minutes. Once it has finished, you can use ssh to log into this machine:

vagrant ssh

On this machine, you will find the Oracle server already running. To ease access to the database and the Oracle utilities, add the following lines to the .bash_profile of the vagrant user:

# User specific environment and startup programs
export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1
PATH=$PATH:$HOME/.local/bin:$ORACLE_HOME/bin:$HOME/bin

To make the shared libraries easily accessible, add them to the standard paths (this will be needed, e.g., for the compilation of the Oracle driver for NaviServer):

sudo sh -c "echo /opt/oracle/product/19c/dbhome_1/instantclient > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo sh -c "echo /opt/oracle/product/19c/dbhome_1/lib > /etc/ld.so.conf.d/oracle-libs.conf"
sudo ldconfig
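To verify that the Oracle client library can now be resolved by the dynamic linker, you can check the linker cache (libclntsh is the Oracle client shared library; the exact version suffix may vary):

ldconfig -p | grep libclntsh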

You might also wish to change the password of the Oracle database administrator accounts. We use "oracle" as the password here, which is also used in later examples.

su - oracle -c "bash /home/oracle/setPassword.sh oracle"

Once $ORACLE_HOME/bin is on your path, you should be able to list the "status" and "services" provided by the Oracle database:

lsnrctl status
lsnrctl services

You will see there a container database "ORCLCDB" and a pluggable database "ORCLPDB1" (this is Oracle vocabulary). You should also be able to connect to the Oracle Database server via

sqlplus system/oracle@ORCLPDB1
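As a quick sanity check from the sqlplus prompt, you can list the pluggable databases and their open mode (for the setup above, this should show ORCLPDB1 in READ WRITE mode):

SQL> select name, open_mode from v$pdbs;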

Configuring an Oracle service for OpenACS

If this is working, Oracle and the VM are already functioning. Now we will create our own "service" for OpenACS from sqlplus:

exec dbms_service.CREATE_SERVICE('pdb1','openacs');  -- service_name and network_name
exec dbms_service.start_service('pdb1');

After leaving sqlplus, you should see the new service via "lsnrctl status". Now, you should be able to connect to the new service and save its state:

sqlplus system/oracle@localhost:1521/openacs
SQL> alter pluggable database save state;

Now, we will create an "openacs" user and a tablespace, and grant this user the necessary permissions:

CREATE TABLESPACE openacs  DATAFILE 'openacs_data.dbf' SIZE 100m;
CREATE USER openacs IDENTIFIED BY "openacs" DEFAULT TABLESPACE openacs TEMPORARY TABLESPACE temp;
GRANT CREATE SESSION TO openacs;
GRANT ALL PRIVILEGES TO openacs;
GRANT CTXAPP TO OPENACS;
GRANT EXECUTE ON CTXSYS.CTX_DDL TO OPENACS;

Note that the tablespace is created here with 100 MB, which is fine for simple setups, testing, etc., but you might want to alter this value according to your needs. Note also that the tablespace "openacs" is the default tablespace of the user "openacs".
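If the tablespace later turns out to be too small, it can be enlarged or switched to auto-extension without reinstalling; a minimal sketch (the sizes are just examples, the data file is the one created above):

ALTER DATABASE DATAFILE 'openacs_data.dbf' RESIZE 500M;
ALTER DATABASE DATAFILE 'openacs_data.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE 2G;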

Installing NaviServer and OpenACS on the virtual machine

The installation of NaviServer and OpenACS is based here on the scripts from naviserver-openacs. Now, install git on the VM, get the installer via git, create the user "nsadmin", deactivate PostgreSQL in the default configuration, and install NaviServer with the module "nsoracle":

sudo yum install git
git clone https://github.com/gustafn/install-ns.git
cd install-ns
sudo useradd -g nsadmin nsadmin
sudo with_postgres=0 with_postgres_driver=0 ns_modules=nsoracle ORACLE_HOME=$ORACLE_HOME bash install-ns.sh build

Now, we install OpenACS from the CVS repository. This will install OpenACS with the oacs-5-10-compat flag set (the latest published version of the oacs-5-10 branch):

sudo bash install-oacs.sh build

Now get the updated configuration file from GitHub, which has the right name of the driver, environment variables, etc. preconfigured:

openacs-config.tcl

Move it e.g. to /usr/local/ns/config-oacs-5-10-0-ora.tcl, and change "ipaddress" to "0.0.0.0", "server" to "oacs-5-10-0", and "database" to "oracle".
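These settings are plain Tcl variables near the top of the configuration file; after the changes, the relevant lines should look roughly like this (a sketch only; the exact variable names and layout may differ slightly in your copy of openacs-config.tcl):

set ipaddress 0.0.0.0
set server    "oacs-5-10-0"
set database  oracle

Now you can start OpenACS in the foreground on the virtual machine using e.g.: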

sudo /usr/local/ns/bin/nsd -u nsadmin -f -t /usr/local/ns/config-oacs-5-10-0-ora.tcl 2>&1

The server will be listening on [0.0.0.0]:8000, and we are now essentially done. You should consider forwarding port 8000 of the guest machine to, e.g., port 9000 on the host by adding the line

  config.vm.network "forwarded_port", guest: 8000, host: 9000

to the Vagrantfile (in the folder vagrant-projects/OracleDatabase/19.3.0) on the host, such that after a "vagrant halt" and "vagrant up" one can use a browser on the host to connect to OpenACS on the virtual machine.

Enjoy!

Using OpenACS with External Identity Providers

Created by Gustaf Neumann, last modified by Gustaf Neumann 24 Jul 2023, at 02:45 PM

With the forthcoming version OpenACS 5.10.1 (and the current head version of the oacs-5-10 branch), OpenACS provides support for external identity providers, which can be used in parallel with the existing OpenACS authorities. It is possible that users can

  • login alternatively via the configured authority and/or via external identity providers, or
  • login exclusively via external identity providers

The alternative login requires that the same email address is used for a user in OpenACS and on the identity provider.

Handling of unregistered users

When using alternative logins, the returned user information might match a pre-existing OpenACS user, or it might be unknown to the system. In the latter situation, two scenarios are possible:

  1. reject the unknown user
  2. create a new OpenACS user based on the information returned from the identity provider.

The default behavior of the implementation is to reject the unknown user. When the optional flag "-create_not_registered_users" is activated for an external provider, such users will be created. When creating users (e.g. in DotLRN), it is also possible to add these automatically to certain DotLRN groups (specified via "create_with_dotlrn_role"). Further behavior can be specified by extending the predefined behavior (see the example below).

 

Supported external identity providers for alternative logins

Currently, the following (OAuth 2 based) identity providers are supported:

  • Microsoft Identity Platform (based on ID-Tokens) based on Azure and Active Directory
  • GitHub

OpenACS allows defining one or more login handlers, either by defining different login-handler objects for the same identity provider (e.g. for use on different subsites) or by defining login-handler objects for different identity providers (maybe also for the same subsite).

To enable external identity providers, the package xooauth has to be installed, and one or more external identity providers must be configured both on the side of the provider service (to allow its usage for the OpenACS instance) and on the OpenACS instance. So, make sure xooauth is installed and mounted on /oauth (the mount path is relevant for the Redirect URI below).
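If xooauth is not yet mounted, this can be done via the site-map admin UI, or programmatically (e.g. from ds/shell); a minimal sketch using the site-node API:

site_node::instantiate_and_mount -node_name oauth -package_key xooauth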

Microsoft Identity Platform

 

  • Configuration on the provider side:
    • Register application via the Azure Portal
    • Set the Redirect URI type to Web and value pointing to the redirect page (e.g. YOURLOCATION/oauth/www/azure-login-handler)
    • In this process you will obtain the "tenant_id", "client_id" and a "client_secret"
    • Microsoft supports its ID token interface in versions v1.0 and v2.0. When v2.0 is configured on the OpenACS side (see below), make sure to define in the "Token configuration" the optional claims "upn", "family_name", and "given_name" (the latter two only when allowing automatic account creation).
    • Details: Add sign-in with Microsoft to a web app, Microsoft identity platform ID tokens
       
  • Configuration on OpenACS
    •   Create a login-handler object (e.g. in xooauth/tcl/oauth-init.tcl)
       
      ms::Authorize create ms::azure \
          -tenant "..." \
          -client_id "..."
      Potential further parameters:

              -responder_url "..."  (default "/oauth/azure-login-handler")
              -debug
              -create_not_registered_users
              -create_with_dotlrn_role "..."
              -after_successful_login_url "..."
              -login_failure_url "..."
       
    • The interface is based on the same infrastructure as the Microsoft Graph interface of OpenACS.
    • For the full list of parameters, see the online documentation of the ms::Authorize class.


Using GitHub as Identity Provider

 

  • Configuration on the provider side:
    • Register the application on GitHub: log in on GitHub (e.g. with your personal account), go to "Settings", "Developer Settings", "OAuth Apps", "Register a new application"
    • Set the Redirect URI: when registering the application, fill in the value of "Authorization callback URL" with YOURLOCATION/oauth/www/github-login-handler
    •  Details: Authorizing OAuth Apps
       
  • Configuration on OpenACS
    • Create a login-handler object (e.g. in xooauth/tcl/oauth-init.tcl)
       
      xo::oauth::GitHub create ::xo::oauth::github \
          -client_id "..." \
          -client_secret "..."
      
      Potential further parameters:

             -responder_url "..."   (default "/oauth/github-login-handler")
             -debug
             -create_not_registered_users
             -create_with_dotlrn_role "..."
             -after_successful_login_url "..."
             -login_failure_url "..."
    • For the full list of parameters, see the online documentation of the GitHub class.

The parameters "-debug", "-create_not_registered_users" and "-create_with_dotlrn_role ..." are common parameters and control the behavior; a combined example follows the list below.

  • When the switch "-debug" is specified, the interface page (e.g. /oauth/github-login-handler) can be used for testing and to see the provided parameters ("claims") returned from the identity provider. In testing mode, the user is not logged in. Furthermore, on the public login pages of the OpenACS instance, the login option for this external identity provider is not offered.
     
  • When the switch "-create_not_registered_users" is specified, users authorized via the external identity provider who do not yet exist as OpenACS users (matched by email address) are automatically created as new OpenACS users (automated account creation). By default, this switch is turned off.
     
  • The parameter "-create_with_dotlrn_role ..." is useful for DotLRN instances. When it is defined, new users will be created as DotLRN users with the specified role (e.g. "student").
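A combined example: a minimal sketch of a Microsoft login handler that creates unknown users automatically and registers them as DotLRN users (tenant and client_id are placeholders, the role is just an example):

      ms::Authorize create ms::azure \
          -tenant "..." \
          -client_id "..." \
          -create_not_registered_users \
          -create_with_dotlrn_role "student"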

The configuration parameters can be provided when the login-handler objects are created, or via the OpenACS configuration file. The parameters are looked up from the configuration file on a path based on the name of the login-handler object. So, when the following login-handler objects are defined

     ::ms::Authorize create ::ms::azure
     ::xo::oauth::GitHub create ::xo::oauth::github

the parameters for these objects can be specified during creation (e.g. ... -client_id "..." ...) or in the configuration file in the following sections:

      ns/server/[ns_info server]/acs/oauth/ms/azure {
         ns_param tenant "..."
         ns_param client_id "..."
         ...
      }
      ns/server/[ns_info server]/acs/oauth/github {
         ...
      }


Testing

For testing, it is recommended to define the login-handler objects with the optional "-debug" flag (see above). Add the login-handler object with the following command in the xooauth/tcl/oauth-init.tcl file:

      xo::oauth::GitHub create ::xo::oauth::github \
          -debug

This assumes that the application was registered on GitHub as described above, and that at least the parameters "client_id" and "client_secret" are defined in the OpenACS configuration file.

Then restart the server and navigate to YOURLOCATION/oauth/github-login-handler, where you will see the option to log in via GitHub. By clicking on the link, you will be redirected to GitHub for authorization, and then back to the interface page showing the claims provided by GitHub, whether the user_id exists, etc.

When the login-handler objects are created without the "-debug" flag, the login options for all created login-handler objects are listed on the register/login page of OpenACS (it is possible to omit these via a package parameter set on a subsite).

Example configuration with a custom login handler

Example for xooauth/tcl/oauth-init.tcl:

#
# Potential place for creating login-handler objects for external
# identity providers.
#
# xo::oauth::GitHub create ::xo::oauth::github \
#    -client_id "..." \
#    -client_secret "..."
#
# ms::Authorize create ms::azure \
#    -tenant "..." \
#    -client_id "..."
#

#
# This is a custom login handler, 
# - configured to create unregistered users,
# - using the "tenant" and "client_id" from the configuration file, and
# - adding some site-specific actions after a successful registration
#
ms::Authorize create ms::LoginHandler -create_not_registered_users
ms::LoginHandler object method register_new_user {
    {-first_names}
    {-last_name}
    {-email}
} -returns integer {

    # perform first the "register_new_user" action as usual
    set user_id [next]

    # on successful registrations (no exception during "next") the user_id is returned.
    # Do whatever you want with this user_id, adding it to groups, etc.
    ns_log notice "ms::LoginHandler: new registration of user $email returned user_id $user_id"

    return $user_id
}

 

E-Mail: Outgoing E-Mail

Created by Malte Sussdorff, last modified by Gustaf Neumann 06 Feb 2023, at 04:48 PM

Current documentation for ACS Mail Lite, the standard API for inbound and outbound email is at https://openacs.org/doc/acs-mail-lite/.

Recommendation

The current recommendation for sending outgoing mail is via the NaviServer module nssmtpd, since it handles large outgoing mails well and also works with more than 1000 open file descriptors. In OpenACS 5.10, set the package parameter "EmailDeliveryMode" of the acs-mail-lite package to "nssmtpd".
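The parameter can be set via the package parameter pages of acs-mail-lite, or programmatically; a minimal sketch using the standard parameter API (run e.g. from ds/shell):

set package_id [apm_package_id_from_key acs-mail-lite]
parameter::set_value -package_id $package_id -parameter EmailDeliveryMode -value nssmtpd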

Older Discussion

Outgoing e-mail handling in acs-mail-lite is at the moment split across a multitude of procedures, some duplicating the efforts of others. To clean this up, I propose the following (in general):

  • Replace acs-mail-lite::send with acs-mail-lite::complex_send, making it a wrapper for complex_send or the other way round (rename complex_send to send and so on).
  • Only support sending of e-mails via SMTP. Use smtp::sendmessage from tcllib for it (as complex_send does).
  • Daveb suggested splitting up complex_send to make it easier to test parts of it. Here are some ideas:
    • Sub-procedure to generate the to/cc/bcc lists which are used in the respective TO/CC/BCC headers. This will also clean up the sending of individual emails.
    • Have only one call to smtp::sendmessage and one hook for the complex_send callback.
    • Sub-procedure to append file tokens. Not sure if this is useful, as we need to do upvar for the tokens and could not do automated tests on them anyway. So I'd not do it.
  • Have only one sweeper (the complex_sweeper) with support for multiple mail sending servers (so you can have multiple mail senders in the cluster instead of only one).
  • Split off incoming email handling into a separate file
  • Delete acs-mail-lite-procs.tcl :-). Just kidding, but deprecate most of the procedures.

Sadly, it is not a straightforward approach, as there is a catch. Using smtp::sendmessage forces us to figure out a new way of setting the bounce e-mail address, as the old approach (setting the SMTP envelope sender "MAIL FROM" to the bounce address) will not work any more. Another option is to use the non-standard "Errors-To" header instead.

Double Click Handling

Created by Gustaf Neumann, last modified by Gustaf Neumann 02 Nov 2022, at 06:06 PM

Many users tend to click twice on links and forms, thereby causing multiple identical requests, which can kill the performance of a server, especially when these are expensive requests. On one site, we saw users firing up to several hundred identical requests, probably via a “rapid fire” function available for some mice, coming from the gaming world.

There are essentially two generic ways to handle double clicks in OpenACS:

  1. Server-side double click handling: The xotcl-request-monitor has had for a long time the feature to (optionally) block multiple identical requests from the same client. For subsequent identical requests, the user will receive the message “Repeated Operation, please resubmit”, which has some “educative” value. It has several heuristics built in to allow certain double requests (e.g., for embedded resources).
     
  2. Client-side double click handling: Via HTML markup, one can disable the same request for a certain time on the client side (i.e., in the browser). During this time, double clicks are blocked (the button/link is disabled), such that true double clicks or “rapid fire” functions are mitigated; the request is executed only once. A potential problem is confusion when a user wants to open the same page in rapid sequence in multiple tabs or windows. Furthermore, repeated requests from bots are not mitigated. This is a new feature which will be part of the OpenACS 5.10.1 release.

Client-side double click handling is activated by a content developer for a link prone to this concern by adding the CSS class “prevent-double-click” to the HTML anchor or button. The double click prevention deactivates a button or an anchor element for a short time after clicking (per default for 2s) and ignores further clicks in this time window. The default time window can be specified via the package parameter DefaultPreventDoubleClickTimeoutMs of acs-templating; the time window can also be specified for every occurrence via the data attribute “oacs-timeout”. When a double click occurs, the duplicate click is detected on the client side (in the browser), such that the first request continues to run and renders its result when finished. Without double click prevention (or with double click prevention on the server side), the results of the first request are lost for the client, although they are still computed on the server side.

Usage Example:

    <p>text with a <a class="prevent-double-click" href="/slow.tcl?t=10s">link</a>
    <p>
    <form action="/slow.tcl">
    <input class="btn prevent-double-click" data-oacs-timeout="4000" type="submit" value="Submit">
    ...
    </form>

Setup with docker-s6

Created by Gustaf Neumann, Vlad V, last modified by Vlad V 22 Sep 2022, at 11:20 PM

Basic installation

A quick way to install and test-drive OpenACS is via docker. The s6 docker images were developed by Vlad and use the docker overlay structures as supported by docker-compose, which is a tool for defining and running multi-container Docker applications.

The minimal setup for testing is to get the repository

git clone https://github.com/oupfiz5/openacs-s6
cd openacs-s6/src

and to run it there, e.g.

docker compose up

This command installs OpenACS (oacs-5-10) from GitHub, PostgreSQL (14.1), and NaviServer (4.99.23), and starts the server on port 8080. We are planning to merge and probably move this repository to the OpenACS project on GitHub.

More advanced installations

In some cases, a developer might need more control over the versions used, running multiple different OpenACS versions and instances in parallel, etc. Docker allows composing newly configured setups more or less on the fly by specifying more command line options, or via .env files. Such a setup can be provided by placing the docker-compose setup in a common place on the machine and by creating multiple .env files.

git clone https://github.com/oupfiz5/openacs-s6 /var/www/openacs/docker-s6
cd /var/www/openacs/docker-s6/src

Create in this directory e.g. a file named oacs1.env with the following content:

# * Define port and project name
COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-oacs-8071}"

# * Using potentially different compose files
COMPOSE_FILE="${COMPOSE_FILE:-docker-compose.yaml}"

# * Common
TZ="${TZ:-UTC}"

# * Postgres
PGDATABASE="${PGDATABASE:-openacs}"
PGHOST="${PGHOST:-postgres}"
PGPASSWORD="${PGPASSWORD:-testing}"
PGPORT="${PGPORT:-5432}"
PGUSER="${PGUSER:-openacs}"
POSTGRES_DB="${PGDATABASE}"
POSTGRES_PASSWORD="${PGPASSWORD}"
POSTGRES_REPOSITORY="postgres"
POSTGRES_TAG="14.1-alpine"
POSTGRES_USER="${PGUSER}"

# * OpenACS
NS_CONF="${NS_CONF:-/usr/local/ns/conf/openacs-config.tcl}"
OACS_REPOSITORY="${OACS_REPOSITORY:-oupfiz5}"
OACS_IMAGE_NAME="${OACS_IMAGE_NAME:-openacs-s6}"
OACS_IMAGE_TAG="${OACS_IMAGE_TAG:-v3-oacs-5-10}"
OACS_LISTEN_PORT="${OACS_LISTEN_PORT:-8071}"
oacs_httpport='8000'
oacs_db_user="${PGUSER}"
oacs_db_passwod="${PGPASSWORD}"
oacs_db_name="${PGDATABASE}"
oacs_db_host="${PGHOST}"
oacs_db_port="${PGPORT}"
oacs_ipaddress='0.0.0.0'

Important variables are the listen port (OACS_LISTEN_PORT) and the project name (COMPOSE_PROJECT_NAME), the latter for running multiple instances at the same time without conflicts. In order to install versions other than the default (v3-oacs-5-10), the variable OACS_IMAGE_TAG can be used. See the README file on GitHub (openacs-s6) for more details. For multiple instances, multiple such .env files can be created.

To use the .env file oacs1.env, one can run it as follows

(cd /var/www/openacs/docker-s6/src; docker compose --env-file oacs1.env up)

or run it in detached mode by adding "-d" after "up".
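For example, starting the same instance detached:

(cd /var/www/openacs/docker-s6/src; docker compose --env-file oacs1.env up -d)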

Preliminaries

To use this setup, recent versions of docker and docker-compose are needed. There is constant development, especially on docker-compose. The setup is tested with docker-compose 1.27 (as provided by MacPorts on macOS) and with docker compose v2.10.2 under Linux. For docker compose under Linux, it is usually sufficient to get the binary (a single file) from the Docker download pages.
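To check which versions are installed (the older standalone tool reports its version via "docker-compose --version"):

docker --version
docker compose version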

https://docs.docker.com/
https://docs.docker.com/compose/

Interfacing with MS Teams and related services (Microsoft Graph)

Created by Gustaf Neumann, last modified by Gustaf Neumann 25 Mar 2022, at 02:14 PM

The xooauth package supports a basic interface for the Microsoft Graph API, which can be used e.g. with Microsoft Teams.

These interface classes support conversion from/to JSON and to the url-encoded calling patterns on the fly, just by specifying the Tcl variable names with minor annotations (somewhat similar to the export_vars interface). Furthermore, the interface supports pagination: some Microsoft Graph API calls return by default just a subset of the results (e.g. the first 100). To obtain all results, multiple REST calls have to be issued to get the full result set. Via this interface, one can specify the desired maximum number of entries.
 
Furthermore, the API supports async operations (create/clone/archive/unarchive teams), where the behavior can be tailored via the options "-wait" and "-donecallback".
 
To use the Microsoft Graph API, an "app" has to be registered/configured/authorized/... [1,2,3] by an administrator of the organization before an access token [4] can be obtained from the Microsoft identity platform. The access token contains information about your app and the permissions it has for the resources and APIs available through Microsoft Graph. This interface is based on access tokens [4] and the /token endpoint [1] ("Get access without a user") and assumes one has already obtained the client_id and client_secret to configure this service this way. In theory, this API will later allow switching to newer versions of the Microsoft Graph API (currently post 1.0) when these come out.
 
The interface is written in an ensemble style, where commands on the same kind of objects share a common prefix (examples are "group", "team", "user", "application", ...; see below for more details). It follows the Microsoft naming conventions strictly and makes it straightforward to extend the interface in the future. The implementation is part of the xooauth package (see also [5]).
 
[1] https://docs.microsoft.com/en-us/graph/auth-v2-service
[2] https://docs.microsoft.com/en-us/graph/auth/auth-concepts
[3] https://docs.microsoft.com/en-us/graph/auth-register-app-v2
[4] https://oauth.net/id-tokens-vs-access-tokens/
[5] https://openacs.org/api-doc/package-view?version_id=5659574&public_p=1&about_package_key=&kind=procs

===========================================================================


ms::app pp ?-list? ?-prefix /value/? /dict/

ms::app application get /application_id/ ?-select /value/?
ms::app application list ?-count /value/? ?-expand /value/? ?-filter /value/? ?-orderby /value/? ?-search /value/? ?-select /value/? ?-top /integer/?

ms::app chat get /chat_id/
ms::app chat messages /chat_id/ ?-top /integer/?

ms::app group deleted ?-count /value/? ?-expand /value/? ?-filter /value/? ?-orderby /value/? ?-search /value/? ?-select /value/? ?-top /integer/?
ms::app group get /group_id/ ?-select /value/?
ms::app group list ?-count /value/? ?-expand /value/? ?-filter /value/? ?-orderby /value/? ?-search /value/? ?-select /value/? ?-max_entries /value/? ?-top /integer/?
ms::app group member add /group_id/ /principals/
ms::app group member list /group_id/ ?-count /value/? ?-filter /value/? ?-search /value/? ?-max_entries /value/? ?-top /integer/?
ms::app group member remove /group_id/ /principal/
ms::app group memberof /group_id/ ?-count /value/? ?-filter /value/? ?-orderby /value/? ?-search /value/?
ms::app group owner add /group_id/ /principal/
ms::app group owner list /group_id/
ms::app group owner remove /group_id/ /user_id/

ms::app team archive /team_id/ ?-shouldSetSpoSiteReadOnlyForMembers /value/? ?-donecallback /value/? ?-wait?
ms::app team channel list /team_id/ ?-filter /value/? ?-select /value/? ?-expand /value/?
ms::app team clone /team_id/ ?-classification /value/? ?-description /value/? -displayName /value/ ?-mailNickname /value/? -partsToClone /value/ ?-visibility /value/? ?-donecallback /value/? ?-wait?
ms::app team create ?-description /value/? -displayName /value/ ?-visibility /value/? -owner /value/ ?-donecallback /value/? ?-wait?
ms::app team delete /team_id/
ms::app team get /team_id/ ?-expand /value/? ?-select /value/?
ms::app team member add /team_id/ /principal/ ?-roles /value/?
ms::app team member list /team_id/ ?-filter /value/? ?-select /value/?
ms::app team member remove /team_id/ /principal/
ms::app team unarchive /team_id/ ?-donecallback /value/? ?-wait?

ms::app user get /principal/ ?-select /value/?
ms::app user list ?-select /value/? ?-filter /value/? ?-max_entries /value/? ?-top /value/?
ms::app user me ?-select /value/? ?-token /value/?
ms::app user memberof /principal/ ?-count /value/? ?-filter /value/? ?-orderby /value/? ?-search /value/?

ms::app run_donecallback /location/ /callback/
ms::app schedule_donecallback /secs/ /location/ /callback/
ms::app token ?-grant_type /value/? ?-scope /value/? ?-assertion /value/? ?-requested_token_use /value/?

Example Usage

#
# Create the interface object for a tenant (named here ms::app).
# For interacting with multiple tenants, define multiple application
# interface objects.
#
::ms::Graph create ms::app \
    -tenant ... \
    -client_id ... \
    -client_secret ... \
    -version v1.0

#
# Get the Teams UID for a user (here via email, actually the userPrincipalName)
#
set user_info [ms::app user get gustaf.neumann@wu.ac.at]
set user_id [dict get $user_info id]

:  ba34495a-fd40-4c82-bc7b-1f7c778fec34

#
# Get information about a user. We use for output formatting a
# pretty-printer to provide a more readable format of the dict
# structures returned by the Microsoft graph API:
#
ms::app pp [ms::app user get gustaf.neumann@wu.ac.at]

: @odata.context: https://graph.microsoft.com/v1.0/$metadata#users/$entity
: businessPhones: {...}
: displayName: Neumann, Gustaf
: givenName: Gustaf
: jobTitle: null
: mail: Gustaf.Neumann@wu.ac.at
: mobilePhone: ....
: officeLocation: D2.2.034
: preferredLanguage: null
: surname: Neumann
: userPrincipalName: gustaf.neumann@wu.ac.at
: id: ba34495a-fd40-4c82-bc7b-1f7c778fec34

#
# One can get more information by specifying additional "select"
# attributes, such as e.g. "department" and others (for details, see
# https://docs.microsoft.com/en-us/graph/api/resources/user?view=graph-rest-1.0#properties)

ms::app pp [ms::app user get gustaf.neumann@wu.ac.at -select id,department,companyName,identities,mySite,streetAddress]

: @odata.context: https://graph.microsoft.com/v1.0/$metadata#users(id,department,companyName,identities,mySite,streetAddress)/$entity
: id: ba34495a-fd40-4c82-bc7b-1f7c778fec34
: department: Wirtschaftsinformatik und Neue Medien
: companyName: WU-WIEN
: streetAddress: Welthandelsplatz 1
: mySite: https://wu-my.sharepoint.com/personal/gustaf_neumann_wu_ac_at/
: identities: {signInType userPrincipalName issuer wu.onmicrosoft.com issuerAssignedId gustaf.neumann@wu.ac.at}

#
# Get a list of certain users. We use for output formatting a
# pretty-printer to provide a more readable format of the dict
# structures returned by the Microsoft graph API:
#
ms::app pp [ms::app user list -select id,displayName,userPrincipalName -filter "startsWith(displayName,'Neumann')"]

: @odata.context: https://graph.microsoft.com/v1.0/$metadata#users(id,displayName,userPrincipalName)
: value:
:    id: ba34495a-fd40-4c82-bc7b-1f7c778fec34
:    displayName: Neumann, Gustaf
:    userPrincipalName: gustaf.neumann@wu.ac.at
:
:    id: 4e2b2b37-6c50-4367-9209-bd7392f2e115
:    displayName: Neumann, Lore
:    userPrincipalName: lore.neumann@wu.ac.at

#
# Return the first 10 users. Per default, Microsoft Graph returns the
# first 100 entries. By specifying -max_entries, the interface issues
# potentially several requests for returning the desired amount.
#
ms::app pp [ms::app user list -select id,displayName,userPrincipalName -max_entries 10]

#
# List the first 10 teams/groups
#
ms::app pp [ms::app group list -select id,displayName -max_entries 10]

#
# Get some attributes about a set of teams
#
ms::app pp [ms::app group list -select id,displayName -filter "startsWith(displayName,'TLF')"]

:
: @odata.context: https://graph.microsoft.com/beta/$metadata#groups(id,displayName)
: value:
:    id: b78e7642-...
:    displayName: TLF-TEAM

#
# Get detail info from a team
#
set team_id b78e7642-...
ms::app pp [ms::app team get $team_id]

#
# Get members of a team
#
ms::app pp [ms::app group member list $team_id]

#
# Get owners of a team
#
ms::app pp [ms::app group owner list $team_id]

#
# Add member to a team
#
ms::app group member add $team_id gustaf.neumann@wu.ac.at

#
# Get channels of a team
#
ms::app pp [ms::app team channel list $team_id]

#
# Delete a team
#
ms::app team delete 85f3d2d2-c2d3-...

#
# List deleted groups/teams
#
ms::app pp [ms::app group deleted -filter "startsWith(displayName,'Testing community')"]

: @odata.context: https://graph.microsoft.com/v1.0/$metadata#groups(id,displayName,deletedDateTime)
: value:
:     id: c0030714-656d-4bbe-8d4e-507e73d6f643
:     displayName: Testing community 3
:     deletedDateTime: 2021-10-22T09:14:08Z

 

New Interface for Calling Database functions

Created by Gustaf Neumann, last modified by Gustaf Neumann 10 Mar 2022, at 09:34 AM

Scheduled for OpenACS 5.10.1, there is a new interface available that provides several improvements over the current solutions:

The new interface ...

  • is significantly faster than the existing official OpenACS interface (package_exec_plsql) and easier to use by supporting a standard calling interface (not the special "var_list" which has to be assembled for package_exec_plsql)
  • offers about the same performance as the "::xo::db::sql::*" interface without sharing its disadvantages (being a separate pattern, supporting just one backend, ...)
  • is more secure (thorough argument checking on the Tcl and bind-vars level), and
  • more flexible (works with multiple databases and multiple driver types), and
  • much more feature-complete (e.g. support for functions returning tables, etc.)

For more details, check below. The plan is to replace the "::xo::db::sql::*" interface in a first step, to replace calls to package_exec_plsql in the supported packages, and to mark the obsolete functions as deprecated. In the near future, more features of the xo::db interface will be added. Replacing the plain, ugly old OpenACS interface for calling database functions had the highest priority.

Implemented features:

  • SQL functions can be called for multiple database connection types (driver types nsdb and nsdbi, backend types PostgreSQL and Oracle) with less overhead (in essence, a value-added, better performing replacement of "package_exec_plsql"). The interface provided via xo::db supported just a single combination of the above.

  • Support for PostgreSQL and Oracle

  • Ability to talk to multiple databases from the same OpenACS instance. These databases can be

    • multiple databases of the same driver and backend  (e.g. multiple PostgreSQL databases)
    • multiple databases using different drivers (e.g. nsdb and nsdbi)
    • multiple databases with different backends vendors (e.g. PostgreSQL and Oracle).

    The selection of the backend happens via the standard "-dbn" parameter. For the nsdb driver, one can use e.g. "-dbn legacy"; an example for nsdbi is "-dbn dbi1", where the value provided via "-dbn" is passed for uniformity to the "-db" option of the nsdbi API.

  • Support for functions returning composite SQL types (SQL type "record" in PostgreSQL or "table" in Oracle). Results are returned as lists of lists. This feature is implemented and tested for PostgreSQL connected via the nsdb and nsdbi drivers and for Oracle via nsdb.

  • SQL functions returning "void" can be called as well. In the Oracle case, these are "procedures", which have to be called differently.

  • Additional (Tcl-level) argument checking is provided before calling into SQL for improved security and documentation. The API handles among other types the SQL types "integer" and "bigint".

  • Arguments of SQL functions are passed to the database via bind variables (implemented for PostgreSQL with nsdb and nsdbi). This is a security improvement over the previous  implementation in xotcl-core.

  • Standard default handling (argument default values like for all Tcl procs, although the way how defaults are handled is different in PostgreSQL and Oracle).

  • Optional arguments passed in as empty strings are treated as NULL values (standard behavior in OpenACS)

  • Dropped shortcomings of xo::db interface (naming and calling conflicts).

  • Independent of xotcl-core

 Possible further extensions:

  • Argument types could/should be displayed in the API browser (general feature request, not specific to the new DB interface)

Usage:

At startup, a single database interface named "::acs::dc" is created, which takes the parameters from the default setup of the OpenACS configuration.

The following command creates a second database interface based on the "nsdbi" driver and defines the interface stubs for the nsdbi driver.

::acs::db::require_dc -driver nsdbi -name ::acs::dc2
::acs::dc2 create_db_function_interface

For specifying a different backend, one could use e.g.:

::acs::db::require_dc -backend oracle -name ::acs::dc3

SQL query to a second DB (db pool called "legacy") via nsdb interface:

::acs::dc list_of_lists -dbn legacy get_list {select 1 from dual}

 Call database function from a second DB (db pool called "legacy") via nsdb driver:

::acs::dc call content_item get_latest_revision -dbn legacy -item_id 33357

 Call database function from a second DB via nsdbi driver named "dbi1"

::acs::dc2 call content_item get_latest_revision -dbn dbi1 -item_id 33357

 Return multiple records via nsdb

::acs::dc call content_item get_children -item_id 169303

 Return multiple records via nsdbi

::acs::dc2 call content_item get_children -item_id 169303

 Some calls with performance data (for ds/shell):

lappend _ [package_exec_plsql -var_list {{item_id 33357}} content_item get_latest_revision]
lappend _ [::xo::db::sql::content_item get_latest_revision -item_id 33357]
lappend _ [::acs::dc call content_item get_latest_revision -item_id 33357]
lappend _ [::acs::dc2 call content_item get_latest_revision -item_id 33357]

lappend _ [time {package_exec_plsql -var_list {{item_id 33357}} content_item get_latest_revision} 1000]
lappend _ [time {::xo::db::sql::content_item get_latest_revision -item_id 33357} 1000]
lappend _ [time {::acs::dc call content_item get_latest_revision -item_id 33357} 1000]
lappend _ [time {::acs::dc2 call content_item get_latest_revision -item_id 33357} 1000]
join $_ \n

The results as displayed by ds/shell

33358
33358
33358
33358
402.096544 microseconds per iteration
190.805835 microseconds per iteration
187.885927 microseconds per iteration
171.99336300000002 microseconds per iteration

 

Server-sent events

Created by Gustaf Neumann, last modified by Gustaf Neumann 14 Jan 2022, at 09:29 PM

Server-Sent Events (SSE) is a server push technology enabling a client to receive automatic updates from a server via an HTTP connection, and describes how servers can initiate data transmission towards clients once an initial client connection has been established.

A simple approach for implementing SSE on OpenACS was discussed in a forum thread.

The example below shows how to use SSE to associate a background job with the client. This can be used, e.g., when executing a longer-running job in the background, to keep the client incrementally up to date about what is currently happening. This can be seen as an alternative to streaming HTML.

Page with associated background activity

ad_page_contract {

    Sample page for emulating streaming HTML via SSE (server side
    events) for a background job. In this example, the same page is
    used as the event sink (where the events are displayed) and as an
    event source (when called with a session_id).

    @author Gustaf Neumann
} {
    {session_id:integer ""}
}

set title "Sample HTML streaming job page (SSE)"
set context $title

if {$session_id ne ""} {
    #
    # We are called by the event handler from JavaScript. This block
    # could be a different, generic script, but we keep it here
    # together for reducing number of files.
    #

    #
    # Set up SSE: return headers and register reporting channel for
    # the session_id.
    #
    set channel [ns_connchan detach]
    ns_connchan write $channel [append _ \
                                    "HTTP/1.1 200 OK\r\n" \
                                    "Cache-Control: no-cache\r\n" \
                                    "X-Accel-Buffering': no\r\n" \
                                    "Content-type: text/event-stream\r\n" \
                                    "\r\n"]
    sse::channel $session_id $channel
    ad_script_abort
} else {
    #
    # We are called as an ADP page.
    #
    set session_id [ns_conn id]

    #
    # Register the SSE event handler in JavaScript.
    #
    template::head::add_script -script [subst {
        if(typeof(EventSource) !== "undefined") {
            var sse = new EventSource("[ns_conn url]?session_id=$session_id");
            sse.onmessage = function(event) {
                if ('__CLOSE_SSE__' == event.data) {
                    sse.close(); // stop retry
                } else {
                    document.getElementById("result").innerHTML += event.data;
                }
            };
        } else {
            document.getElementById("result").innerHTML = "Sorry, your browser does not support server-sent events...";
        }
    }]

    #
    # Run some job in the background and report updates via SSE to the
    # current page. The session_id is used to associate the background
    # job with the client.
    #
    sse::job $session_id {
        foreach i {1 2 3 4} {
            set HTML "<li>finish $i: ...([person::name -person_id [ad_conn user_id]])</li>"
            sse::message -localize $session_id $HTML
            ns_sleep 1s
        }
        sse::close $session_id
    }

    ad_return_template
}

Corresponding ADP page:

@title;literal@
@context;literal@
<h1>Getting updates from job associated with session_id @session_id@</h1> 
<ul> </ul>

Library support

Save this under e.g. acs-tcl/tcl/sse-procs.tcl

#############################################################################
# Simple API for SSE messages via a SESSION_ID, e.g. via a job running
# in the background.
#
# Gustaf Neumann fecit
#############################################################################
namespace eval sse {

  #
  # sse::job
  #
  ad_proc ::sse::job { {-lang} session_id script} {
    
    Execute some (long running) script in the background.  The
    background script is associated via the session_id with the client
    and can send messages to it via sse::message. When done, the job
    should issue sse::close.
    
    @param lang language used for internationalization. If not set,
           taken from the calling environment
    @param session_id associated session_id, used for reporting back
    @param script script to be executed in the background
    
  } {
    if {![info exists lang]} {
      set lang [uplevel {ad_conn locale}]
    }
    nsv_set sse_dict $session_id \
        [list \
             lang $lang \
             ad_conn [list array set ::ad_conn [array get ::ad_conn]] \
             xo::cc [expr {[info commands ::xo::cc] ne "" ? [::xo::cc serialize] : ""} ] \
            ]
    ns_job queue -detached sse-jobs \
        [list ::apply [list session_id "sse::init_job $session_id\n$script"] $session_id]
  }

  #
  # sse::message
  #
  ad_proc ::sse::message { {-localize:boolean} session_id msg} {

    Send a message to the client. In case the channel
    was not yet registered, buffer the message in an nsv dict.
    
    @param localize Perform localization of the provided text
    @param session_id associated session_id, used for associating output session
    @param msg The message to be sent back
  } {
    if {$localize_p} {
      set msg [lang::util::localize $msg [nsv_dict get sse_dict $session_id lang]]
    }
    
    if {[nsv_dict get -varname channel sse_dict $session_id channel]} {
      #
      # The channel is already set up.
      #
      if {[nsv_dict get -varname messages sse_dict $session_id messages]} {
        #
        # The "nsv_dict unset" poses a potential race condition,
        # ... but not for the job application scenario.
        #
        nsv_dict unset sse_dict $session_id messages
        #ns_log notice "--- sse $session_id get buffered messages '$messages'"
      } else {
        set messages ""
      }
      
      lappend messages $msg
      foreach message $messages {
        ns_log notice "--- sse $session_id message '$message'"
        #
        # SSE needs handling of new-lines in the data field; here we
        # send two messages in this case. In other cases, maybe coding
        # newline as literally \n and decoding it on the client might
        # be appropriate.
        #
        ns_connchan write $channel \
            [string cat "data:" [join [split $message \n] "\ndata:"] "\n\n"]
        
        if {$message eq "__CLOSE_SSE__"} {
          ns_connchan close $channel
          nsv_unset sse_dict $session_id 
        }
      }
    } else {
      #
      # Buffer the message.
      #
      #ns_log notice "--- sse $session_id must buffer message '$msg'"
      nsv_dict lappend sse_dict $session_id messages $msg
    }
  }
  
  #
  # sse::channel: register a channel for the session id. This is
  # typically called by the event handler, which is called from
  # JavaScript.
  #
  proc ::sse::channel {session_id channel} {
    nsv_dict set sse_dict $session_id channel $channel
  }

  #
  # sse::init_job: setup a context quite similar to a connection thread
  #  
  proc ::sse::init_job {session_id} {
    set sse_dict [nsv_get sse_dict $session_id]
    eval [dict get $sse_dict ad_conn]
    eval [dict get $sse_dict xo::cc]    
  }
  
  #
  # sse::close: provide an API for closing the session
  #
  proc ::sse::close {session_id} {
    ::sse::message $session_id __CLOSE_SSE__
  }
  
  if {"sse-jobs" ni [ns_job queues]} {
    ns_job create sse-jobs 4
    nsv_set sse_dict "" ""    
    #ns_job configure -jobsperthread 10000
  }
}
#
# Local variables:
#    mode: tcl
#    tcl-indent-level: 2
#    indent-tabs-mode: nil
# End:

In case you are running NGINX and you are experiencing problems with SSE, check out this entry on Stack Overflow.

 

Streaming HTML

Created by Gustaf Neumann, last modified by Gustaf Neumann 11 Sep 2021, at 09:58 AM

Streaming HTML can be used to output HTML content to a user incrementally. This is in particular useful for pages with longer response times, to inform the user during processing about the progress of the tasks.

Newer OpenACS versions come with templates for the standard themes for streaming pages. The template for streaming pages can be retrieved for the current theme via the API call "template::streaming_template". An application developer should structure streaming HTML pages as follows:

Output top of page:

set context ...
set title ...
ad_return_top_of_page [ad_parse_template \
    -params [list context title] \
    [template::streaming_template]]

Output HTML incrementally:

# Assume, we are collecting HTML in the Tcl variable HTML.
# Send this HTML chunk incrementally to the user
ns_write [lang::util::localize $HTML]

Flush Output from body scripts:
(e.g. template::add_body_script, template::add_event_listener, template::add_body_handler, template::add_script)

ns_write [template::collect_body_scripts]

End of Page:

# Optionally
ns_write [lang::util::localize [template::get_footer_html]]
ns_write [template::collect_body_scripts]

Full sample script:
Putting everything together

set title "Sample HTML streaming page"
set context [list $title]
ad_return_top_of_page [ad_parse_template \
                           -params [list context title] \
                           [template::streaming_template]]
ns_write "<ul>\n"
foreach i {1 2 3 4} {
    set HTML "<li>finish $i: ...</li>\n"
    ns_write [lang::util::localize $HTML]
    ns_write [template::collect_body_scripts]
    ns_sleep 1s
}
ns_write "</ul>\n"
ns_write "<p>Done.</p>"

ns_write [lang::util::localize [template::get_footer_html]]
ns_write [template::collect_body_scripts]

See this sample script in action: https://openacs.org/streaming

Caveat: Windows PCs that have the current (Sept 2020) version of Bitdefender (antivirus software) installed with HTTP traffic scanning activated do not show the incremental rendering of the page, but just the full page after it has finished. This might be seen as a problem of Bitdefender and affects streaming HTML from all sites.

When to use URLencode

Created by Gustaf Neumann, last modified by Gustaf Neumann 09 Aug 2021, at 12:31 PM

In general, HTTP requires that URLs are properly encoded. This is not a big issue when just plain ASCII characters without special characters are used in URL paths and query variables. However, when the framework allows the end-user to define URLs as well (such as in xowiki and derivatives), one has to be careful when extending the framework. The section below refers to the behavior in OpenACS 5.10; earlier versions might differ in some details.

URLs in HTML

When URLs are embedded in HTML (href, src, ...), the URL must first be URL-encoded and then HTML-quoted to be on the safe side from the point of view of HTML. Most characters problematic for HTML are encoded by ns_urlencode, but the single quote is not (this is not a bug, but according to the specs, RFC 3986).

ns_urlencode {<a> 'b' "c" &}
# returns:  %3ca%3e+'b'+%22c%22+%26

Only when it can be guaranteed that the URL contains no "funny characters" can the URL encoding be omitted. Note that double encoding with ns_urlencode leads to over-quoting in the same way as double encoding with ns_quotehtml.
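To illustrate the over-quoting: when an already encoded value is encoded a second time, the percent signs introduced by the first pass are encoded again.

ns_urlencode [ns_urlencode {<a>}]
# returns:  %253ca%253e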


Return_urls

In general, return_urls have to be properly URL-encoded according to the HTTP specs. Setting these URLs is more complex, since one has to be aware whether or not a URL was already encoded before passing it as a return URL.

Here is a short guideline:

1) Query functions always return URLs URL-decoded

  • ns_conn url
  • ad_conn url
  • xo::cc url

2) Output functions return URLs URL-encoded per default

  • export_vars (input parameter "-base" has to be unencoded)
  • ad_return_url
  • :pretty_link

3) Redirect operations have to receive encoded input

  • ad_returnredirect
  • ns_returnredirect
  • :returnredirect

4) Query-variables

When setting query variables containing URLs, these should already be URL-encoded:

   #
   # Setting a query variable
   # 
   set return_url [ad_return_url]
   set url [export_vars -base . {return_url}]

   #
   # Using query-variable as default value in xo* package
   #
   ad_returnredirect [:query_parameter return_url:localurl [ad_return_url]]

   #
   # Usage in classical OpenACS
   #
   ...
   # get return_url e.g. via page_contract
   ...
   if {[info exists return_url] && $return_url ne ""} {
       ad_returnredirect $return_url
   }

 
