Forum OpenACS Development: Re: Sub-package site nodes API thought

Posted by Jerry Asher on
We don't want completely ugly URLs.  We don't want cookie-based context management.  I don't understand the window-cloning URL problem, though I do agree it is vital to be able to surf with two windows, and to take the URL from one window, paste it into another, and go from there.

If I understand this thread, I appreciate the context-bar-as-push-down-list argument and the need to be able to capture as much information as required in the PDL (and possibly to operate on it on the way up).

There is also the question of emailing URLs.  What happens when one of these URLs is emailed out and later returned to?  It's the URL session-capturing problem again.

Can we handle this on the server side?  Do a trick and create a server-side database mapping the URL PDL + new URL + new params to a tinyURL?  Then just pass each new tinyURL key around as a URL appendage?

Say we provide an API that assigns a tinyURL to any URL, and we do that just by numbering each request, rendering the number in base-62 {[0-9][A-Z][a-z]}.  62^6 is about 56 billion keys, or 7 years of AOL demand circa the WTR publishing date.  Such a tinyURL key would look something like 2X78be; um, but at certain points it would also contain ufark, so beware.
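A minimal sketch of that numbering scheme (Python rather than Tcl, purely for illustration; the function name `encode_base62` is my own, not anything in ACS):

```python
# Alphabet in the order given above: digits, uppercase, lowercase.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def encode_base62(n: int) -> str:
    """Render a non-negative request counter as a base-62 key."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, r = divmod(n, 62)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

# Six characters cover 62**6 = 56,800,235,584 keys -- the
# "56 billion hits" figure above.
```

So the one-billionth request would get a short key like the 2X78be example, and the counter alone guarantees uniqueness with no hashing.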

This database doesn't get terribly large, as we can throw out all entries older than a session lifetime (or maybe the max lifetime of current sessions, or older than 24 hours...).

[Hashing actually provides a repeatability function that we don't want: f('http://a') will always return the same result.  I don't think we want that; we want to be able to detect and toss out old requests.]

The request processor can detect the URL appendage (maybe it has some ACS-unique location, or a uniqifier; I hate that word, but I love the way it sounds and its function).  The request processor can detect mailed URLs in that they come in without a session cookie, and if they don't have one, it removes the URL appendage.
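A sketch of that filter step (Python for illustration; the parameter name `ctx` for the appendage and the function `filter_request` are assumptions, not actual request-processor code):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

CONTEXT_PARAM = "ctx"   # assumed name for the tinyURL appendage

def filter_request(url: str, has_session_cookie: bool) -> str:
    """Strip the context key from requests arriving without a
    session cookie (i.e. mailed-around URLs)."""
    parts = urlsplit(url)
    qs = parse_qsl(parts.query)
    if not has_session_cookie:
        qs = [(k, v) for k, v in qs if k != CONTEXT_PARAM]
    return urlunsplit(parts._replace(query=urlencode(qs)))
```

A URL pasted into email and visited cold thus degrades gracefully to a plain, context-free request instead of resurrecting someone else's stale context.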

This doesn't need to be stored in the database; NSVs should be sufficient.  Worst case, a server crashes and the visitor's context is lost.  (Hell, when the server comes back up we're probably going to make her log in again anyway.)  In a clustered environment, the usual cluster solutions come into play: store it in the db for programmer ease, or pass it around the cluster ourselves for performance.

And then we provide an API for any page to register the new URLs, or lookup the old ones.

f(URL, query params, tinyURLold) => tinyURLnew
g(tinyURLnew) => URL, query params, tinyURLold
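The f/g pair could be sketched like this (Python for illustration; `f`, `g`, the 24-hour `MAX_AGE`, and the in-memory dict standing in for NSVs are all assumptions layered on the idea above):

```python
import itertools
import time

ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def encode_base62(n: int) -> str:
    """Render a non-negative request counter as a base-62 key."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, r = divmod(n, 62)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

_counter = itertools.count(1)
_store = {}            # tiny key -> (url, params, old key, created-at)
MAX_AGE = 24 * 3600    # prune anything older than a session lifetime

def f(url, params, old_key=None):
    """Register URL + query params + previous tiny key; return a new key."""
    key = encode_base62(next(_counter))
    _store[key] = (url, params, old_key, time.time())
    return key

def g(key):
    """Look up a tiny key; unknown or expired keys raise KeyError."""
    url, params, old_key, created = _store[key]
    if time.time() - created > MAX_AGE:
        del _store[key]      # toss out old requests
        raise KeyError(key)
    return url, params, old_key
```

Each page call to f pushes one frame onto the PDL, and walking g's old-key chain backwards pops the context on the way up.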

It may be a bad idea to store every query param in there.  Do we want the user_id stored in there as well?  Maybe not.  As a recommendation, keep user_id and similar privacy-sensitive parameters (which ones, exactly?) out of that db.

It's 1:14PM, do I need more coffee?