Forum OpenACS Q&A: how to config for multiple aolserver instances

Hi all,

I know there are a bunch of past threads related to what is essentially virtual hosting, and I've read through a good chunk of them. I want to set up a dev environment and a production environment on the same box - production serving pages on www.physworx.com port 80, dev serving pages on dev.physworx.com port 8000 (or should it be www.physworx.com port 8000?). I've been reading through the cvs documentation to better understand what we do during the cvs portion of the OpenACS install - it helped me figure out how to use the repository to create an additional OpenACS instance in /web (production is physworx, dev is physdev) by checking out from /cvsroot. I'll write up a doc on the 'least one needs to know' to work from a dev checkout and then, once tested, merge back to the repository and update the production instance.

So here's what I have - two checkouts of the OpenACS repository in /web: one called physworx, one called physdev, each with its own db as well. www.physworx.com serves up on port 80, no problem. I'm trying to figure out how to start a second instance of AOLserver to serve dev.physworx.com on port 8000. Based on the threads I was reading, it seems that this is the way to go for what I'm trying to accomplish. Unfortunately, no thread really explained how to do so. I've put a config file with the proper ports, server names and whatnot in each. physworx is set up with daemontools and starts with no problems. After starting it, I try to start another instance with /usr/local/aolserver/bin/nsd-postgres -t /web/physdev/etc/config.tcl. No errors pop up and I think an instance starts, but no pages come up at dev.physworx.com:8000.

Can anyone offer guidance?

Posted by Andrew Piskorski on
Brad, I think you're confusing several different things. The simplest thing is to serve your Production site at www.physworx.com port 80 and Dev at www.physworx.com port 8000 (or whatever port you want). This involves no virtual hosting of any kind, and no special software or configuration. All you do is put the proper host names and port numbers in your AOLserver config file. Well, you should also go to the /admin/site-map/ UI in each OpenACS instance and, for the ACS Kernel, set SystemURL as appropriate - e.g., "http://www.physworx.com:8000" for Dev. I think that's it.
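
For illustration, the port lives in the nssock section of the AOLserver config file. A minimal sketch (parameter names per the stock nssock driver, ${server} as in the standard OpenACS config.tcl; treat the values as placeholders):

    # Dev instance: same host name, non-standard port (sketch only)
    ns_section "ns/server/${server}/module/nssock"
    ns_param   hostname  www.physworx.com
    ns_param   port      8000

With that in place, SystemURL for Dev becomes "http://www.physworx.com:8000" as described above.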

However, many people don't like their servers to have URLs like "http://www.physworx.com:8000", they want "http://dev.physworx.com" instead. There are two ways to do this: A second IP address (by far the simplest), or virtual hosting. If you don't have the second IP address, then you get to take one of the many paths for setting up virtual hosting, which you've been reading about...

Posted by Brad Ford on
Thanks Andrew. As it turns out, I had it set up exactly as you recommended, and everything works - I just couldn't see it because I was working remotely and the name servers hadn't propagated yet. All works well now. Since I'm the only one doing dev work on the server, I'll stick with the :8000 on the end of my URL - virtual hosting looks like a bit of a bear. Has anyone gotten SSL/https working with virtual hosting yet? It didn't look like it from the threads I was going through.

Posted by Brad Duell on
Here's how I got virtual hosting to work with squid and tinydns (this is cleaner and works *much* better for me than a reverse proxy via Apache).  Oh yeah, and it can work via SSL too.  (Forgive me if I leave something out - thanks to Jon Griffin and Cathy Sarisky for helping me get this up and running):

This requires the install of daemontools (which runs services like tinydns), djbdns (which supplies tinydns), and squid (which proxies requests to port 80 to the correct internal IP address)...

Choose an internal IP range you'll want to serve your virtual servers on (in my case, I'll serve up 192.168.1.2 to 192.168.1.x).

Tie those internal IP addresses to eth0 (so that when you run ifconfig you see all of these IP addresses bound to eth0).
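
On Linux this is usually done with interface aliases; a sketch, assuming the example range above and a /24 netmask:

    ifconfig eth0:0 192.168.1.2 netmask 255.255.255.0 up
    ifconfig eth0:1 192.168.1.3 netmask 255.255.255.0 up
    ifconfig eth0:2 192.168.1.4 netmask 255.255.255.0 up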

Setup djbdns (use http://cr.yp.to/djbdns/run-server.html as a guide) with the following (say my domain is mydomain.com with the external IP address of 66.1.1.1):
    Your external IP address as the dns server.
    Your internal IP addresses with their respective domains.
    You should ultimately have a /service/tinydns/root/data file resembling:
    -----------------------------------------
    .mydomain.com:66.1.1.1:a:259200
    .1.168.192.in-addr.arpa:66.1.1.1:a:259200
    =mydomain.com:192.168.1.2:86400
    +www.mydomain.com:192.168.1.2:86400
    =dev.mydomain.com:192.168.1.3:86400
    =xml.mydomain.com:192.168.1.4:86400
    =mydomain2.com:192.168.1.5:86400
    +www.mydomain2.com:192.168.1.5:86400
    +mail.mydomain2.com:192.168.1.5:86400
    -----------------------------------------
    This has mydomain.com and www.mydomain.com pointing to the instance on 192.168.1.2
    This has dev.mydomain.com pointing to the instance on 192.168.1.3
    This has xml.mydomain.com pointing to the instance on 192.168.1.4
    This has mydomain2.com and www.mydomain2.com and mail.mydomain2.com pointing to the instance on 192.168.1.5

Add "search localdomain" (without quotes) to the top of your /etc/resolv.conf file.

Use dig to check your setup once tinydns is up and running (e.g. "dig xml.mydomain.com" should return 192.168.1.4).
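
Trimmed to the interesting part, a successful lookup should come back with an answer section along these lines (TTL per the data file above):

    $ dig xml.mydomain.com

    ;; ANSWER SECTION:
    xml.mydomain.com.    86400    IN    A    192.168.1.4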

Setup squid (I found http://squid.visolve.com/white_papers/reverseproxy.htm to be informative):
    I simply changed my squid.conf file from:
    # http_port 3128
    # http_access deny all
    # httpd_accel_port 80
    # httpd_accel_single_host off
    # httpd_accel_uses_host_header off
    To:
    http_port 127.0.0.1:80
    http_access allow all
    httpd_accel_host virtual
    httpd_accel_port 80
    httpd_accel_single_host off
    httpd_accel_uses_host_header on

Start up squid.

Change your respective server instances to run on their correct internal IP addresses and port 80, and start them up.
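
For an AOLserver instance, that amounts to pointing the nssock driver at the internal alias. A sketch only (parameter names per the stock driver, values from the dev entry in the tinydns data above):

    ns_section "ns/server/${server}/module/nssock"
    ns_param   address   192.168.1.3  ;# internal alias squid forwards to
    ns_param   port      80
    ns_param   hostname  dev.mydomain.com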

Your box should now be forwarding each request that arrives on port 80 from the outside to the correct server instance listening on port 80 on the inside.

This seems like a lot of work up front (and it is), but adding more hosts is a breeze, and this method seems to work all the way around.

Hope this helps.

Posted by Brad Ford on
Thanks a bunch Brad! That sounds like just the solution I'm looking for. I'll be trying it out this week. And you're sure no problems with https using this method? I assume the /service/tinydns/root/data file needs to be expanded to add https, ftp, ssh, smtp, and any other ports the system is serving? Or if only one instance of each on the box, will tinydns be bypassed and the services listen directly on those ports?

Posted by Brad Duell on
Actually, the only place that ports need to be set up is in your squid.conf file.  By default (in the stock squid.conf), the SSL port 443 is considered a safe port, I believe.

This is all under the ACCESS CONTROLS section of squid.conf.
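
For reference, the relevant stock lines in a 2.5-era squid.conf look like this (443 is indeed among the default safe ports):

    acl SSL_ports port 443 563
    acl Safe_ports port 80          # http
    acl Safe_ports port 443 563     # https, snews
    http_access deny !Safe_ports
    http_access deny CONNECT !SSL_ports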

Posted by Bart Teeuwisse on
Brad,

while Squid supports SSL, this does NOT mean that Squid can REVERSE proxy SSL servers.

See also http://www.squid-cache.org/mail-archive/squid-users/200005/0745.html as well as various threads on openacs.org.

/Bart

Posted by Brad Ford on
Wow, that was over my head... Bart, am I correct in interpreting that thread and the other openacs.org threads to mean that https/ssl is inherently not a possibility with virtual hosting because of the layers between the servers? Any chance you could summarize in layman's terms - would be greatly appreciated.

Posted by Bart Teeuwisse on
Brad,

that is (partially) correct. Yes, you can NOT proxy an SSL server. That is, you can NOT set up the following scenario:

- https://dev.domain.com/ and
- https://xml.domain.com/

both behind a proxy. The proxy can NOT pass the https requests on to the virtual domains.

However, you can setup a proxy server that handles ALL SSL negotiations and passes the https requests on as http requests to the appropriate virtual domain. In other words, when the proxy receives a request for https://dev.domain.com/ it will authenticate the secure request and forward the request to http://dev.domain.com/. The virtual web servers never see a secure connection.

Pound (http://www.apsis.ch/pound/) is one reverse proxy I know of that supports this configuration.

In order to do this, the proxy would require a wildcard certificate for *.domain.com so that it can authenticate requests for both subdomains in the above example.
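
For testing purposes, a self-signed wildcard certificate can be generated with openssl; a sketch (paths, lifetime and subject are placeholders):

    openssl req -new -x509 -nodes -days 365 \
        -subj "/CN=*.domain.com" \
        -keyout key.pem -out cacert.pem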

/Bart

Posted by Jon Griffin on
Pound doesn't play nice with OACS or, for that matter, any server that allows HTML streaming. I would advise against using it (unless they changed this in the last couple of months, which I doubt).

Your APM pages will break as well as some others. Sorry.

Posted by Bart Teeuwisse on
Jon,

you are absolutely right. While Pound is small and fast, it doesn't work well with HTML streaming. I merely pointed to Pound as an example of accepting SSL connections to virtual servers.

/Bart

Posted by Jon Griffin on
I just wanted to make sure no one else goes through the headache I had 6 months or so ago when I implemented pound and didn't realize what the intermittent OACS problems were.

Posted by Brad Duell on
I suppose I should have referred to my squid.conf file for your SSL question, Brad - I forgot a setting.

Adding the following single line to your squid.conf (you can put it after your http_port declaration) will get you the same wildcard configuration that Bart stated, but with a much better proxy server:

https_port 127.0.0.1:443 cert=/PATH_TO_CA_CERT/cacert.pem key=/PATH_TO_KEY/key.pem version=1

Note that you'll need Squid 2.5+ for this feature (I used squid-2.5.STABLE3-1rh_7x, available via http://swelltech.com/support/updates/squid/7.x/RPMS/).

I wouldn't suggest using Pound either.  Plus, Squid is well on its way to handling multiple certs with this same type of setup.  A good page to watch is http://squid.sourceforge.net/ssl/

Happy proxying!

Posted by Brad Ford on
Gotta love a lively thread! Thanks for all the info everyone - very educational. So it seems that Squid either properly implements SSL through proxying or is on the verge of it. Brad - have you done any packet sniffing on https to see if everything works? Just asking - I wouldn't have a clue where to begin or what to look for, but I understand this is a way to verify SSL is implemented properly. If everything works, it sounds like this could be a canonical solution to be added as an appendix to the install docs.

Posted by Brad Duell on
Yes, Squid implements SSL proxying via the single cert, and from what I understand hopes to do so via multiple certs based upon the domain request.

Works for me - try it out and let me know.

Posted by Bart Teeuwisse on

Brad,

have you tried enforcing secure connections to (parts of) an OpenACS site? You can't do that in OpenACS any more, because Squid only supports HTTP between the proxy and the virtual server.

I've tried using a Squid redirector, but that doesn't seem to work because redirectors are called when Squid contacts the virtual server, and every request to the virtual server is an HTTP request. Incoming HTTPS requests show up in the redirector as HTTP requests too, so there appears to be no way to tell which requests to redirect from HTTP to HTTPS.

Then again it might be my mistake so here's what I've done:

The redirector code (in Tcl of course).


#!/usr/local/bin/tclsh

# Squid redirector program. Squid has been configured to call the
# redirector for HTTP requests only. This program then redirects those
# requests to the HTTPS port.

# Keep a log of all redirects. Squid keeps the log open for as long as
# squid is running. Stop squid to see the contents of the log.

set log [open "/var/log/squid/redir.log" a+]

# Squid expects one reply line per request line; don't let Tcl buffer
# the replies.
fconfigure stdout -buffering line

while {[gets stdin line] >= 0} {
    foreach {url addr_fqdn ident method} [split $line] {

        # Only redirect http requests.

        regsub -nocase -- http: $url https: redir_url

        # Log the redirect. First the verbatim request to squid
        # followed by the URL the request is redirected to.

        puts $log "\[$url $addr_fqdn $ident $method\] --> $redir_url"

        # Return the redirection URL to squid.

        puts "301:$redir_url"
    }
}
close $log

Additional squid.conf lines:


redirect_program /etc/squid/redirector
acl http port 80
acl https port 443
redirector_access allow http
redirector_access deny https

But no matter how I configure Squid, it seems to always call the redirector, even for https connections.

/Bart

17: Pound vs. ns_write (response to 10)
Posted by Andrew Piskorski on
Jon, incidentally, Gustaf Neumann says on the AOLserver list that he's fixed Pound to work with ns_write, as well as other things.

Posted by Brad Duell on
Bart,

(sorry this took me a while to get around to testing)

No, I haven't tried it, but I was able to reproduce what you were seeing.

I'll try to look into what a workaround might be for areas of a site.  Any luck with other proxies in this https->http<->http configuration?

Posted by Bart Teeuwisse on
Brad,

Just returned from climbing in the Alps. No, I haven't tried other proxies in conjunction with HTTPS. The changes to Pound by Gustaf look very promising though. But I'm holding off till his patches make it into the standard distribution of Pound, and till AOLserver supports the X-Forwarded-For header so that it can log the IP address that the HTTP(S) request originated from.

Another issue with Squid is that SSL support appears to be incomplete.

https_port 127.0.0.1:443 cert=/PATH_TO_CA_CERT/cacert.pem key=/PATH_TO_KEY/key.pem version=1

Shouldn't cert point to the cert of the web server and not of the CA? And where should the CA cert reside? I looked at the code and there seems to be a ca_cert command line parameter.

Clients reject the SSL certificate in my current Squid configuration because the CA cert is missing. Are you experiencing the same problem?

/Bart

Posted by Brad Duell on
Bart,

Welcome back - hope you had a good time!

No, I don't experience that problem.  As an example:

http://www.kyoteproductions.com and
https://www.kyoteproductions.com

Both use the squid configuration outlined in this thread.

I put my cacert.pem in the ca directory of the server, and my key.pem in the modules/nsopenssl directory of the server.  Perhaps you're experiencing a permissions problem?

I'd be interested in seeing the much-needed changes in Pound.  Since I don't use SSL for any sites other than my own, and since I don't need to restrict SSL to certain parts of my site, the current configuration works fine.

If Pound is able to resolve the subsite SSL issues then I'll simply plug it in to the same configuration that I have with tinydns and be good to go.

As it is, the current configuration with Squid proxy is the most sound solution I've come across thus far.

Posted by Bart Teeuwisse on
Brad,

the problem I'm experiencing is not with the SSL configuration of AOLserver but with the SSL configuration of Squid. When running virtual servers behind a Squid proxy, it is Squid that handles the SSL connection with the client.

Connecting a web browser (other than links, which doesn't seem to care about the CA cert) to Squid results in complaints that the certificate path is broken. My impression is that this is because Squid doesn't know where the CA cert is.

Anyone else who could comment?

/Bart

Posted by Bart Teeuwisse on
Brad,

in fact, your https example suffers from the same problem. Maybe you don't notice it anymore because you permanently accepted the incomplete certificate in the past. But when I follow the link to https://www.kyoteproductions.com I get the same error as with my servers.

/Bart

Posted by Brad Duell on
Bart,

Is the certificate you're accessing one that you created yourself or did you create it through an authority?

Are you sure this is a proxy problem, and not based upon your certificate?

I get the same exact "company you have not chosen to trust" message, and certification path, whether accessing my site's https through proxy, or directly, simply because I created the certificate myself.

Posted by Bart Teeuwisse on
Brad,

while your certificate is a self-signed certificate (i.e., one you created yourself), mine was created through an authority. However, in both cases our certificates fail because the certificate authority portion of the certificate is missing.

My certificate works when I connect to the virtual server directly. Connecting to your server (via the proxy) I can see that the certificate is self-generated, but that the certificate authority (also www.kyoteproductions.com) is lacking.

If the certificate authority were in place, your warning message would be different. It would read something like: the certificate was issued by an authority you don't know or trust.

/Bart

Posted by Brad Duell on
Bart,

Hmmm.  All I can say is that something must be wrong with your certificate, or with how the certificate is referenced.  On the off chance, I replaced my self-signed certs with ones from Thawte, and lo and behold the certification path *does* show correctly.

If your certificate works directly, but not through the proxy, the only thing I can think of is that you might be referencing the wrong files in your squid.conf.

Posted by Bart Teeuwisse on
Finally worked it out. The missing piece is indeed the CA cert. In AOLserver, one specifies the server cert, the server key and the CA cert. Squid on the other hand only accepts the server cert and the server key.

What I couldn't figure out is where Squid gets the CA cert from. After several hours of reading code and googling I finally traced the location of the CA certs. Squid relies on the CA certs provided with openssl. This explains why it worked for Brad but not for me. Our CA cert was not included in the default openssl list of CA certs.

The openssl CA certs are listed in /usr/share/ssl/cert.pem. Adding our CA cert to this list resolved the issue.
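
So the fix is a one-liner (the CA cert file name here is just a placeholder):

    cat our-ca-cert.pem >> /usr/share/ssl/cert.pem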

/Bart

Posted by Bart Teeuwisse on

It appears that redirecting HTTP to HTTPS in Squid 2.5 is not trivial. From the squid-users archives:

Squid-2.5 has the peculiar limitation that requests accepted by https_port will internally be processed as http:// requests, meaning that http:// is sent to redirectors etc.

What you can do to differentiate http_port from https_port in squid-2.5 is to enable httpd_accel_port virtual, then look for the port number in your redirector, clean up the URL, etc.

Another option is to look into Squid-3.0 where this is a whole lot easier and does not require a redirector helper.
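
Following that advice, the redirector posted earlier in this thread could branch on the port embedded in the URL. A rough Tcl sketch only, assuming httpd_accel_port virtual leaves an explicit :80 or :443 in the URLs Squid hands to the redirector:

    #!/usr/local/bin/tclsh

    # Reply once per request line, unbuffered, as the redirector
    # protocol requires; an empty line means "no rewrite".
    fconfigure stdout -buffering line

    while {[gets stdin line] >= 0} {
        set url [lindex [split $line] 0]

        # Requests that arrived on port 80 get bounced to https;
        # everything else (e.g. port 443) is left alone.
        if {[regsub -nocase {^http://([^/:]+):80(/|$)} $url {https://\1\2} url]} {
            puts "301:$url"
        } else {
            puts ""
        }
    }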

/Bart