Forum OpenACS Development: Re: Res: Re: Res: Re: Experiences clustering OpenACS aolservers?

Hello again!

We are experiencing our first problems with the cluster 😊

The proc "update_cache_local" has a runtime of 8 seconds.
During this time the cluster is blocked. We are studying how this time is distributed within the procedure, and the greatest cost is the in-memory copying (array set, nsv_array reset) at the beginning and at the end of the proc that maintains site-nodes (in our case, more than 110,000 nodes).

Any idea to optimize it?


array set nodes [nsv_array get site_nodes]
array set url_by_node_id [nsv_array get site_node_url_by_node_id]
array set url_by_object_id [nsv_array get site_node_url_by_object_id]
array set url_by_package_key [nsv_array get site_node_url_by_package_key]
nsv_array reset site_nodes [array get nodes]
nsv_array reset site_node_url_by_node_id [array get url_by_node_id]
nsv_array reset site_node_url_by_object_id [array get url_by_object_id]
nsv_array reset site_node_url_by_package_key [array get url_by_package_key]

Posted by Joe Oldak on
Apologies for the long rambling reply, hopefully you can extract some useful thoughts...

In the "old" version of the code, some operations would modify the in-memory copy directly. Now, however, it always calls update_cache_local with the node you have changed - this code is necessary for the cluster peers, so it is run on the local peer too. This is less efficient than it used to be - but has the advantage of actually working 😉

Unfortunately update_cache_local is quite slow, especially with so many site nodes. (What's on your site, that you have so many!?)

And, also unfortunately, the job of synchronising the local site_nodes store whenever one of the cluster nodes changes the tree is unavoidable.

There are a few things that could be done about this. I'll just mention a few passing thoughts:

There are a few things that could be done to reduce the number of times the caches are flushed:

Firstly, simply make sure that the cluster nodes aren't sending unnecessary flush requests. And when they do send them, make sure they aren't unnecessarily using the "sync_children" option, as this results in a lot more db grinding.

Sometimes, I find a single request can send several flush requests to the peers, which is unnecessary. Perhaps the flush requests could be saved up during the page fetch and sent once at the end, with duplicates removed.
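To make the batching idea concrete, here is a minimal Tcl sketch. The proc names (queue_flush_request, send_queued_flushes, cluster_send_flush) are all hypothetical stand-ins, not real OpenACS API - the real peer-notification call would go where cluster_send_flush appears:

```tcl
# Sketch: queue flush requests during the request instead of sending
# each one immediately, then send the de-duplicated set at the end.
# All proc names here are hypothetical placeholders.

proc queue_flush_request {node_id} {
    global flush_queue
    if {![info exists flush_queue]} {
        set flush_queue {}
    }
    # Record each node at most once per request (deduplication)
    if {[lsearch -exact $flush_queue $node_id] == -1} {
        lappend flush_queue $node_id
    }
}

proc send_queued_flushes {} {
    global flush_queue
    if {![info exists flush_queue]} {
        return
    }
    foreach node_id $flush_queue {
        # cluster_send_flush stands in for the real peer notification
        cluster_send_flush $node_id
    }
    set flush_queue {}
}
```

send_queued_flushes would be hooked into end-of-request processing, so several changes within one page fetch collapse into a single round of peer notifications.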

One thing that just occurred to me is that update_cache_local actually works on local arrays, so it only needs a full write lock on the nsv arrays at the end of the function, when it copies the data back into them. This could alleviate the problem a lot!

(just put a readlock around the start where it loads the arrays into local store, and a writelock where it puts it back)
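The lock-narrowing idea could look roughly like this. A sketch only - update_from_db is a hypothetical helper standing in for the existing db-driven rebuild, and the nsv operations themselves are internally locked, so the expensive middle section holds no shared lock at all:

```tcl
# Sketch of narrowing the locked window in update_cache_local.
# Only the snapshot (read) and the swap-back (write) touch the
# shared nsv array; the slow rebuild runs on a purely local copy.

proc update_cache_local {node_id} {
    # Brief read: snapshot the shared nsv array into a local array
    array set nodes [nsv_array get site_nodes]

    # Long-running work on the local copy - no shared lock held here.
    # update_from_db is a hypothetical stand-in for the real db work.
    update_from_db nodes $node_id

    # Brief write: swap the rebuilt data back in one operation
    nsv_array reset site_nodes [array get nodes]
}
```

Other threads can keep reading the old site_nodes data during the rebuild and only block for the short reset at the end.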

Of course - if there are a LOT of nodes then copying this data from the nsv_array into the local arrays could be the time-consuming part, rather than the update itself. If this is the case then something cleverer involving in-place editing of the nsv arrays could be done?
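In-place editing might be sketched with nsv_set and nsv_unset, which touch a single key without the full copy-out/copy-in. The proc name and arguments here are placeholders for whatever the real single-node update would need:

```tcl
# Sketch: touch only the affected key in the shared nsv array,
# avoiding the full array snapshot and reset.
# update_single_node, node_url and node_info are hypothetical names.

proc update_single_node {node_url node_info} {
    if {$node_info eq ""} {
        # Node was deleted: remove just this key
        nsv_unset site_nodes $node_url
    } else {
        # Node was added or changed: set just this key
        nsv_set site_nodes $node_url $node_info
    }
}
```

This would only work for the single-node case; a sync_children-style update that rewrites a whole subtree would still need the bulk approach.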

Perhaps you could add a bit of code around the function to see whether it's the copying in/out that is taking the time, or whether it's the db reading. I imagine in the cases where you are just updating a single node and no children, the copying will be the majority of the time. However, where you are syncing a lot of nodes, the db access will be the majority.
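The instrumentation could be as simple as timestamps around the three phases, logged via ns_log. A sketch of what to drop into update_cache_local (only the site_nodes array is shown; the other three would be timed the same way):

```tcl
# Sketch: time the copy-in, db work, and copy-out phases separately.

set t0 [clock clicks -milliseconds]
array set nodes [nsv_array get site_nodes]
set t1 [clock clicks -milliseconds]

# ... existing db queries and local-array updates go here ...

set t2 [clock clicks -milliseconds]
nsv_array reset site_nodes [array get nodes]
set t3 [clock clicks -milliseconds]

ns_log Notice "update_cache_local: copy-in [expr {$t1 - $t0}]ms,\
 db work [expr {$t2 - $t1}]ms, copy-out [expr {$t3 - $t2}]ms"
```

A few runs of this against the 110,000-node tree should settle which phase dominates for the single-node and sync_children cases.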

In the long term, it could be good to be able to share among peer nodes the details of what has changed, rather than just the fact that "something" has changed - then we could be more efficient about things.

I'll stop here to avoid further confusion, but will post more responses if you wish to delve further into any of the thoughts!