Forum OpenACS Q&A: Re: OpenACS clustering setup and how it relates to xotcl-core.

Small update: the parameter controlling the caching mode is now called "cachingmode", since this leaves room for other potential caching modes in the future (e.g., an improved cluster mode or dedicated caching nodes).

ns_section ns/parameters {
   ...
   ns_param        cachingmode none   ;# experimental caching mode
   ...
}

The acs-cache-procs have been updated so that the per-thread cache behaves like a per-request cache when "cachingmode" is set to "none".
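For illustration, a per-thread cache call might look like the following (a hedged sketch; the `::acs::per_thread_cache eval -key` interface and the key name are assumed from acs-cache-procs, and the query is a placeholder):

```tcl
# Hypothetical example: with cachingmode "none", this cached value is
# not reused across requests, so the call degrades gracefully to a
# per-request memoization instead of a long-lived per-thread cache.
set title [::acs::per_thread_cache eval -key my-pkg.instance_title {
    db_string get_title {select instance_name from apm_packages where package_id = :package_id}
}]
```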

Thanks Gustaf,

Sorry for the late reply; I was pulled away migrating some servers. I did compile the newest version back on the 22nd and set the nocache parameter. We got acceptable speeds without caching in this particular test. I had not set up the cluster parameters yet, though. Below are the results of the tests with and without nocache.

Note: These tests were run with the optimized queries we fixed in our effort-reporting package. When I ran the tests with the non-optimized, slower queries, the non-cached tests were almost 1 second slower than the cached ones at the 95th percentile.

with nocache = false

# Test #23   48 CPUs   200 GB RAM
# Nginx with 3 Naviservers
#   maxconnections 1000
#   maxthreads 20  
#   minthreads 20  
#   connsperthread 10000
#   highwatermark 100
#   compressenable  off
#   rejectoverrun true 
#    image pool 6/6    
# ns_section ns/db/pool/pool1
    ns_param        connections        23    
#  DB on Same VM
# ns param nocache = false
 
150 VUs
Connection Threads: min 20 max 20 current 20 idle 16 stopping 0 waiting 0
Request Handling:   requests 858, queued 0 (0.00%), spooled 554 (64.57%)
Request Timing: avg queue time 105.18µs, avg filter time 4.24ms, avg run time 161.33ms avg trace time 1.03ms
 
pool1:  statements 107.5K gethandles 970 handles 23 connected 7 used 13 waittime 0.165362 sqltime 99.706249 avgwaittime 170.5µs avgsqltime 927.5µs
 
300 VUs
Connection Threads: min 3 max 5 current 3 idle 2 stopping 0 waiting 0
Request Handling:   requests 3, queued 0 (0.00%), spooled 2 (66.67%)
Request Timing: avg queue time 74.33µs, avg filter time 490.51ms, avg run time 19.45ms avg trace time 7.14ms
 
pool1:  statements 370.2K gethandles 2.9K handles 23 connected 7 used 20 waittime 0.246876 sqltime 371.667285 avgwaittime 85.4µs avgsqltime 1ms
 
150 VUs
Connection Threads: min 20 max 20 current 20 idle 17 stopping 0 waiting 0
Request Handling:   requests 4.7K, queued 0 (0.00%), spooled 3K (66.18%)
Request Timing: avg queue time 130.8µs, avg filter time 4.27ms, avg run time 201.81ms avg trace time 1.15ms
 
pool1:  statements 672.1K gethandles 4.9K handles 23 connected 15 used 20 waittime 0.248934 sqltime 686.242544 avgwaittime 51.2µs avgsqltime 1ms
 
70 VUs
Connection Threads: min 3 max 5 current 3 idle 2 stopping 0 waiting 0
Request Handling:   requests 5, queued 0 (0.00%), spooled 4 (80.00%)
Request Timing: avg queue time 73.2µs, avg filter time 296.22ms, avg run time 16.18ms avg trace time 4.63ms
 
pool1:  statements 764.7K gethandles 5.4K handles 23 connected 15 used 20 waittime 0.249513 sqltime 773.483103 avgwaittime 46µs avgsqltime 1ms
 
K6 summary 
running (4m09.2s), 000/300 VUs, 2757 complete and 0 interrupted iterations
default ✓ [======================================] 000/300 VUs  4m0s
 
     ✗ status is 200
      ↳  99% — ✓ 11022 / ✗ 2
     ✗ page succeeded
      ↳  99% — ✓ 11022 / ✗ 2
 
     █ setup
 
     █ teardown
 
     checks.........................: 99.98% ✓ 22044     ✗ 4
     data_received..................: 2.5 GB 10 MB/s
     data_sent......................: 9.7 MB 39 kB/s
     http_req_blocked...............: avg=80.69µs  min=0s     med=4.3µs    max=32.55ms  p(90)=6.31µs   p(95)=7.37µs
     http_req_connecting............: avg=11.55µs  min=0s     med=0s       max=28.93ms  p(90)=0s       p(95)=0s
   ✓ http_req_duration..............: avg=207.03ms min=0s     med=34.16ms  max=1.45s    p(90)=946.89ms p(95)=1s
       { expected_response:true }...: avg=207.05ms min=5.21ms med=34.17ms  max=1.45s    p(90)=946.89ms p(95)=1s
     http_req_failed................: 0.01%  ✓ 2         ✗ 16535
     http_req_receiving.............: avg=519.41µs min=0s     med=115.54µs max=63.53ms  p(90)=1.37ms   p(95)=1.56ms
     http_req_sending...............: avg=26.43µs  min=0s     med=23.84µs  max=855.82µs p(90)=43.09µs  p(95)=50.97µs
     http_req_tls_handshaking.......: avg=63µs     min=0s     med=0s       max=29.88ms  p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=206.48ms min=0s     med=33.99ms  max=1.45s    p(90)=945.49ms p(95)=1s
     http_reqs......................: 16537  66.369066/s
     iteration_duration.............: avg=11.23s   min=1.61µs med=11.24s   max=13.28s   p(90)=11.35s   p(95)=11.39s
     iterations.....................: 2757   11.064855/s
     vus............................: 1      min=1       max=300
     vus_max........................: 300    min=300     max=300
 

nocache = true

# Test #22   48 CPUs   200 GB RAM
# Nginx with 3 Naviservers
#   maxconnections 1000
#   maxthreads 20  
#   minthreads 20 
#   connsperthread 10000
#   highwatermark 100
#   compressenable  off
#   rejectoverrun true 
#    image pool 6/6    
# ns_section ns/db/pool/pool1
    ns_param        connections        23     
#  DB on Same VM
# ns param nocache = true
 
150 VUs
Connection Threads: min 20 max 20 current 20 idle 15 stopping 0 waiting 0
Request Handling:   requests 836, queued 0 (0.00%), spooled 539 (64.47%)
Request Timing: avg queue time 100.37µs, avg filter time 6.93ms, avg run time 157.66ms avg trace time 1.25ms
 
pool1:  statements 117.6K gethandles 938 handles 23 connected 6 used 13 waittime 0.170185 sqltime 96.248608 avgwaittime 181.4µs avgsqltime 818.6µs
 
300 VUs
Connection Threads: min 20 max 20 current 20 idle 10 stopping 0 waiting 0
Request Handling:   requests 2.8K, queued 9 (0.33%), spooled 2K (64.85%)
Request Timing: avg queue time 166.06µs, avg filter time 7.66ms, avg run time 187.04ms avg trace time 1.45ms
 
pool1:  statements 407.8K gethandles 2.9K handles 23 connected 8 used 21 waittime 0.267667 sqltime 375.288688 avgwaittime 93.4µs avgsqltime 920.3µs
 
150 VUs
Connection Threads: min 20 max 20 current 20 idle 12 stopping 0 waiting 0
Request Handling:   requests 4.7K, queued 10 (0.21%), spooled 3K (65.63%)
Request Timing: avg queue time 167.3µs, avg filter time 7.76ms, avg run time 202.97ms avg trace time 1.51ms
 
pool1:  statements 739K gethandles 4.8K handles 23 connected 12 used 21 waittime 0.269563 sqltime 694.840813 avgwaittime 56.1µs avgsqltime 940.3µs
 
70 VUs
Connection Threads: min 20 max 20 current 20 idle 18 stopping 0 waiting 0
Request Handling:   requests 5.3K, queued 10 (0.19%), spooled 3K (65.96%)
Request Timing: avg queue time 160.41µs, avg filter time 7.7ms, avg run time 205.11ms avg trace time 1.5ms
 
pool1:  statements 846.7K gethandles 5.4K handles 23 connected 17 used 21 waittime 0.270101 sqltime 787.473052 avgwaittime 50.2µs avgsqltime 930.1µs
 
 
K6 Summary
running (4m10.2s), 000/300 VUs, 2736 complete and 0 interrupted iterations
default ✓ [======================================] 000/300 VUs  4m0s
 
     ✓ status is 200
     ✓ page succeeded
 
     █ setup
 
     █ teardown
 
     checks.........................: 100.00% ✓ 21888     ✗ 0
     data_received..................: 2.5 GB  10 MB/s
     data_sent......................: 9.6 MB  38 kB/s
     http_req_blocked...............: avg=81.66µs  min=1.6µs   med=4.39µs   max=39.67ms  p(90)=6.49µs   p(95)=7.68µs
     http_req_connecting............: avg=12.47µs  min=0s      med=0s       max=35.3ms   p(90)=0s       p(95)=0s
   ✓ http_req_duration..............: avg=218.86ms min=7.34ms  med=37.47ms  max=1.55s    p(90)=978.96ms p(95)=1.04s
       { expected_response:true }...: avg=218.86ms min=7.34ms  med=37.47ms  max=1.55s    p(90)=978.96ms p(95)=1.04s
     http_req_failed................: 0.00%   ✓ 0         ✗ 16418
     http_req_receiving.............: avg=526.47µs min=33.71µs med=121.17µs max=22.53ms  p(90)=1.39ms   p(95)=1.58ms
     http_req_sending...............: avg=27.35µs  min=9.2µs   med=24.76µs  max=188.41µs p(90)=45.25µs  p(95)=53.62µs
     http_req_tls_handshaking.......: avg=62.98µs  min=0s      med=0s       max=30.12ms  p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=218.3ms  min=7.26ms  med=37.27ms  max=1.55s    p(90)=977.52ms p(95)=1.04s
     http_reqs......................: 16418   65.62951/s
     iteration_duration.............: avg=11.3s    min=1.7µs   med=11.31s   max=13.26s   p(90)=11.45s   p(95)=11.49s
     iterations.....................: 2736    10.936919/s
     vus............................: 1       min=1       max=300
     vus_max........................: 300     min=300     max=300

Also, thanks for adding the cachingmode. It would be good to run some tests on that as well, and we would appreciate any help and insights on how to set up a special cachingmode for clustering. One thing we want to test is adding ::acs::clusterwide to the nsv_unset in ad_parameter_cache. Are there other places you would recommend adding ::acs::clusterwide for our cluster testing? Or any other directions you might suggest?
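For concreteness, the change we have in mind would look roughly like this (a sketch only; the nsv array name `ad_param_$key` and the surrounding proc body are assumed from our reading of ad_parameter_cache, not copied from the actual source):

```tcl
# Hypothetical sketch of the flush path in ad_parameter_cache.
# Today the flush only clears the local node's nsv:
#
#   nsv_unset -nocomplain ad_param_$key $parameter_name
#
# Wrapping the call with ::acs::clusterwide would broadcast the
# flush so every node in the cluster drops its stale copy:
::acs::clusterwide nsv_unset -nocomplain ad_param_$key $parameter_name
```

The idea being that a parameter change on one node would then invalidate the cached value cluster-wide, rather than leaving the other NaviServers serving the old value until restart.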

Thanks for all your help, -Marty