Forum OpenACS Q&A: Lots of tcl_url2file request processor errors in log file
[21/Apr/2004:09:09:38][2108.4101][-conn:ibr::1] Notice: RP (10.544 ms): error in rp_handler: can't read "tcl_url2file(/)": no such element in array
[21/Apr/2004:09:09:52][2108.7176][-conn:ibr::4] Notice: RP (13.849 ms): error in rp_handler: can't read "tcl_url2file(/directorynamehere/)": no such element in array
Any ideas of what's going on? Is this something to worry about?
Were you running in performance mode? Actually, "/directorynamehere/" sounds suspicious: where does that come from? Which package?
As far as PG vs. Oracle goes ... I haven't noticed drastic differences between the two. Memory usage in PG should mostly be determined by your PG configuration parameters. Speed should be roughly comparable for a wide range of operations.
Make sure you've ANALYZEd your tables.
Of course, you give two reasons why it might be your fault rather than PG's, and I suspect you may be on to something :)
Ouch, something got trashed; yes, I'd worry. If it happens again, I'd add diagnostic code to see why the tcl_url2file("/") lookup fails. The array caching the information got hosed somehow.
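A minimal sketch of what such a diagnostic guard might look like, in Tcl. This is illustrative only; the actual rp_handler code differs, and the variable/proc names here are assumptions:

```tcl
# Hypothetical diagnostic wrapper around the tcl_url2file cache lookup.
# Instead of letting the missing element throw, log the cache state so
# we can see whether it is a simple miss or a trashed array.
if { ![info exists tcl_url2file($url)] } {
    if { ![array exists tcl_url2file] } {
        ns_log Warning "tcl_url2file array does not exist at all for url: $url"
    } else {
        ns_log Warning "tcl_url2file cache miss for url: $url"
        ns_log Warning "cached keys: [array names tcl_url2file]"
    }
    # ...fall through to the slow path that resolves the URL to a file...
} else {
    set file $tcl_url2file($url)
}
```

The distinction matters because "no such element in array" means the array exists but the key is missing (a plausible cache miss), while "no such variable" means the array itself is gone, which points at something more serious.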
Ouch, those are all over our log file as well. Since they were always there, I assumed it was just a matter of a cache miss being logged the first time a url was requested.
Checking, I see 8000 occurrences this morning, across 3000 different URLs.
I just realized now that the error message is sometimes
no such element in array
which should be normal for a cache miss, and sometimes
no such variable
which seems BAD.
I might be totally off the mark here, since I don't really understand the inner workings of the request processor.
Thank you all, for your comments. I always appreciate feedback.
I turned off performance mode and restarted, and the error messages went away.
Re: performance on Postgres
I think these were mostly due to poor coding. While porting, my main focus was on getting things working, and quickly. There was so much to do, and the deadline was so inflexible, that I didn't have the time to worry about optimization.
Now I'm tackling the largest problems, and the response time on these pages has gone down significantly. So hierarchical queries are not the problem.
Actually, the system load may not be due to those pages at all, but to queries that hang for some reason. Every day this week, I've looked at the monitoring page and found that there are queries hanging. I'm not sure why it keeps happening, or what is causing it. It does look like it is happening on two pages in project-manager, both of which use list-builder and have fairly expensive queries.
Conn#  IP              State    Method  URL                              # Seconds     Bytes
167    220.127.116.11  running  GET     /intranet/project-manager/index  10825.447178  -186
1326   192.168.1.57    running  GET     /intranet/project-manager/tasks  198.430493    -180
1361   18.104.22.168   running  GET     /acs-admin/monitor               0.72462       0
How could something be returning negative bytes?
I'm dealing with some nasty performance problems tonight. I tried switching on performance mode and got a pile of error messages. It's nice to hear that some are just cache misses, but I also have a collection of these:
[11/Sep/2005:19:08:39][17774.114696][-conn4-] Notice: RP (2950.736 ms): error in rp_handler: can't read "tcl_url2file(/dotlrn/)": no such variable
/dotlrn/ definitely exists and resolves fine, mostly. Likewise with a bunch of custom code, e.g.:
[11/Sep/2005:19:08:38][17774.180236][-conn8-] Notice: RP (1895.061 ms): error in rp_handler: can't read "tcl_url2file(/dotlrn/classes/vtchemistry/generalchemistry/drsariskyschemistry1035/chem-assess/question-feedback)": no such variable
Any suggestions for where to look for this? I'd -really- like to switch over to performance mode; I need all the help I can get with this server's load!
Incidentally, does anyone know why this is done with a per-interpreter global variable rather than an nsv_array? It seems like it would bootstrap the caching a lot quicker that way. Perhaps, like tcl_site_nodes, it could fall back to an nsv_array.
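To illustrate the idea, here is a rough Tcl sketch of an nsv-backed fallback. The proc and nsv array names are made up for illustration, and `rp_resolve_url_to_file` stands in for whatever the real slow path is:

```tcl
# Sketch: consult the per-interpreter global cache first, then fall back
# to a server-wide nsv array, so a freshly created interpreter doesn't
# start with a completely cold cache.
proc url2file_lookup { url } {
    global tcl_url2file
    if { [info exists tcl_url2file($url)] } {
        return $tcl_url2file($url)
    }
    # nsv_get throws if the key is absent, so guard it with catch
    if { ![catch { set file [nsv_get rp_url2file $url] }] } {
        set tcl_url2file($url) $file  ;# warm the interp-local cache
        return $file
    }
    set file [rp_resolve_url_to_file $url]  ;# hypothetical slow path
    nsv_set rp_url2file $url $file
    set tcl_url2file($url) $file
    return $file
}
```

The trade-off is that nsv reads take a mutex on the shared array, whereas a per-interpreter global is lock-free once populated, which may be why the code is written the way it is.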