util::http::curl::request (private)
util::http::curl::request [ -url url ] [ -method method ] [ -headers headers ] \
    [ -body body ] [ -body_file body_file ] [ -delete_body_file ] [ -files files ] \
    [ -timeout timeout ] [ -depth depth ] [ -max_depth max_depth ] [ -force_ssl ] \
    [ -gzip_request ] [ -gzip_response ] [ -post_redirect ] [ -spool ]
Defined in packages/acs-tcl/tcl/http-client-procs.tcl
Issue an HTTP request, either GET or POST, to the specified url. This is the curl wrapper implementation, used on AOLserver and when native SSL capabilities are not available.
- Switches:
- -url (optional)
- -method (optional, defaults to "GET")
- -headers (optional)
- specifies an ns_set of extra headers to send to the server when doing the request. Some switches allow one to avoid specifying headers manually, but headers always take precedence over those switches.
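For illustration, extra headers can be supplied as an ns_set. This is a minimal sketch assuming a NaviServer/AOLserver environment; the header names and URL are arbitrary examples:

```tcl
# Build an ns_set of extra request headers (hypothetical values)
set headers [ns_set create headers]
ns_set put $headers "Accept" "application/json"
ns_set put $headers "X-Custom-Token" "abc123"

# Explicit headers take precedence over convenience switches
# such as -gzip_response
util::http::curl::request -url "https://example.com/api" -headers $headers
```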
- -body (optional)
- is the payload for the request and will be passed as is (useful for many purposes, such as WebDAV). A convenient way to specify form variables for POST payloads through this argument is to pass a string obtained via 'export_vars -url'.
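A sketch of building a urlencoded POST payload with 'export_vars -url', assuming an OpenACS runtime; the variable names and URL are illustrative:

```tcl
# Hypothetical form variables
set first_name "John"
set last_name  "Doe"

# export_vars -url produces a urlencoded string
# such as "first_name=John&last_name=Doe"
set payload [export_vars -url {first_name last_name}]

util::http::curl::request \
    -url "https://example.com/form-handler" \
    -method POST \
    -body $payload
```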
- -body_file (optional)
- is an alternative way to specify the payload, useful in cases such as the upload of big files by POST. If specified, it takes precedence over the 'body' parameter. The content of the file won't be encoded according to the content type of the request, as happens with 'body'.
- -delete_body_file (optional, boolean)
- decides whether to remove the body payload file once the request is over.
- -files (optional)
- curl is natively capable of sending files via POST requests; exploiting this is desirable when sending very large files, because no extra disk space is required to prepare the request payload. Files passed through this parameter are pairs in the form '{ form_field_name file_path_on_filesystem }'.
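As a sketch, uploading a large file without buffering it in memory might look like this (the form field name, file path and URL are hypothetical):

```tcl
# Each element of -files is a pair: { form_field_name file_path }
# curl streams the file itself, so no request payload is staged on disk
util::http::curl::request \
    -url "https://example.com/upload" \
    -method POST \
    -files {{attachment /tmp/big-archive.tar.gz}}
```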
- -timeout (optional, defaults to "30")
- Timeout in seconds. The value can be an integer, a floating point number or an ns_time value. Since curl versions before 7.32.0 accept only integers, the granularity is set to seconds.
- -depth (optional, defaults to "0")
- -max_depth (optional, defaults to "10")
- is the maximum number of redirects the proc is allowed to follow. A value of 0 disables redirection. When the maximum redirection depth has been reached, the proc returns the response from the last page we were redirected to. This is important if the redirection response contains data, such as cookies, that we need to obtain anyway. Be aware that when following redirects, unless it is a 303 redirect, the url and POST urlencoded variables will be sent again to the redirected host. Multipart variables won't be sent again. Sending them to the redirected host can be dangerous if that host is not trusted or uses a lower level of security.
- -force_ssl (optional, boolean)
- is ignored when using the curl HTTP client implementation and is kept only for cross compatibility.
- -gzip_request (optional, boolean)
- informs the server that we are sending data in gzip format. Data will be automatically compressed. Notice that not all servers can handle gzipped requests properly; in such cases the response will likely be an error.
- -gzip_response (optional, boolean)
- informs the server that we are capable of receiving gzipped responses. If the server complies with our indication, the result will be automatically decompressed.
- -post_redirect (optional, boolean)
- decides what happens when we are POSTing and the server replies with a 301, 302 or 303 redirect. RFC 2616 (10.3.2) states that the method should not change when 301 or 302 is returned, and that GET should be used on a 303 response, but most HTTP clients fail to respect this and switch to a GET request regardless. This option forces these kinds of redirects to conserve the original method. Be aware that curl allows POSTing on 303 redirects only since version 7.26; earlier versions follow 303 redirects with GET. If following by POST is a requirement, consider switching to the native HTTP client implementation, or update curl.
- -spool (optional, boolean)
- enables spooling of the response to a file. It is useful when we expect large responses from the server. The response is spooled to a temporary file whose name is returned in the 'file' component of the result.
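A hedged sketch of spooling a large download to disk and then reading it back, assuming an OpenACS runtime; the URL is illustrative:

```tcl
set r [util::http::curl::request \
           -url "https://example.com/big-report.csv" \
           -spool]

# With -spool, 'page' is only a notice string;
# the actual payload is in the spool file
set spool_file [dict get $r file]
set rfd [open $spool_file r]
set data [read $rfd]
close $rfd

# The caller is responsible for cleaning up the spool file
file delete -- $spool_file
```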
- Returns:
- the response data as a dict with elements 'headers', 'page', 'file', 'status', 'time' (elapsed request time in ns_time format), and 'modified'.
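The returned dict can be consumed with plain 'dict' commands, for example (a sketch; the URL is illustrative):

```tcl
set r [util::http::curl::request -url "https://example.com/"]

if {[dict get $r status] == 200} {
    set html [dict get $r page]
    # response headers come back as a flat key/value list
    foreach {key value} [dict get $r headers] {
        ns_log notice "header: $key = $value"
    }
}
```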
- Partial Call Graph (max 5 caller/called nodes):
- Testcases:
- No testcase defined.
Source code:

set this_proc [lindex [info level 0] 0]

if {![regexp "^(https|http)://*" $url]} {
    return -code error "${this_proc}: Invalid url: $url"
}

if {$headers eq ""} {
    set headers [ns_set create headers]
}

# Determine whether we want to gzip the request.
# Default is no, can't know whether the server accepts it.
# We could at the HTTP API level (TODO?)
set req_content_encoding [ns_set iget $headers "content-encoding"]
if {$req_content_encoding ne ""} {
    set gzip_request_p [string match "*gzip*" $req_content_encoding]
} elseif {$gzip_request_p} {
    ns_set put $headers "Content-Encoding" "gzip"
}

# Curl accepts gzip by default, so if a gzip response is not required
# we have to ask explicitly for a plain text encoding
set req_accept_encoding [ns_set iget $headers "accept-encoding"]
if {$req_accept_encoding ne ""} {
    set gzip_response_p [string match "*gzip*" $req_accept_encoding]
} elseif {!$gzip_response_p} {
    ns_set put $headers "Accept-Encoding" "utf-8"
}

# zlib is mandatory when compressing the input
if {$gzip_request_p} {
    if {[namespace which zlib] eq ""} {
        return -code error "${this_proc}: zlib support not enabled"
    }
}

## Encoding of the request

# Any conversion or encoding of the payload should happen only at
# the first request and not on redirects
if {$depth == 0} {
    set content_type [ns_set iget $headers "content-type"]
    if {$content_type eq ""} {
        set content_type "text/plain; charset=[ns_config ns/parameters OutputCharset iso-8859-1]"
    }

    set enc [util::http::get_channel_settings $content_type]
    if {$enc ne "binary"} {
        set body [encoding convertto $enc $body]
    }

    if {$gzip_request_p} {
        set body [zlib gzip $body]
    }
}

## Issuing of the request

set cmd [list exec [::util::which curl] -s -k]

if {$spool_p} {
    set spool_file [ad_tmpnam]
    lappend cmd -o $spool_file
} else {
    set spool_file ""
}

if {$timeout ne ""} {
    lappend cmd --connect-timeout [timeout $timeout]
}

# Antonio Pisano 2015-09-28: curl can follow redirects
# out of the box, but its behavior is to throw an error
# when maximum depth has been reached. I want it to
# return even a 3** page without complaining.
#
# Set redirection up to max_depth
# if {$max_depth ne ""} {
#     lappend cmd -L --max-redirs $max_depth
# }

if {$method eq "GET"} {
    lappend cmd -G
}

# Files to be sent natively by curl by the -F option
foreach f $files {
    if {[llength $f] != 2} {
        return -code error "${this_proc}: invalid -files parameter: $files"
    }
    set f [join $f "=@"]
    lappend cmd -F $f
}

# If required, we'll follow POST request redirections by GET
if {!$post_redirect_p} {
    lappend cmd --post301 --post302
    if {[apm_version_names_compare [version] "7.26"] >= 0} {
        lappend cmd --post303
    }
}

# Curl can decompress response transparently
if {$gzip_response_p} {
    lappend cmd --compressed
}

# Unfortunately, as we are interacting with a shell, there is no
# way to escape content easily and safely. Even when body is
# passed as a Tcl variable, we just write its content to a file
# and let it be read by curl.
set create_body_file_p [expr {$body_file eq ""}]
if {$create_body_file_p} {
    set wfd [ad_opentmpfile body_file http-spool]
    fconfigure $wfd -translation binary
    puts -nonewline $wfd $body
    close $wfd
}
lappend cmd --data-binary "@${body_file}"

# Return response code together with webpage
lappend cmd -w " %\{http_code\}"

# Add headers to the command line
foreach {key value} [ns_set array $headers] {
    if {$value eq ""} {
        set value ";"
    } else {
        set value ": $value"
    }
    set header "${key}${value}"
    lappend cmd -H "$header"
}

# Dump response headers into a tempfile to get them
set resp_headers_tmpfile [ad_tmpnam]
lappend cmd -D $resp_headers_tmpfile

lappend cmd $url

#ns_log notice "running CURL cmd\n$cmd"
set start_time [ns_time get]
set response [{*}$cmd]
set end_time [ns_time get]

# elapsed time
set time [ns_time diff $end_time $start_time]

# Parse headers from dump file
set resp_headers [ns_set create resp_headers]
set rfd [open $resp_headers_tmpfile r]
while {[gets $rfd line] >= 0} {
    set line [split $line ":"]
    set key [lindex $line 0]
    set value [join [lrange $line 1 end] ":"]
    ns_set put $resp_headers $key [string trim $value]
}
close $rfd

# Get values from response headers, then remove them
set content_type [ns_set iget $resp_headers content-type]
set last_modified [ns_set iget $resp_headers last-modified]
set location [ns_set iget $resp_headers location]

# Move in a list to be returned to the caller
set r_headers [ns_set array $resp_headers]
ns_set free $resp_headers

set status [string range $response end-2 end]
set page [string range $response 0 end-4]

# Redirection handling
if {$depth < $max_depth} {
    incr depth
    set redirection [util::http::follow_redirects \
                         -url $url \
                         -method $method \
                         -status $status \
                         -location $location \
                         -body $body \
                         -body_file $body_file \
                         -delete_body_file=$delete_body_file_p \
                         -headers $headers \
                         -timeout $timeout \
                         -depth $depth \
                         -max_depth $max_depth \
                         -force_ssl=$force_ssl_p \
                         -gzip_request=$gzip_request_p \
                         -gzip_response=$gzip_response_p \
                         -post_redirect=$post_redirect_p \
                         -spool=$spool_p \
                         -preference "curl"]
    if {$redirection ne ""} {
        return $redirection
    }
}

if {$spool_file ne ""} {
    set page "${this_proc}: response spooled to '$spool_file'"
}

# Translate into proper encoding
set enc [util::http::get_channel_settings $content_type]
if {$enc ni [list "binary" [encoding system]]} {
    set page [encoding convertfrom $enc $page]
}

# Delete temp files
file delete -- $resp_headers_tmpfile
if {$create_body_file_p || $delete_body_file_p} {
    file delete -force -- $body_file
}

return [list headers $r_headers page $page file $spool_file status $status time $time modified $last_modified]

XQL Not present: Generic, PostgreSQL, Oracle