Hi,
I love get-url and what it makes available for coding!
It seems to work on some sites but not on others.
I have never had any issue before with using get-url for http calls.
For example, running from both my nearlyfreespeech.net server and my laptop, it hangs on certain sites:
(get-url "http://newlisp.org") -- works!
(get-url "http://nearlyfreespeech.net") -- doesnt't work!
(get-url "http://amazon.com") -- doesn't work!
(get-url "http://google.com") -- works!
When it doesn't work, it just hangs.
I am guessing those servers are blocking get-url requests?
On my nearlyfreespeech server, the logs say:
[Fri Sep 11 05:52:46.179236 2020] [cgi:warn] [pid 6060:tid 34480497408] [client 174.16.51.59:0] AH01220: Timeout waiting for output from CGI script /fs6b/abc/public/go.cgi
[Fri Sep 11 05:52:46.179516 2020] [core:error] [pid 6060:tid 34480497408] (70007)The timeout specified has expired: [client 174.16.51.59:0] AH00574: ap_content_length_filter: apr_bucket_read() failed
newLISP v.10.7.1 32-bit on Linux IPv4/6 UTF-8 libffi
newLISP v.10.7.5 64-bit on BSD IPv4/6 UTF-8
Thank you much for any quick tips on this! :D
At a guess, the websites that don't work are probably doing a redirect to the https (SSL/TLS) version of the page, and get-url doesn't do SSL/TLS at this time. To make it work, newLISP would have to drag in an entire SSL library, which could increase the size of the binary considerably. Gambit Scheme includes SSL support in its binary, but that binary is bigger than newLISP's.
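If you want to test that theory without the script hanging indefinitely, get-url also takes a string option and a timeout in milliseconds; as far as I can tell from the manual, asking for just the headers with a short timeout should either show a Location: header pointing at the https version or give up after a few seconds (the 5000 ms below is an arbitrary value):

; fetch only the response headers, give up after 5 seconds
(get-url "http://amazon.com" "header" 5000)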
Yes, some sites redirect from http to https. On Mac, Linux, and other UNIX-like systems you could use:
(exec "curl 'https://www.amazon.com'")
Not sure if the curl utility is also available on Windows.
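If you end up calling it in more than one place, you could wrap the exec call in a small helper; just a rough sketch, assuming curl is on the PATH (the name https-get is made up here, and -s only silences curl's progress output):

; hypothetical helper: fetch an https URL through curl
; and return the page body as one string
(define (https-get url)
    (join (exec (format "curl -s '%s'" url)) "\n"))

(https-get "https://www.amazon.com")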
Or you could use the library module from here: http://www.newlisp.org/code/curl.lsp
This module was contributed by a user a long time ago; I have never used it myself.
I've been updating the curl module; I guess I'll have to send it in soon. I got it working to the point where it could log in to some forums and do backups. That forum is long gone, so I'll have to double-check whether there is any bit rot.
Ok thanks Lutz and TedWalther!
I got json-parse to work just great with Lutz's suggestion to use exec and curl:
(json-parse (join (exec "curl 'https://www.mysite.com/my.json'")))
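(exec returns the command's output as a list of lines, so the join is there to glue them back into a single string before json-parse sees it.)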
Much appreciated to both! :o)