Multi-threaded newLISP webserver with net-eval

Started by hilti, September 01, 2011, 09:02:51 AM


hilti

Hi Guys!



Is it possible to set up a multi-threaded webserver with net-eval? I've studied the "mapreduce example", where a master node splits up a task across several worker nodes using net-eval.



But I can't figure out where to start when it comes to splitting up HTTP requests.



Your help is needed, please.



All the best

Hilti
--()o Dragonfly web framework for newLISP

http://dragonfly.apptruck.de

Lutz

#1
You could start the first server using 'command-event'. The parameter passed to the 'command-event' handler function is the HTTP request.
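As a rough illustration (a hypothetical sketch, not an official example — the file name, log path, and flags are assumptions; the handler receives the raw request line and returns it, possibly rewritten):

```
#!/usr/bin/newlisp
; httpd-conf.lsp -- hypothetical request filter for newLISP server mode
; start with e.g.: newlisp httpd-conf.lsp -c -d 8080 -w /var/www

(command-event (fn (request)
    ; log each incoming request line, then pass it through unchanged;
    ; returning a different string would rewrite the request
    (append-file "/tmp/requests.log" (string request "\n"))
    request))
```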



Perhaps the easiest would be to run newLISP under the inetd utility on a UNIX box:



http://www.newlisp.org/downloads/newlisp_manual.html#inetd_daemon



This way a request automatically starts a newLISP daemon process when required. newLISP's low memory requirements and quick start-up make this an efficient method.
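A sketch of the relevant entries (the service name, port, user, and flags here are assumptions; the inetd_daemon section of the manual has the authoritative setup):

```
# /etc/services -- register a port for the newLISP service
newlisp-http    4711/tcp

# /etc/inetd.conf -- spawn one newlisp process per incoming connection
# fields: service  sock-type  proto  wait-flag  user  server  argv...
newlisp-http  stream  tcp  nowait  nobody  /usr/bin/newlisp  newlisp -c -w /var/www
```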



You could also use something like Squid or a similar proxy server to route requests to different newLISP server processes already running.

hilti

#2
Hi Lutz!



Thanks for your suggestions. I tried the inetd approach. Here are some results for a very simple CGI script under a "siege" stress test.



CGI "index.cgi"

#!/usr/bin/newlisp
 
(print "Content-type: text/htmlrnrn")
(println "<h2>Hello World</h2>")
(exit)


I started the test with 100 concurrent users over 10 seconds.
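For reference, a siege run like that would be invoked along these lines (the URL and document root are assumptions based on the ports below):

```shell
# 100 concurrent users, for 10 seconds, against the inetd-served CGI
siege -c 100 -t 10S http://localhost:4711/index.cgi
```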



Result inetd on localhost:4711

Lifting the server siege...      done.
Transactions:         723 hits
Availability:      100.00 %
Elapsed time:        9.03 secs
Data transferred:        0.01 MB
Response time:        1.16 secs
Transaction rate:       80.07 trans/sec
Throughput:        0.00 MB/sec
Concurrency:       93.05
Successful transactions:         723
Failed transactions:           0
Longest transaction:        1.28
Shortest transaction:        0.04


Result newLISP webserver on localhost:8080



Lifting the server siege...      done.
Transactions:         631 hits
Availability:      100.00 %
Elapsed time:        9.05 secs
Data transferred:        0.01 MB
Response time:        1.32 secs
Transaction rate:       69.72 trans/sec
Throughput:        0.00 MB/sec
Concurrency:       91.97
Successful transactions:         631
Failed transactions:           0
Longest transaction:        1.65
Shortest transaction:        0.02


Result local OSX webserver with PHP printing out "hello world"



Transactions:       13749 hits
Availability:      100.00 %
Elapsed time:        9.10 secs
Data transferred:        0.15 MB
Response time:        0.05 secs
Transaction rate:     1510.88 trans/sec
Throughput:        0.02 MB/sec
Concurrency:       70.24
Successful transactions:       13846
Failed transactions:           0
Longest transaction:        0.33
Shortest transaction:        0.00


My goal is to outperform PHP. Maybe I have to develop a "mod_newlisp" module like "mod_php5" or something.



-Marc
--()o Dragonfly web framework for newLISP

http://dragonfly.apptruck.de

Lutz

#3
It's interesting to see that there is not much difference between running the newLISP server on its own and running it via inetd. But that is because of the small CGI test page. If you had a page with a longer processing time (e.g. 2 secs each), the inetd approach would show a much bigger advantage, because several newLISP server processes would be running and accepting connections at the same time.
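That effect could be made visible with a deliberately slow test page, something like this hypothetical sketch (newLISP's sleep takes milliseconds):

```
#!/usr/bin/newlisp
; slow.cgi -- hypothetical CGI page simulating 2 seconds of processing
(sleep 2000) ; pretend to do heavy work for 2000 ms
(print "Content-type: text/html\r\n\r\n")
(println "<h2>done after 2 seconds</h2>")
(exit)
```

Under inetd, many of these requests would be handled by parallel processes, while a single standalone server would serialize them.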



The local Apache server on OSX with mod_php will always be much faster than the newLISP server doing CGI, because newLISP loads an extra newLISP process for each CGI request. But a mod_newlisp for Apache should be easy to build.



In the end it will be difficult for the newLISP HTTP server to outperform a web server written specifically for that purpose, such as Apache. newLISP's HTTP mode was added as a quick and handy web server for testing, low-volume sites, and use in embedded systems.