"This test is ridiculous..."

Started by xytroxon, July 15, 2013, 03:16:18 PM


xytroxon

Found this interesting programming language comparison being disgusted, er... "discussed" on the Lua newsgroup.



Perl, Python, Ruby, PHP, C, C++, Lua, tcl, javascript and Java comparison

http://raid6.com.au/~onlyjob/posts/arena/



This comparison consists of three parts:

Part 1: Speed.

Part 2: Memory usage.

Part 3: Language features.



Roberto Ierusalimschy, the co-creator of "slowpoke" Lua, was not amused by the results...


Quote
From: Roberto Ierusalimschy <roberto <at> inf.puc-rio.br>

Subject: Re: Perl, Python, Ruby, PHP, C, C++, Lua, tcl, javascript and Java comparison

Newsgroups: gmane.comp.lang.lua.general

Date: 2013-07-15 17:24:28 GMT (4 hours and 17 minutes ago)



> Maybe this should be of interest:

>

> http://raid6.com.au/~onlyjob/posts/arena/



This test is ridiculous. Whoever wrote the code does not know how

to program in Lua (or in Java or in Javascript).



-- Roberto


I would like to hear Lutz's expert opinion on the test code methods.



-- xytroxon
"Many computers can print only capital letters, so we shall not use lowercase letters."

-- Let's Talk Lisp (c) 1976

Lutz

#1
Benchmarking is a complicated business. These people here http://benchmarksgame.alioth.debian.org/ really understand this.



What the people here http://raid6.com.au/~onlyjob/posts/arena/ do well is compare one specific algorithm across different languages. They measure not only speed but also problem size and memory usage.



But their benchmark doesn't tell us much about the capabilities of the different languages in general. Different kinds of problems have different kinds of solutions in different programming languages. The alioth.debian.org people write about this somewhere on their site: for some benchmarks they allow language-specific solutions, and for others they ask that the solution stay as close as possible to the algorithm they prescribe.



When I did these: http://www.newlisp.org/benchmarks/ , I remember that just moving from one platform or OS to another could make the results look totally different. Even on the same hardware, e.g. testing on OS X versus Windows XP versus Linux on the same machine, results can vary dramatically.



They should do the same analysis on a bigger variety of problems, and not only compare the same algorithm but also allow language-specific solutions to the same problems. They should also investigate different platforms and OSs. Of course, that is a lot of work.
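The run-to-run variance Lutz describes is easy to demonstrate. Below is a minimal sketch (not from the original thread, and not the benchmark authors' code) that times one toy workload several times on the same machine; the spread between repeats shows why a single timing, on a single platform, says little on its own.

```python
# Minimal sketch: repeated timings of the same workload illustrate why
# a single benchmark run is unreliable. The workload here is a stand-in,
# not any task from the raid6.com.au benchmark suite.
import timeit

def workload():
    # Toy string-building task standing in for a real benchmark body.
    parts = []
    for i in range(10_000):
        parts.append(str(i))
    return "".join(parts)

# Run the workload in 5 repeats of 50 iterations each. The spread between
# repeats is often large relative to the minimum, which is why serious
# benchmarks report repeat counts and min/median rather than one number.
times = timeit.repeat(workload, number=50, repeat=5)
print(f"min={min(times):.4f}s max={max(times):.4f}s "
      f"spread={max(times) - min(times):.4f}s")
```

Running this on different hardware, OSs, or even under different background load typically shifts both the minimum and the spread, which is Lutz's point about cross-platform comparisons.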

xytroxon

#2
Thank you Lutz.



While it looks impressive, with source code, tables, and graphs, this line gave me pause:


Quote
Unexpected result: somehow Perl5 managed to finish faster than C. This came as unforeseen surprise which I found difficult to explain.


Or to believe... (Unless you are a Perl fanboy writing questionably valid tests in other languages ;)



-- xytroxon
"Many computers can print only capital letters, so we shall not use lowercase letters."

-- Let's Talk Lisp (c) 1976