newLISP now on Graham's list

Started by HPW, September 20, 2004, 06:52:40 AM

Previous topic - Next topic

Qrczak

#30
Quote from: "newdep"As I say......"A programming language is a mirror of the creator"...



The source code is included...bring me the ultimate or bring me Kogut!

I'm not sure if I understand you. Anyway, I'm afraid this forum is not appropriate for talking about other languages. I am happy to discuss Kogut, but it's off-topic here.



I only wanted to disagree with the claim that 32 bits should be enough for everybody, even though I agree that they are enough in 99% of cases. But the programmer should not have to stand on their head and interface to low-level C libraries in order to obtain correct results from basic arithmetic on large integers.



BTW, interfacing to GMP from newLisp would be very hard, unless I have misunderstood how interfacing to C is generally done in newLisp. Besides the memory management issue, and besides the problem that newLisp wants to represent every foreign object as an opaque integer corresponding to its physical address in memory: many GMP functions are in fact inline functions or macros, so you can't dlsym() them; and their internal names changed once during GMP development (the official names are actually #defined aliases).
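
To make the dlsym() point concrete, here is a rough C sketch (the shared library name "libgmp.so" is an assumption and varies by platform; compile with -ldl). In gmp.h the documented name mpz_add is just a #define for the internal symbol __gmpz_add, so a run-time lookup of the documented name finds nothing:

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* Library name is an assumption; it may be e.g. libgmp.so.10. */
        void *gmp = dlopen("libgmp.so", RTLD_LAZY);
        if (!gmp) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* The documented name is only a #define, not an exported symbol... */
        printf("mpz_add:    %p\n", dlsym(gmp, "mpz_add"));     /* NULL */

        /* ...only the internal, undocumented name is actually exported. */
        printf("__gmpz_add: %p\n", dlsym(gmp, "__gmpz_add"));  /* non-NULL */

        dlclose(gmp);
        return 0;
    }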



It's also suboptimal to perform all computation on bignums. It's better to switch to bignums automatically when a result no longer fits in a machine word; in other words, to allocate the type (fixnum or bignum) dynamically, not statically. You can't even implement custom operators which do this without wrapping bignums somehow, because they are indistinguishable from seemingly unrelated small integers (their addresses)!
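
Here is a minimal C sketch of what allocating the type dynamically means (all names are hypothetical, and the bignum allocation is elided): a result stays an immediate fixnum while it fits in a machine word, and is promoted to a heap-allocated bignum only when the addition would overflow:

    #include <limits.h>
    #include <stddef.h>

    typedef enum { FIXNUM, BIGNUM } tag_t;

    typedef struct {
        tag_t tag;
        union {
            long  fix;   /* immediate machine integer        */
            void *big;   /* pointer to heap-allocated digits */
        } u;
    } number;

    /* Add two fixnums; promote to a bignum only if the sum would overflow. */
    number add(long a, long b)
    {
        number r;
        if ((b > 0 && a > LONG_MAX - b) || (b < 0 && a < LONG_MIN - b)) {
            r.tag = BIGNUM;   /* result no longer fits in a machine word */
            r.u.big = NULL;   /* a real runtime would allocate here, e.g. via GMP */
        } else {
            r.tag = FIXNUM;   /* result still fits: stay immediate */
            r.u.fix = a + b;
        }
        return r;
    }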



Bignums are just one small issue among many. The whole object representation scheme is much more serious: everything is copied by value, and it is the object itself which contains the link to the next element of the (only) list it is on. This is deeply hardwired into newLisp's paradigm, so it is unlikely to change. Tcl is the only other language I know which forbids sharing of objects. For me it's a braindead idea. It leads to emulating object references with opaque identifiers which are implicitly associated with globally visible data under explicit memory management (names of global variables in Tcl; contexts or physical addresses of C objects in newLisp). This undermines the claim of automatic memory management.



It should not be called newLisp if it does not have cons cells. It's not a Lisp; it only resembles one syntactically. And I consider syntax the worst aspect of Lisp; yes, I know about macros.



BTW, the 10GB file was hypothetical, but I really have a 21GB file here: a backup of my home directory, kept on another drive (as .tar.bz2). And I'm currently downloading an MPEG of 2,114,682,880 bytes (50 hours with my Internet connection); this fits in 31 bits, but is very close to overflowing the signed 32-bit limit of 2,147,483,648.



BTW^2, an unrelated bug: "newlisp -e" (without a further parameter) segfaults.

newdep

#31
Qrczak, you sound like someone who has know-how about the theory of languages, and perhaps practice too... still, I don't understand the complaint you seem to have about newlisp... okay, okay, nothing is perfect; perhaps the statements in the language overview should be changed?



Perhaps it is just that you would have liked to see newlisp with those features you are missing? I think Lutz could use some help on those..



Anyway, I don't like talking about programming languages much; I use them, and practice proves whether they're effective or not.. Still, if I'm missing something I build it; if I can't, I ask for it; if it's impossible, I'll wait or switch..



Somehow I think you are more into talking about efficiency in a language, but if all languages were bullet-proof it would be very boring too?



I don't do much with bignums, I don't need them, but if I needed them I would switch to a different language, like I do from Spanish to French to German to Dutch to English when it fits...



Well, I'll take my finger off this topic; I dropped C 15 years ago already ;-)



Hope you can give newlisp some spirit... or perhaps try, e.g., Rebol... :-)



Norman.
-- (define? (Cornflakes))

Lutz

#32
Who cares about 'cons' cells if you can solve programming problems without them? One less concept to learn when learning the language.



Who cares about 'garbage collection' if you can do automatic memory management in a different, more efficient way?



A programming language is not validated by its underlying theoretical concepts but by:



(1) ease of use in expressing a programming solution to a problem



(2) execution speed and efficiency of memory usage



Number (1) is partly a question of personal preference for syntax and also depends on the API/function repertoire offered. The LISP syntax has a great aesthetic value to it, which some see and some don't. It is also easy to learn and easy for other programs to process.



For (2), go to http://newlisp.org/benchmarks/ for speed comparisons, or go to http://shootout.alioth.debian.org/, select the 'scorecard' tab, and recalculate setting the "Memory score multiplier" to 1. You will see that newLISP comes out ahead of most other languages, including some compiled ones.



Lutz

pjot

#33
I agree with Lutz. I've been programming for 20 years now, and I've used many languages, from assembly to C++ to Prolog. After all these years I am happy to have found this powerful language newLisp, with which I can do virtually everything on any platform, in a fast and efficient way.



Peter

jsmall

#34
And I can write (cons 1 '(2 3)) and get (1 2 3).

This looks okay to me.

Qrczak

#35
I'm indeed more a theoretician than a practitioner, although I aim to create a practical product this time. Kogut is 10 times younger than newLisp, but it already works well and hosts its own compiler.



Regarding newLisp, I'm afraid it is (from my point of view) broken beyond hope of repair. It could be improved in minor ways, e.g. by distinguishing foreign objects from integers and allowing finalizers to be attached to them, but it's too little, too late.



Perhaps "a sane language" is incompatible with "tiny runtime". newLisp is a bit like a syntactic sugar over C: it inherits C problems of lack of safety against segfaults, poor type system which does't generally distinguish values with similar representation, and providing only features which easily map into hardware. It lacks garbage collection, function closures, exceptions, threads, continuations, generic functions, bignums (or even checked overflow on fixnums), rationals, sensible hash tables (contexts with explicitly managed lifetime and only strings as keys don't count), sensible objects with fields (not bound to global variables).



The main criterion for having or not having a feature seems to be "is it trivial to implement in C?". Many functions are thin wrappers around C library or system calls, inheriting even their style of error handling (remember to check for nil as the result, then obtain a cryptic error code for the last failed operation with a separate function call). A "tiny runtime" is good in general, but here it comes at the cost of everything else.



Syntax from Lisp, object model from Tcl, error handling from Perl, safety from C - great combination...

pjot

#36
Quote
Syntax from Lisp, object model from Tcl, error handling from Perl, safety from C - great combination...


It is a great combination, isn't it? :-)

Qrczak

#37
Quote from: "Lutz"Who cares about 'cons' cells if you can solve programming problems without it. One concept less to learn, when learning the language.



Who cares about 'garbage collection' if you can to automatic memory management a different more efficient way.

What is the largest application written in newLisp? A wiki with 1333 lines of code?



Garbage collection doesn't matter if you can keep the whole program state in a few dozen global variables. Data structures don't matter if the program only works with numbers, strings, and lists of strings. The lack of object references doesn't matter if all the data the program operates on is so simple.



Does it really use only global variables?! Argh! Yes, for very small programs that's often sufficient. But this style doesn't scale. Perl used to be like this 10 years ago, and that fortunately changed with Perl 5.



But programming doesn't consist only of toy problems which can be solved in a thousand lines.


Quote from: "Lutz"For (2) goto http://newlisp.org/benchmarks/">http://newlisp.org/benchmarks/ for speed comparisons or go to http://shootout.alioth.debian.org/">http://shootout.alioth.debian.org/ and select the 'scorecard' tab then recalculate setting the "Memory scrore multiplier" to 1. You will see that newLISP comes out on top before most other languages, including some compiled ones.

Well, if I don't set the memory multiplier to 1, newLisp comes out slower than Java, Python, OCaml bytecode, and Perl. And it uses little memory because it provides few constructs to the programmer, which doesn't matter for toy benchmarks.



For "List Processing" newLisp cheats, it does not implement the spec: it always removes and prepends the element at the beginning instead of at the end where the spec requires. Look at the Bigloo implementation to see what was intended.



(Never mind that I don't like most Shootout benchmarks at all. They are too small, and they too often require "the same way" instead of "the same thing", which yields unnatural code for many languages.)



And I think it's impossible to implement this in O(N) time in newLisp. It will need O(N^2) for all non-trivial recursive list processing functions, except those which only change a global list at its beginning, without ever passing it as an argument, and those which are predefined; luckily, only such functions were required in the wrong implementation of this task.
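
To illustrate the cost model (this is a generic C sketch of the argument, not newLisp's actual source): when a list cannot share its tail, appending one element means copying the whole spine, so each append is O(N) and building a list of N elements by repeated appends is O(N^2):

    #include <stdlib.h>

    typedef struct cell { int value; struct cell *next; } cell;

    /* Append without sharing: the entire spine is duplicated on every call,
       so one append costs O(N), and N appends cost O(N^2) in total. */
    cell *append_copy(const cell *list, int value)
    {
        cell *head = NULL, **tail = &head;
        for (; list; list = list->next) {      /* O(N) copy of the old list */
            cell *c = malloc(sizeof *c);
            c->value = list->value;
            c->next = NULL;
            *tail = c;
            tail = &c->next;
        }
        cell *last = malloc(sizeof *last);     /* the newly appended element */
        last->value = value;
        last->next = NULL;
        *tail = last;
        return head;
    }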



How well would it perform if it implemented the shootout honestly? Could I see just the "List Processing" benchmark fixed?



Other "benchmarks", even if implemented honestly, often don't tell the whole truth either. For example matrix multiplication. It would seem that newLisp is well suited for matrix processing. Never mind that you can't add two matrices in O(N^2), because unlike matrix multiplication it's not a builtin, and implementing it in newLisp would probably need O(N^3) time. Never mind that the whole matrix is physically copied when passing as a function argument - but a toy benchmark operates on a single global matrix and will not reflect that.

Lutz

#38
Just what is your obsession? If you dislike newLISP so much, why are you so concerned about it, reading the source, counting lines in newlisp-wiki, looking for things to complain about? Is it envy?



Relax, loosen up your mind, write some LISP code; those little parentheses can make you a happier person :-), enjoy life ...



Lutz

Qrczak

#39
Actually, I started criticizing because of the name "newLisp". If it were some random language, I would not have noticed, but it tries to paint itself as a successor of Lisp, while in my eyes it does not qualify as a Lisp at all, and it uses ideas abandoned years ago.



Anyway, the problem with the unfair benchmark is objective. I'm assuming it was a mistake. Could somebody please fix the list processing entry to actually conform to the spec, and add/remove elements at the right end where appropriate? The list doesn't have to be represented by a builtin list type (and in fact several languages use a custom implementation, e.g. because their primary list type is immutable). I'm curious whether my supposition about newLisp's inability to solve this in O(N) time is true.

newdep

#40
I believe you are using telepathy to transport your words to your keyboard?





An expression like "and uses ideas abandoned years ago" is not very smart...
-- (define? (Cornflakes))

Qrczak

#41
Quote from: "newdep"I believe you are using telepathy to transport your words to your keyboard?



An expression like "and uses ideas abandoned years ago" is not very smart...

So, is it possible to implement the List Processing shootout entry correctly and efficiently in newLisp, so the comparison can be fair? I'm not going to leave it this way.

rickyboy

#42
Lutz,



I completely understand your exasperation with the Qrczak postings. His motivations do not seem genuinely constructive, but more along the lines of having an axe to grind (or maybe an axe to swing in this case). A read of the Kogut FAQ (http://kokogut.sourceforge.net/faq.html) may reveal something about the psyche of our buddy. Here are a couple of quotes from the FAQ, followed by my gratuitous comments ('cause I felt like it).


Quote
To many programmers, including the Kogut author, sexprs are also aesthetically displeasing because they lead to very deep nesting of parentheses and many levels of indentation.

What a lame argument.  And unoriginal, to boot.


Quote
Marcin 'Qrczak' Kowalczyk is the creator of Kogut. So far, he's written all of the code by himself, which is impressive considering the sheer volume of it.

He, he, he. I love the self-aggrandizement here. You better watch out, you newLISP folks -- you are dealing with an "impressive" genius here. ;-)



--Ricky
(λx. x x) (λx. x x)

HPW

#43
Ongoing discussions on comp.lang.lisp:



http://groups.google.de/group/comp.lang.lisp/browse_frm/thread/0b92742832c979d3/55c561c642d1f5da?hl=de#55c561c642d1f5da
Hans-Peter

eddier

#44
I really admire Lutz and I think he is genuinely a nice guy, but Qrczak points out two problems with newlisp that matter for my current work: (1) Copying the whole data structure on each function invocation makes my recursive algorithms very memory-inefficient (most of the time worse than O(N^2)), since the structures I'm passing are usually more complicated than lists. When I run these recursive functions in mzscheme, for example, they run much faster (although I'm sure tail call optimization contributes to the speedup as well). (2) Using the context reference mechanism for passing arguments doesn't solve the memory problem, because there is no way to free the unused memory.



It is not my intention to disrespect Lutz, however; I still use newlisp for small one-shot programs.



Eddie