This is ridiculous

Started by Qrczak, October 01, 2004, 03:54:57 AM

Previous topic - Next topic

Qrczak

The name "newLISP" would suggest that it builds on old Lisp experience, but in fact it ignores Lisp history and uses rules which have been tried and rejected as bad ideas, like dynamic scoping by default.



"Excepting symbols and built-in primitives, each object is referenced only once in the system and is always passed by value." - This implies that a recursive function which conses an element onto a growing list on each iteration, or even one that just descends a list by iterating 'rest', has O(N^2) complexity - or worse if the elements themselves are big.
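
The cost argument can be sketched directly (in Python, purely illustrative - `build_by_copying` is a hypothetical model, not newLISP): if every "cons" copies the whole accumulated list, building N elements touches about N^2/2 cells in total.

```python
# Illustrative model: a "cons" that copies the whole list each time,
# as strict pass-by-value implies. We count the cells copied.

def build_by_copying(n):
    copies = 0
    acc = []
    for i in range(n):
        acc = list(acc) + [i]   # full copy on every step
        copies += len(acc)      # cells touched this step
    return acc, copies

lst, copies = build_by_copying(100)
# copies == 5050 == 100*101/2, i.e. quadratic growth
```

With reference passing, the same loop would touch one new cell per step: O(N) total.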



This also makes it impractical to create graphs of mutable objects: you can't refer to a part of one for further modification, because it will be implicitly copied. IMHO this fact alone makes it not a Lisp at all. Every Lisp, and in fact most modern languages, pass objects by reference and collect unreachable garbage. In every Lisp you can look at the rest/cdr/tail of a list without copying it.
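
The aliasing behavior being described - holding a reference to part of a structure and mutating it in place - looks like this in any reference-passing language (Python here, purely as an illustration):

```python
# In a reference-passing language, taking a part of a structure makes
# no copy, and mutation through the alias is visible in the original.

original = [1, [2, 3], 4]
part = original[1]      # a reference to the nested list, not a copy
part[0] = 99            # mutate through the alias
# original is now [1, [99, 3], 4]
```

Under strict pass-by-value, `part` would be an independent copy and the original would be unchanged, which is exactly why shared mutable graphs can't be built.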



"Many of newLISP's built-in functions are polymorphic in type, accepting a variety - and sometimes all - data types and thereby greatly reducing the total number of different functions and syntactic forms necessary to be learned and implemented." - Except arithmetic, where it would make the most sense. Not only are the defaults limited, but there are no arithmetic operators which do the right thing (return an exact answer when possible).



No wonder correctly behaving arithmetic is not provided: having only fixed-range integers (without even checking for overflow) and floats is pathetic. This set of types can't exactly represent 1/10, nor the size of my hard drive in bytes. Is it Lisp or C?
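
The "exact answer when possible" behavior being asked for is what rational arithmetic provides; a quick illustration in Python:

```python
# 1/10 is exactly representable as a rational, but not as a binary float.
from fractions import Fraction

tenth = Fraction(1, 10)
assert sum([tenth] * 10) == 1      # exact: ten tenths make one
assert sum([0.1] * 10) != 1.0      # floats accumulate rounding error
```

Schemes and Common Lisps return exact rationals and bignums from the ordinary arithmetic operators whenever the inputs are exact, which is the behavior the post says is missing.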



A cons which does a completely different thing depending on whether the second argument is a list is stupid. If there are improper lists, fine - make it an error if it's not a list. But making a two-element list *except* when the second argument was a list makes absolutely no sense, because if you rely on cons making a two-element list, you will be badly surprised when the second argument happens to be a list. There is no context where you really want either behavior interchangeably; in every case you statically know which one you want, so it makes no sense to put both in one function. I bet the two-element behavior of cons is not used in practice, because it's not robust.
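
The fragility can be sketched with a hypothetical model of the documented behavior (Python, illustrative only - this `cons` models the description above, it is not newLISP itself):

```python
# Hypothetical model of the described cons: a two-element list normally,
# but a prepend when the second argument is already a list.
def cons(a, b):
    if isinstance(b, list):
        return [a] + b        # prepend to the existing list
    return [a, b]             # otherwise build a two-element list

assert cons(1, 2) == [1, 2]
# Code relying on the two-element behavior breaks as soon as b is a list:
assert cons(1, [2]) == [1, 2]   # not the pair [1, [2]]
```

Whether you get a pair or a longer list depends on a runtime type, which is exactly why the two-element behavior can't be relied on.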



A hash table is associated with a global variable. What?! I can't make a list of hash tables without inventing unique names for them? This applies to objects as well. Is it Lisp or Tcl?



The implementation of hash tables is interesting:

> (load "modules/hash.lsp")
true
> (HASH:make 'h)
h
> (h:put "(" 'open)
open
> (h:get ")")
open

...Huh?!



"Unless otherwise specified for a function, an index greater than N-1 returns the last element, and an index less than -N returns the first when in lists, but out of bounds indices in arrays will cause an error message." - Long live consistency. I see no purpose in returning the last element of a list when the index is too large - it will just mask programming errors.
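
Modeling the quoted clamping rule shows how it hides off-by-one bugs (Python sketch of the documented list behavior, not newLISP itself):

```python
# Model of the documented list indexing: out-of-range indices silently
# clamp to the last or first element instead of raising an error.
def nth(lst, i):
    if i >= len(lst):
        return lst[-1]          # too large: return last element
    if i < -len(lst):
        return lst[0]           # too negative: return first element
    return lst[i]

xs = [10, 20, 30]
assert nth(xs, 99) == 30    # an off-by-one bug here raises no error
assert nth(xs, -99) == 10
```

With bounds checking, `nth(xs, 99)` would fail loudly at the site of the bug instead of quietly returning a plausible-looking value.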



"constant: Works exactly like set but protects the symbol from subsequent modification. A symbol set with constant can only be modified again using constant." - Funny, a constant which is not constant.



"If num is supplied, dec always returns the result as floating point even for integer input arguments." - Why?



"If the expression in exp evaluates to a string, it will be replicated int-n times in a string and returned. For any other type in exp a list of int-n of the evaluation of exp is returned." - How do you make a list with N references to a given string (sorry, N copies of the string)? Again an inconsistent behavior which doesn't make sense - if you want to make a list using dup, make sure the element is *not* a string, because you will get a different thing in that case.
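
As with cons, the documented dup behavior can be modeled to show the gap (Python, a hypothetical model of the description, not newLISP):

```python
# Hypothetical model of the documented dup: strings are replicated
# into one longer string; every other type yields a list of n copies.
def dup(x, n):
    if isinstance(x, str):
        return x * n           # concatenated into a single string
    return [x] * n             # list of n copies

assert dup("ab", 3) == "ababab"
assert dup(7, 3) == [7, 7, 7]
# There is no way to get a list of n strings through this interface:
assert dup("ab", 3) != ["ab", "ab", "ab"]
```

The two behaviors are individually useful, but dispatching between them on the element's type means one of them is unreachable for strings.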



A similar problem is inherent in flat. The documentation says: "To convert a list back to an array apply flat to the list". This breaks if the array is supposed to contain lists as some of its elements. Operations which do a particular thing for almost all values, *except* when the values are of a particular type, are just misdesigned.



"get-char: Gets a character from an address specified in int-address." - An address represented in 32 bits? So it will not work on 64-bit machines. What a pity. C used to almost unify pointers with integers about 30 years ago.



File handles represented as ints, and I/O operations using system calls directly, with no buffering? I knew this was C 30 years ago.



In general, I'm disgusted. There is some work in wrapping C library functions, but the core language design - number and list processing - is awful. The name newLisp is misleading: it's not a Lisp and it's not modern. It tries hard to have neither garbage collection nor pointers, which implies that pointers must be simulated using unique constants which denote globally accessible structures, as if it were Tcl or ancient Fortran. It throws away the progress made in programming language design over the last 30 years.

pjot

#1
Hi Qrczak,



It seems that you are arguing with your own preoccupations.



"Lisp, and in fact most modern languages, pass objects by reference and collect unreachable garbage."



In fact, only C pointers pass objects by reference, and only Java/C++ know a kind of garbage collection. Almost all interpreted languages pass 'objects' by value.



"get-char: Gets a character from an address specified in int-address." - An address represented in 32 bits? So it will not work on 64-bit machines. What a pity."



Why don't you read more carefully? If you compile newLisp on a 64-bit architecture, guess what happens?



"File handles represented as ints and I/O operations using system calls directly, with no buffering? I knew that this is C 30 years ago."



In fact, there is buffering; read the manual more carefully. I could go on, summing up your unreasonable statements and refuting them one by one - but it's too tiring.



"In general, I'm disgusted. There is some work in wrapping C library functions, but the core language design, number and list processing is awful."



NewLisp is the best interpreted language currently available. It's fast, elegant, multiplatform, stable, and has everything you need to program a quick solution.



If you do not want to use newLisp, just stay away.



Peter

newdep

#2
If the water is too salty, don't drink it!
-- (define? (Cornflakes))

HPW

#3
Hi Qrczak,



I do not know why the Lisp community has such a problem with newLISP.

Is it only the name?

But it is not named 'newAnsiCommonLisp'!

It is not targeted to compete with this standard in any way.

Is 'ARC' a good name for something new in Lisp?

Will it be usable? Will it be big? Will it be for mainstream use? Will it be free?



You say newLISP is a Lisp from 30 years ago.

Maybe it has more similarities with those Lisps (as a whole family),

but is that bad at all? Even those Lisps had nice features.



On the other side, compare newLISP to some of the other Lisps from Graham's site.

Xlisp, for example, is a dead language. AutoLISP helped to make AutoCAD

what it is today and has thousands of active developers worldwide.

As a language, AutoLISP isn't sophisticated at all and has far fewer

language features than newLISP. Maybe there are more AutoLISP users than Common Lisp users.



Size: newLISP is incredibly small. This is the field where no Common Lisp can compare.

An interpreter with a file size under 100 KB is really hard to beat.

It is the same with load time. You can embed it in any popular

development environment on each platform.

Easy embedding in MS Office included.



My motto: Use the right tool to get the job done!

(There are jobs for AnsiCommonLisp, but also a lot for newLISP)



Just my 2 cents. (Euro)
Hans-Peter

Qrczak

#4
Quote from: "pjot"In fact, only C-pointers pass objects by reference and only Java/C++ know a kind of garbage collecting. Almost all interpreted languages pass 'objects' by value.


Which scripting languages? Python, Ruby, and Icon pass objects by reference. Perl passes them by reference when calling functions and makes a copy when storing in a data structure (but it has explicit references). Only Tcl passes them by value (AFAIK), along with Unix shells, which have no type other than strings (and sometimes arrays).
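
The claim about Python is easy to check directly - this is the Python counterpart of the Icon example given later in the thread:

```python
# Python passes object references: a mutation made inside the callee
# is visible to the caller, and no copy of the list is taken.
def change(l):
    l[0] = 10

l = [1, 2, 3]
change(l)
# l is now [10, 2, 3]: the caller sees the mutation
```

A strictly value-passing language would print the original element here, because `change` would receive an independent copy.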



Besides scripting languages, all functional languages (e.g. SML, OCaml, Haskell, Clean, Erlang), all Lisp and Scheme dialects including Dylan, and also Prolog, Smalltalk, Self, and Java pass objects by reference. For atomic immutable values it doesn't matter how they are passed - the behavior under mutation and aliasing is the same - so Java counts too. These languages don't make deep copies of structured data; they let you refer to part of a graph of objects and mutate it so that the mutation is visible in the original. They all let you actually create a graph of objects, which newLisp doesn't (except by emulating pointers with global variables with unique names, as in Tcl).



If newLisp wants to promote functional programming which doesn't use mutation in place, then fine, but cons and rest would have to be O(1).



Some languages pass some objects by reference, and others (only small objects with bounded size) by value. This includes C, Pascal, Eiffel and C#.



C++ and Tcl are the only languages I know which make implicit copies of whole nested structures. And even in C++ you can use pointers to avoid this.



Regarding GC, almost all languages, especially those designed recently, free objects automatically. Only C, Pascal, C++ and some ancient languages don't have native GC (sometimes it's painfully implemented as a library for C and C++). And some limited languages with no data structures at all, like Unix shells.



Languages with no GC and no pointers are not fun.


Quote from: "pjot""get-char: Gets a character from an address specified in int-address." - An address represented in 32 bits? So it will not work on 64-bit machines. What a pity."



Why don't you read better. If you compile newLisp on a 64-bit architecture, guess what happens?


The pointer is cast to UINT, which is a typedef for unsigned int, which is half the size of a pointer on a 64-bit platform.


Quote from: "HPW"I do not know why the Lisp community has such a problem with newLISP?


It's just very badly designed, and the name suggests that the author should have known better.



(I'm not a member of Lisp community.)

pjot

#5
Icon passes by value. Furthermore, KSH, BASH, Logo, Perl, almost all BASIC dialects, PHP, Prolog (yes, Prolog - I used to program GNU Prolog myself), Rebol, ZSH and so on - even CLisp. Some languages need a special command to pass by reference; the default is pass by value.



UINT is defined differently on different platforms, depending on your C-compiler.

mgandhi

#6
"First they ignore you, then they ridicule you, then they fight you, then you win."

Qrczak

#7
Quote from: "pjot"Icon passes by value.

Not true:



procedure change(l)
   l[1] := 10
end

procedure main(args)
   local l
   l := [1,2,3]
   change(l)
   write(l[1])
end

Result: 10


Quote from: "pjot"Furthermore, KSH, BASH, (...) ZSH

I said that Unix shells are exceptions. They support no data structures except strings (and sometimes arrays). They aren't suitable for large programs because of this.


Quote from: "pjot"Logo,

I programmed in Logo 15 years ago. I don't remember whether it supported mutable data at all, but BUTFIRST definitely has O(1) time complexity - it's a descendant of Lisp, after all.


Quote from: "pjot"Perl,

Not true:



sub change($) {++$_[0]}
my $x = 5;
change($x);
print "$x\n";

Result: 6



...and similarly when references are used; they are always used with nested data structures.


Quote from: "pjot"almost all BASIC dialects,

I haven't used a BASIC which had procedures, where this question would be meaningful, but I'm sure that Visual Basic has object references and doesn't make implicit copies of arrays.


Quote from: "pjot"PHP,

Not true. I don't have PHP installed here, but I'm sure that you can translate the above Icon sample and obtain the same result.


Quote from: "pjot"Prolog

Not true. When a predicate unifies a variable with some value, the effect is visible outside the predicate. Unification is the only side effect.


Quote from: "pjot"Rebol,

Not true:



>> test: func [x] [x/foo: 4]
>> obj: make object! [foo: 2]
>> obj/foo
== 2
>> test obj
== 4
>> obj/foo
== 4


Quote from: "pjot"even CLisp.

Of course not, because it's a Common Lisp implementation:



[1]> (defun test (x) (rplaca x 44))
TEST
[2]> (defvar obj (list 2))
OBJ
[3]> (test obj)
(44)
[4]> obj
(44)


Quote from: "pjot"UINT is defined differently on different platforms, depending on your C-compiler.

I can see "#define UINT unsigned int" in newlisp.h, not surrounded by any #ifdefs.

Lutz

#8
(1) 'new' in newLISP

It means doing things in a 'new', different way. newLISP doesn't 'ignore history'; it just tries to do things differently. In some aspects it is different - scoping rules, variable passing conventions, etc. - and in other aspects it is not: s-expressions, anonymous functions, lists, etc.



(2) Being 'different'

Because newLISP is new and different, you program LISP in it in a new and different way. Don't criticize newLISP because it is not Common Lisp or Scheme.



Don't try to program Common Lisp or Scheme in newLISP. newLISP by design is not Common Lisp or Scheme. I do not imply that Common Lisp and Scheme are bad because they are different from newLISP, they are just different.



(3) Dynamic scoping

Is not a 'bad idea' per se; it is just a different way of doing variable scoping. Millions of Perl programmers feel just fine using dynamic scoping, and so do newLISP users. If you are aware of how dynamic scoping works and behaves, you program 'with' it, not 'against' it. Those who have done serious work with newLISP have never complained about it. Where dynamic scoping could get dangerous, contexts in newLISP can separate namespaces to develop bigger programs consisting of independently developed modules.



(4) Integer and Float Arithmetic

You can redefine the integer operators (see manual) and then you get just the same number treatment you have in other scripting languages.



newLISP includes raw integer arithmetic because it has been used for controlling and diagnosing hardware; in that case you want your arithmetic to behave like a hardware register would, rolling over to negative numbers when adding/incrementing, etc. newLISP adds low-level integer arithmetic as a feature; if you don't like it, turn it off and use +,-,*,/ like in other scripting languages. Then all functions and operators are polymorphic when it comes to integers and floats.



I know that Common Lisp and Scheme handle arbitrary-precision numbers. newLISP is geared toward programming practical applications. newLISP has not been designed as a tool for mathematicians, although some think it is well usable for that if you know how to use it and import certain libraries (i.e. bignum). Most on this board have never used bignums in their life.



(5) Value passing

It is true that most programming languages pass arrays and lists by reference, not by value as in newLISP. newLISP has the philosophy that only symbols should be used for referencing. Strict value passing makes newLISP fast and small, and it uses less memory overall than any other scripting language (except OCaml). Go to the 'Scorecard' tab on http://shootout.alioth.debian.org/ and change the 'Memory Score Multiplier' to '1' and recalculate; you will see newLISP coming out on top of all other scripting languages (except OCaml).



Perhaps the hardest part in developing a programming language is memory management. Speed and memory benchmarks show that newLISP's memory management works faster and saves resources compared to most other interactive programming languages.



(6) 'Consistency'

Consistency can be a good thing, but not always. For instance, arrays do bounds checking and lists do not. This was a conscious decision. Mostly when using arrays you deal with fixed-size arrays and want the benefit of bounds checking. You work with lists because it is easy to change their size and shape. Many list accessing/modifying functions in newLISP can take multiple indices and work on complex nested lists. Many times the consistency goal sacrifices usability. It is typical in Western thinking to over-emphasize consistency or orthogonality. Some Japanese assembly languages use a completely different mnemonic for every variant of an addressing mode; Western assemblers will typically use only one mnemonic with different modifiers for the addressing modes.



Lutz

pjot

#9
Well, I am tired of this "yes/no" game; this will be my last posting regarding this issue.



Concerning Perl. Reading the PERL man-page (man perlsub):



"The Perl model for function call and return values is simple: all functions are passed as parameters one single flat list of scalars, and all functions likewise return to their caller one single flat list of scalars.  Any arrays or hashes in these call and return lists will collapse, losing their identities--but you may always use pass-by-reference instead to avoid this."





So by default, Perl passes by value.



=========



Concerning Icon, the Unicon manual page 19:



"Procedure parameters are passed by value except for structured data types, which are passed by reference. This means that when you pass in a string, or cset value, the procedure gets a copy of that value; any changes the procedure makes to its copy will not be reflected in the calling procedure."



Passing by value.



============



Read the manuals to convince yourself. I don't want to quote all the manuals for all the languages. I did not say it is IMPOSSIBLE to pass by reference; I said that passing by value is the default for most interpreted languages, including Icon and Perl.

Qrczak

#10
I'm not complaining that the behavior is just different than in Scheme or Common Lisp. I'm complaining that it repeats past mistakes of old Lisps and old Perls which were later fixed, and that it's simply not useful in several cases.


Quote from: "Lutz"(3) Dynamic scoping

Is not a 'bad idea' per se, it is just a different way of doing variable scoping. Millions of Perl programmers feel just fine using dynamic scoping, so do newLISP users.


Dynamic scoping went out of fashion in Perl too, except in rare cases. It's not bad for it to be available; it is a bad default. Here is what 'man perlsub' says:



       WARNING: In general, you should be using "my" instead of
       "local", because it's faster and safer.  Exceptions to
       this include the global punctuation variables, global
       filehandles and formats, and direct manipulation of the
       Perl symbol table itself.  "local" is mostly used when the
       current value of a variable must be visible to called
       subroutines.



"local" is dynamic scoping, "my" is lexical scoping; "my" has been available for 10 years.



Later, in the same manual:



       Passing Symbol Table Entries (typeglobs)

       WARNING: The mechanism described in this section was
       originally the only way to simulate pass-by-reference in
       older versions of Perl.  While it still works fine in
       modern versions, the new reference mechanism is generally
       easier to work with.  See below.

       Sometimes you don't want to pass the value of an array to
       a subroutine but rather the name of it, so that the
       subroutine can modify the global copy of it rather than
       working with a local copy.  In perl you can refer to all
       objects of a particular name by prefixing the name with a
       star: *foo.  This is often known as a "typeglob", because
       the star on the front can be thought of as a wildcard
       match for all the funny prefix characters on variables and
       subroutines and such.



See how it resembles newLisp's implementation of hash tables and OO-style objects? But in Perl it has been obsolete for 10 years.

newdep

#11
That "Qrczak" guy simply does not know where he can dump his frustrations... Hahaha...



He keeps posting about newlisp; what better advertisement can you get ;-)... Hahahaha...



http://www.codecomments.com/archive274-2005-10-658487.html
-- (define? (Cornflakes))

cormullion

#12
I wonder whether there would be as much confusion and argument if newLISP had originally been called something else? Perhaps the mixing of 'new' and 'Lisp' is being interpreted as a hubristic claim that newLISP is better than old LISP...



Perhaps if it had been called "miniLISP" or "LLAMA" ("Lisp-Like Automation Macro Accessory"?) or "Thcript" or "Parens" or "SPIL" ("Small Programming in LISP") or "UncommonLisp" or "Listp" or "Lispette" or anything, really, even "Dragonfly" (that nice parenthetical logo...), there would be fewer arguments about its nature and purpose.

Lutz

#13
I know some people are offended by the name of this language, and it was not my intent to offend anyone or to take anything away from the important efforts of the people involved in Common Lisp and Scheme.



On the right side of the newLISP logo on http://newlisp.org you find a link "the new LISP" to http://newlisp.org/index.cgi?page=Differences_to_Other_LISPs . This page explains some of the differences between traditional LISPs and newLISP:



Times change; the way programs are written has changed, and the way we judge programming languages is changing too. Perl, Python and JavaScript started this 'new' thinking, and newLISP translates it into LISP.



Lutz

statik

#14
Proof that people will find anything and everything to complain about.
-statik