Messages - Qrczak

#1
newLISP in the real world /
November 02, 2004, 02:48:19 AM
Quote from: "Lutz"'let*' and 'letrec' are the same thing, they are just differently named in different LISPs.

Scheme has both let* and letrec, and they are not the same thing.



let* allows each expression to refer to variables defined above it. It guarantees that expressions will be evaluated in order. Common Lisp has the same let*.



letrec allows each expression to refer to any variable in the group, but all references must be made inside function closures - it must be possible to evaluate all expressions to values before binding any variable. Expressions may be evaluated in arbitrary order.
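
For illustration, the standard Scheme contrast (a minimal sketch; any R5RS system shows the same results):

(let* ((x 2)
       (y (* x 3)))   ; y may refer to x, which is bound just above
  (+ x y))            ; => 8

(letrec ((even? (lambda (n) (if (zero? n) #t (odd? (- n 1)))))
         (odd?  (lambda (n) (if (zero? n) #f (even? (- n 1))))))
  (even? 10))         ; => #t: the two closures refer to each other

Swapping the two forms breaks both examples: let* cannot express the mutual recursion, and letrec gives no guarantee that x is evaluated before y.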
#2
newLISP newS /
October 21, 2004, 12:00:23 PM
Quote from: "newdep"I believe you are using telepathy to transport your words to your keyboard?



An expression like "and uses ideas abandoned years ago. "

is not very smart...

So, is it possible to implement the List Processing shootout entry correctly and efficiently in newLisp, so that the comparison can be fair? I'm not going to let this rest.
#3
newLISP newS /
October 21, 2004, 03:13:24 AM
Actually, my criticism started because of the name "newLisp". If it were some random language, I would not have noticed, but this one paints itself as a successor of Lisp, while in my eyes it does not qualify as a Lisp at all, and uses ideas abandoned years ago.



Anyway, the problem with the unfair benchmark is objective. I'm assuming it was a mistake. Could somebody please fix the list processing entry to actually conform to the spec, and add/remove elements at the right end too where appropriate? The list doesn't have to be represented by a builtin list type (and in fact several languages use a custom implementation, e.g. because their primary list type is immutable). I'm curious whether my supposition about newLisp's inability to solve this in O(N) time is true.
#4
newLISP newS /
October 19, 2004, 04:06:59 PM
Quote from: "Lutz"Who cares about 'cons' cells if you can solve programming problems without it. One concept less to learn, when learning the language.



Who cares about 'garbage collection' if you can do automatic memory management in a different, more efficient way.

What is the largest application written in newLisp? A wiki with 1333 lines of code?



Garbage collection doesn't matter if you can keep the whole program state in a few dozen global variables. Data structures don't matter if the program only works with numbers, strings, and lists of strings. Lack of object references doesn't matter if all the data the program operates on is that simple.



Does it really use only global variables?! Argh! Yes, for very small programs that's often sufficient, but this style doesn't scale. Perl used to be like this 10 years ago, and that fortunately changed with Perl 5.



But programming doesn't consist only of toy problems which can be solved in a thousand lines.


Quote from: "Lutz"For (2) goto http://newlisp.org/benchmarks/">http://newlisp.org/benchmarks/ for speed comparisons or go to http://shootout.alioth.debian.org/">http://shootout.alioth.debian.org/ and select the 'scorecard' tab then recalculate setting the "Memory scrore multiplier" to 1. You will see that newLISP comes out on top before most other languages, including some compiled ones.

Well, if I don't set the memory multiplier to 1, newLisp comes out slower than Java, Python, OCaml bytecode and Perl. And it uses little memory because it provides few constructs to the programmer, which doesn't matter for toy benchmarks.



For "List Processing" newLisp cheats, it does not implement the spec: it always removes and prepends the element at the beginning instead of at the end where the spec requires. Look at the Bigloo implementation to see what was intended.



(Never mind that I don't like most Shootout benchmarks at all. They are too small; they too often require "the same way" instead of "the same thing", which yields unnatural code for many languages.)



And I think it's impossible to implement it in newLisp with O(N) complexity. It will need O(N^2) for all non-trivial recursive list processing functions - except those which only change a global list at its beginning, without ever passing it as an argument, and those which are predefined; luckily only such functions were required in the wrong implementation of this task.
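
To make the cost model concrete, here is a minimal Scheme sketch (nothing newLisp-specific is assumed):

(define (count xs)
  (if (null? xs)
      0
      (+ 1 (count (cdr xs)))))  ; cdr returns the existing tail in O(1)

With shared cons cells this walk does O(N) work in total. If every call copied its list argument instead, as pass-by-value implies, call number k would copy about N-k cells, and the whole walk would cost O(N^2).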



How well would it perform if it implemented the shootout honestly? Could I see at least the "List Processing" benchmark fixed?



Other "benchmarks", even if implemented honestly, often don't tell the whole truth either. For example matrix multiplication. It would seem that newLisp is well suited for matrix processing. Never mind that you can't add two matrices in O(N^2), because unlike matrix multiplication it's not a builtin, and implementing it in newLisp would probably need O(N^3) time. Never mind that the whole matrix is physically copied when passing as a function argument - but a toy benchmark operates on a single global matrix and will not reflect that.
#5
newLISP newS /
October 19, 2004, 02:55:41 PM
I'm indeed more a theoretician than a practitioner, although I aim at creating a practical product this time: my language Kogut. It is 10 times younger than newLisp, but it already works well and hosts its own compiler.



Regarding newLisp, I'm afraid it is (from my point of view) broken beyond hope of repair. It could be improved in minor ways, e.g. by distinguishing foreign objects from integers and allowing finalizers to be attached to them, but it's too little, too late.



Perhaps "a sane language" is incompatible with "tiny runtime". newLisp is a bit like a syntactic sugar over C: it inherits C problems of lack of safety against segfaults, poor type system which does't generally distinguish values with similar representation, and providing only features which easily map into hardware. It lacks garbage collection, function closures, exceptions, threads, continuations, generic functions, bignums (or even checked overflow on fixnums), rationals, sensible hash tables (contexts with explicitly managed lifetime and only strings as keys don't count), sensible objects with fields (not bound to global variables).



The main criterion for having or not having a feature seems to be "is it trivial to implement in C?". Many functions are thin wrappers around C library or system calls, inheriting even their style of error handling (remember to check for nil as the result, then obtain a cryptic error code for the last failed operation with a separate function call). A "tiny runtime" is good in general, but here it comes at the cost of everything else.



Syntax from Lisp, object model from Tcl, error handling from Perl, safety from C - great combination...
#6
newLISP newS /
October 19, 2004, 01:37:10 PM
Quote from: "newdep"As I say......"A programming language is a mirror of the creator"...



The source code is included...bring me the ultimate or bring me Kogut!

I'm not sure if I understand you. Anyway, I'm afraid this forum is not appropriate for talking about other languages. I am happy to discuss Kogut, but it's off-topic here.



I only wanted to disagree with the claim that 32 bits should be enough for everybody, even though I agree that they are enough in 99% of cases. But the programmer should not have to stand on his head and interface to low-level C libraries in order to obtain correct results from basic arithmetic on large integers.



BTW, interfacing to GMP from newLisp would be very hard, unless I misunderstood how interfacing to C is generally done in newLisp. Besides the memory management issue, and besides the problem that newLisp wants to represent every foreign object as an amorphous integer corresponding to its physical address in memory: many GMP functions are in fact inline functions or macros, so you can't dlsym() them, and their internal names changed once during GMP development (the official names are actually #defined aliases).



It's also suboptimal to perform all computation on bignums. It's better to switch to bignums automatically when the result no longer fits in a machine word; in other words, to choose the representation (fixnum or bignum) dynamically, not statically. And you can't even implement custom operators which do this without wrapping bignums somehow, because they are indistinguishable from seemingly unrelated small integers (their addresses)!
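
This is what virtually every Scheme and Common Lisp does out of the box; for example:

(+ 2147483647 1)   ; => 2147483648, promoted to a bignum, not wrapped around
(expt 2 100)       ; => 1267650600228229401496703205376

Small results stay fixnums with machine-word speed; only results that outgrow the word pay the bignum cost.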



Bignums are just one small issue among many. The whole object representation - that everything is copied by value, and that it's the object itself which contains the link to the next element of the (only) list it is on - is much more serious, and so deeply hardwired in newLisp's paradigm that it's unlikely to change. Tcl is the only other language I know which forbids sharing of objects. For me it's a braindead idea: it leads to emulating object references with opaque identifiers that are implicitly associated with globally visible, explicitly managed data (names of global variables in Tcl; contexts or physical addresses of C objects in newLisp). This cancels the claim of automatic memory management.



It should not be called newLisp if it does not have cons cells. It's not a Lisp, it only resembles it syntactically. And I consider syntax the worst aspect of Lisp; yes, I know about macros.



BTW, the 10GB file was hypothetical, but I really have a 21GB file here: a backup of my home directory, kept on another drive (as .tar.bz2). And I'm currently downloading an MPEG of 2,114,682,880 bytes (50 hours with my Internet connection); this fits in 31 bits, but is very close to overflow.



BTW^2, an unrelated bug: "newlisp -e" (without further parameter) segfaults.
#7
newLISP newS /
October 19, 2004, 11:34:24 AM
Quote from: "Lutz"If you need bignum handling so much, why don't you create a 'C' library for newLISP with bignum.lsp interface file to import it and use it.

Such bignums would not be returned by file-info, nor accepted by print, <, +, etc. These functions are not extensible for user types; they have hardwired knowledge of the builtin types and that's all.



Actually I don't see how it can work at all, because copyCell() can't be extended either. Do you mean that the programmer would have to free bignums explicitly?! If yes, then "automatic memory management" from the home page is misleading.



Anyway, I will not write it, because I don't use newLisp - there are too many silly design decisions (like the lack of garbage collection, implicit copying of large objects by value, dynamic scoping - contexts are no substitute for lexical scoping because they have to be managed explicitly - lack of user-defined types, and the silly semantics of various functions).


Quote from: "Lutz"There is also the GMP http://www.swox.com/gmp/">http://www.swox.com/gmp/

Yeah, it's good. I used it in the implementation of my language Kogut.
#8
newLISP newS /
October 18, 2004, 01:49:46 PM
Quote from: "nigelbrown"Automatically throwing a system into bignum mode seems to be giving a false sense of security to those who haven't thought about the numerical side of their programming.

Oh, really? I have a 10GB file. newLisp's file-info function returns nil for it. Could this be fixed? Python, for example, has no problem delivering the correct result.



So you are saying that letting it "just work" in the language would be bad, and it's the fault of the script writer who hasn't thought that files may be that big. Ok, let's assume that he is aware of that. What can he do?
#9
newLISP newS /
October 01, 2004, 12:32:31 PM
I'm not complaining that the behavior is just different than in Scheme or Common Lisp. I'm complaining that it repeats past mistakes of old Lisps and old Perls which were later fixed, and that it's simply not useful in several cases.


Quote from: "Lutz"(3) Dynamic scoping

Is not a 'bad idea' per se, it is just a different way of doing variable scoping. Millions of Perl programmers feel just fine using dynamic scoping, so do newLISP users.


Dynamic scoping went out of fashion in Perl too, except in rare cases. Having it available is not bad; having it as the default is. Here is what 'man perlsub' says:



       WARNING: In general, you should be using "my" instead of
       "local", because it's faster and safer.  Exceptions to
       this include the global punctuation variables, global
       filehandles and formats, and direct manipulation of the
       Perl symbol table itself.  "local" is mostly used when the
       current value of a variable must be visible to called
       subroutines.



"local" is dynamic scoping, "my" is lexical scoping; "my" is available for 10 years.



Later, in the same manual:



       Passing Symbol Table Entries (typeglobs)

       WARNING: The mechanism described in this section was
       originally the only way to simulate pass-by-reference in
       older versions of Perl.  While it still works fine in
       modern versions, the new reference mechanism is generally
       easier to work with.  See below.

       Sometimes you don't want to pass the value of an array to
       a subroutine but rather the name of it, so that the
       subroutine can modify the global copy of it rather than
       working with a local copy.  In perl you can refer to all
       objects of a particular name by prefixing the name with a
       star: *foo.  This is often known as a "typeglob", because
       the star on the front can be thought of as a wildcard
       match for all the funny prefix characters on variables and
       subroutines and such.



See how it resembles newLisp's implementation of hash tables and OO-style objects? But in Perl it has been obsolete for 10 years.
#10
newLISP newS /
October 01, 2004, 08:23:55 AM
Quote from: "pjot"Icon passes by value.

Not true:



procedure change(l)
   l[1] := 10
end

procedure main(args)
   local l
   l := [1,2,3]
   change(l)
   write(l[1])
end

Result: 10


Quote from: "pjot"Furthermore, KSH, BASH, (...) ZSH

I said that Unix shells are exceptions. They support no data structures except strings (and sometimes arrays). They aren't suitable for large programs because of this.


Quote from: "pjot"Logo,

I programmed in Logo 15 years ago. I don't remember whether it supported mutable data at all, but BUTFIRST definitely has time complexity of O(1) - it's a descendant of Lisp after all.


Quote from: "pjot"Perl,

Not true:



sub change($) {++$_[0]}
my $x = 5;
change($x);
print "$x\n";

Result: 6



...and similarly when references are used; they are always used with nested data structures.


Quote from: "pjot"almost all BASIC dialects,

I haven't used a BASIC which had procedures, where this question would be meaningful, but I'm sure that Visual Basic has object references and doesn't make implicit copies of arrays.


Quote from: "pjot"PHP,

Not true. I don't have PHP installed here, but I'm sure that you can translate the above Icon sample and obtain the same result.


Quote from: "pjot"Prolog

Not true. When a predicate unifies a variable with some value, the effect is visible outside the predicate. Unification is the only side effect.


Quote from: "pjot"Rebol,

Not true:



>> test: func [x] [x/foo: 4]
>> obj: make object! [foo: 2]
>> obj/foo
== 2
>> test obj
== 4
>> obj/foo
== 4


Quote from: "pjot"even CLisp.

Of course not, because it's a Common Lisp implementation:



[1]> (defun test (x) (rplaca x 44))
TEST
[2]> (defvar obj (list 2))
OBJ
[3]> (test obj)
(44)
[4]> obj
(44)


Quote from: "pjot"UINT is defined differently on different platforms, depending on your C-compiler.

I can see "#define UINT unsigned int" in newlisp.h, not surrounded by any #ifdefs.
#11
newLISP newS /
October 01, 2004, 06:59:16 AM
Quote from: "pjot"In fact, only C-pointers pass objects by reference and only Java/C++ know a kind of garbage collecting. Almost all interpreted languages pass 'objects' by value.


Which scripting languages? Python, Ruby and Icon pass objects by reference. Perl passes them by reference when calling functions and makes a copy when storing them in a data structure (but it has explicit references). Only Tcl passes them by value (AFAIK), along with Unix shells, which have no type other than strings (and sometimes arrays).



Besides scripting languages, all functional languages (e.g. SML, OCaml, Haskell, Clean, Erlang), all Lisp and Scheme dialects including Dylan, and also Prolog, Smalltalk, Self and Java pass objects by reference. (For atomic immutable values it doesn't matter how they are passed - the behavior under mutation and aliasing is the same - so Java counts too.) They don't make deep copies of structured data; they allow you to refer to part of a graph of objects and mutate it such that the mutation is visible in the original. They all allow you to actually create a graph of objects, which newLisp doesn't (except by emulating pointers with global variables with unique names, as in Tcl).



If newLisp wants to promote functional programming which doesn't use mutation in place, then fine, but cons and rest would have to be O(1).



Some languages pass some objects by reference, and others (only small objects with bounded size) by value. This includes C, Pascal, Eiffel and C#.



C++ and Tcl are the only languages I know which make implicit copies of whole nested structures. And even in C++ you can use pointers to avoid this.



Regarding GC, almost all languages, especially those designed recently, free objects automatically. Only C, Pascal, C++ and some ancient languages don't have native GC (sometimes it's painfully implemented as a library for C and C++). And some limited languages with no data structures at all, like Unix shells.



Languages with no GC and no pointers are not fun.


Quote from: "pjot""get-char: Gets a character from an address specified in int-address." - An address represented in 32 bits? So it will not work on 64-bit machines. What a pity."



Why don't you read better. If you compile newLisp on a 64-bit architecture, guess what happens?


The pointer is cast to UINT, which is a typedef for unsigned int, which is half the size of a pointer.


Quote from: "HPW"I do not know why the lisp-commintiy has such a problem with newLISP?


It's just very badly designed, and the name suggests that the author should have known better.



(I'm not a member of the Lisp community.)
#12
newLISP newS / This is ridiculous
October 01, 2004, 03:54:57 AM
The name "newLISP" would suggest that it builds on old Lisp experience, but in fact it ignores Lisp history and uses rules which have been tried and rejected as bad ideas, like dynamic scoping by default.



"Excepting symbols and built-in primitives, each object is referenced only once in the system and is always passed by value." - This implies that a recursive function which conses an element on each iteration to a growing list, or even descends a list by iterating 'rest', has O(N^2) complexity, or worse if elements themselves are big.



This also makes it impractical to create graphs of mutable objects: you can't refer to a part of one for further modification, because the part will be implicitly copied. IMHO this fact alone makes it not a Lisp at all. Every Lisp, and in fact most modern languages, passes objects by reference and collects unreachable garbage. In every Lisp you can look at the rest/cdr/tail of a list without copying it.
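
For example, in any Scheme (a minimal sketch of tail sharing):

(define xs (list 1 2 3))
(define tail (cdr xs))   ; O(1): tail IS the (2 3) cells inside xs
(eq? tail (cdr xs))      ; => #t - the very same cells, nothing copied
(set-car! tail 99)
xs                       ; => (1 99 3): the mutation is visible via xs

newLisp cannot express this: the second line would copy the tail, and the mutation would be lost.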



"Many of newLISP's built-in functions are polymorphic in type, accepting a variety - and sometimes all - data types and thereby greatly reducing the total number of different functions and syntactic forms necessary to be learned and implemented." - Except arithmetic, where it would make the most sense. Not just the defaults are limited, but there are no arithmetic operators which do the right thing (return an exact answer when possible).



No wonder correctly behaving arithmetic is not provided: having only fixed-range integers, without even a check for overflow, plus floats, is pathetic. This set of types can't exactly represent 1/10, nor the size of my hard drive in bytes. Is it Lisp or C?
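
Here is what "an exact answer when possible" means, sketched in Scheme:

(/ 1 10)                ; => 1/10, an exact rational - not 0, and not 0.1000000000000000055...
(+ 1/10 2/10)           ; => 3/10, with no rounding error
(* 200 1024 1024 1024)  ; => 214748364800, a 200 GB drive in bytes, exact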



A cons which does a completely different thing depending on whether the second argument is a list is stupid. If there are improper lists, then fine - make it an error if the second argument is not a list. But making a two-element list *except* when the second argument is a list makes absolutely no sense: if you rely on cons making a two-element list, you will be badly surprised when the second argument happens to be a list. There is no context where you really want either behavior; in all cases you statically know which one you want, so it makes no sense to put both in one function. I bet the two-element behavior of cons is not used in practice, because it's not robust.
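
Compare Scheme, where cons is uniform - it always builds exactly one new pair, whatever the second argument is:

(cons 1 '(2 3))   ; => (1 2 3)
(cons 1 2)        ; => (1 . 2), an improper list, still one pair

newLisp returns (1 2) for the second call, so the shape of the result depends on the type of the argument.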



A hash table is associated with a global variable. What?! I can't make a list of hash tables without inventing unique names for them? This applies to objects as well. Is it Lisp or Tcl?



The implementation of hash tables is interesting:

> (load "modules/hash.lsp")
true
> (HASH:make 'h)
h
> (h:put "(" 'open)
open
> (h:get ")")
open

...Huh?!



"Unless otherwise specified for a function, an index greater than N-1 returns the last element, and an index less than -N returns the first when in lists, but out of bounds indices in arrays will cause an error message." - long live the consistency. I see no purpose in returning the last element of a list when the index is too large - it will just mask programming errors.



"constant: Works exactly like set but protects the symbol from subsequent modification. A symbol set with constant can only be modified again using constant." - Funny, a constant which is not constant.



"If num is supplied, dec always returns the result as floating point even for integer input arguments." - Why?



"If the expression in exp evaluates to a string, it will be replicated int-n times in a string and returned. For any other type in exp a list of int-n of the evaluation of exp is returned." - How to make a list with N references to the given string (sorry, N copies of the string)? Again an inconsistent behavior which doesn't make sense - if you want to make a list using dup, ensure that the element is *not* a string, because you will get a different thing in this case.



A similar problem is inherent in flat. The documentation says: "To convert a list back to an array apply flat to the list". This breaks if the array is supposed to contain lists as some of its elements. Operations which do a particular thing for almost all values, *except* when the value has a particular type, are simply misdesigned.



"get-char: Gets a character from an address specified in int-address." - An address represented in 32 bits? So it will not work on 64-bit machines. What a pity. C used to almost unify pointers with integers about 30 years ago.



File handles represented as ints, and I/O operations using system calls directly, with no buffering? That was C 30 years ago.



In general, I'm disgusted. There is some work in the wrapping of C library functions, but the core language design - number and list processing - is awful. The name newLisp is misleading: it's not a Lisp and it's not modern. It tries hard to have neither garbage collection nor pointers, which implies that pointers must be simulated using unique constants denoting globally accessible structures, as if it were Tcl or ancient Fortran. It throws away the progress made in programming language design over the last 30 years.