newLISP Fan Club

Forum => newLISP newS => Topic started by: HPW on September 20, 2004, 06:52:40 AM

Title: newLISP now on Graham's list
Post by: HPW on September 20, 2004, 06:52:40 AM
newLISP now has its place on Paul Graham's list of Lisps:



http://www.paulgraham.com/lisps.html



(He finally read my mail from some weeks ago.)



;-)
Title:
Post by: HPW on September 20, 2004, 07:04:08 AM
And we got more noise here:



http://lemonodor.com/



and here:



http://lambda-the-ultimate.org/



http://lambda-the-ultimate.org/node/view/257#comment
Title:
Post by: Lutz on September 20, 2004, 07:44:34 AM
Traffic to newlisp.org was up almost 10 times after the lambda-the-ultimate.org posting. Most of its members are CL and Scheme users; some of them seem to be open to new ideas and to doing things in a different way in LISP.



Downloads on newlisp.org have exploded too, and I have also put links to SourceForge on newlisp.org to divert some of the download traffic. We have 6 new members here on the board in just one week. I guess that Ryon, who hosts this board, has also seen a substantial traffic increase.



Tomorrow newLISP 8.2.0 comes out, the official release to introduce threads (they first appeared in 8.1.6).



This week and in the coming weeks many colleges start their fall semesters, and that always brings new users too.



Lutz



ps: thanks for asking Paul Graham to include newLISP on his list. He is someone proposing changes in LISP, and newLISP is a LISP trying to do just that.
Title:
Post by: newdep on September 20, 2004, 11:31:44 PM
:-) Nice nice... ;-)
Title:
Post by: pjot on September 21, 2004, 08:39:05 AM
Well, I also saw the hits on my site increasing... ;-) They almost all came from the newLisp "libraries" page, sometimes more than 10 per day. I had already wondered what caused this, but now I know. It's a good thing that newLisp gets more publicity!
Title:
Post by: Lutz on September 21, 2004, 12:09:17 PM
Norman, even the sales of Cornflakes went up :)



Lutz
Title:
Post by: newdep on September 21, 2004, 12:16:50 PM
hahahaha...



well, over here they did for sure... even my snippets page has 10 more hits a day now... It boosts ;-)...
Title:
Post by: HPW on September 23, 2004, 06:35:31 AM
There are more rumors on comp.lang.lisp these days:



lin8080 (lin8080@freenet.de)

wrote:
Quote
Well, I love it.



I have known this nice softpile since version 2.0. It was my first Lisp experience, on a DOS 5.0 386/33 box. Around version 6.3 (the last version before Win98 got old) it also runs as a single *.exe with Wine under Linux. The file is only 463kb ... and can move the smile.bmp around the screen. :)



There is more to say, but test it. The code is Scheme-oriented and there are not very many specials, but it shows what 'portable' can be. You can look at nuvatec.com; maybe they still have the older versions online.



Worth learning it? Hmmm... when you are tired of all that high-level stuff you will have lots of fun with it. Psst, this is a secret hint: there is not much to learn, just copy and paste and modify from the docs and you can cross high mountains. The designers seem to know how to spell intuition.



Something else strange I know of is plisp, but I don't know whether it is supported.



stefan




Raffael Cavallaro (raffaelcavallaro@junk.mail.me.not.mac.com)

wrote:
Quote
> Well, I love it.



I find it amusing 'cause it's good for WTF! style yucks. I would never

trust it for serious work.



For example:



raffaelc$ newlisp

newLISP v.8.1.7 Copyright (c) 2004 Lutz Mueller. All rights reserved.



> (defun factorial (x) (if (< x 2) 1 (* x (factorial (- x 1)))))

(lambda (x)

 (if (< x 2)

  1

  (* x (factorial (- x 1)))))

> (factorial 5) 120

> (dotimes (n 50) (begin (print (factorial n)) (print "\n")))

1

1

2

6

24

120

720

5040

40320

362880

3628800

39916800

479001600

1932053504

1278945280

2004310016

2004189184

-288522240

-898433024

109641728

-2102132736

-1195114496

-522715136

862453760

-775946240

2076180480

-1853882368

1484783616

-1375731712

-1241513984

1409286144

738197504

-2147483648

-2147483648

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

"n"

>



Hmmm...

So I consulted the manual:



"Calculations which result in values bigger than 2,147,483,647 or

smaller than |2,147,483,648 wraps around from positive to negative or

negative to positive numbers."



I fail to see how this is a step forward for lisp. Though I think it is

appropriately named - sort of like Orwell's newspeak...


Some love it, and the others .....
Title:
Post by: Lutz on September 23, 2004, 09:15:26 AM
He could have just defined it using floating point arithmetic:





(define (factorial x) (if (< x 2) 1 (mul x (factorial (sub x 1)))))



and then it will work just fine.



If he doesn't like the possibility of integer arithmetic in a scripting language, he can do:



(constant '+ add)

etc
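For example, with the float-based definition above (printed values are approximate, and the extra 'constant' lines just spell out the "etc"):

(factorial 5)   ; => 120 (a float)
(factorial 20)  ; => about 2.43e+18, no 32-bit wrap-around

; rebinding the integer operators to their floating point counterparts
(constant '+ add)
(constant '- sub)
(constant '* mul)
(constant '/ div)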



Lutz



ps: can you forward that to him? Where did this appear?
Title:
Post by: HPW on September 23, 2004, 11:03:21 AM
>Where did this appear?



http://groups.google.com/groups?group=comp.lang.lisp



>ps: can you forward that to him? Where did this appear?



I have posted your answer to comp.lang.lisp. It needs some hours to appear there.
Title:
Post by: mrd on September 23, 2004, 12:53:31 PM
Quote from: "Lutz"He could have just defined it using floating point arithmetik:


Lutz, please be a little more sensible.  Do not abuse floating point arithmetic like this, it is causing us all to laugh at you.  Floating point arithmetic is for INEXACT CALCULATIONS ONLY.  Using floating point representation as a stand-in for real numbers is a classic newbie mistake.  Floating point has its uses, but none of them include representing arbitrary precision exact integers.   The correct solution is to implement proper bignum support just like every other Lisp.  While you're at it, do polymorphic arithmetic operations.  Having +, -, *, and / only work on machine-sized integers (and no rational type) is lame.





Wikipedia entry on Floating Point Arithmetic



What Every Computer Scientist Should Know About Floating Point Arithmetic
Title:
Post by: Excalibor on September 23, 2004, 03:15:36 PM
Quote from: "mrd"
(...) Floating point has its uses, but none of them include representing arbitrary precision exact integers.   The correct solution is to implement proper bignum support just like every other Lisp.  While you're at it, do polymorphic arithmetic operations.  Having +, -, *, and / only work on machine-sized integers (and no rational type) is lame.


While having the complete Scheme or Common Lisp numeric tower would be very nice, in practical terms many languages implement bignum arithmetic via libraries, which takes the complexity out of the core of the language/interpreter and moves it into user space, where it belongs...



I agree that differentiating between integers and floats in a dynamically typed language may be a premature optimization; this is not an interpreter designed for number crunching, thus having bignums as a library should be more than okay...



The same goes for your comments regarding +, -, etc... newLISP is different from other Lisps; constructive proposals to make it better than it is are welcomed by its community, and I'm sure by the creator as well, but there are ways of doing so, for example by offering code... even polite code would serve...



regards,

david
Title:
Post by: Lutz on September 23, 2004, 05:08:59 PM
I agree with mrd (thanks for the links, and welcome to the discussion board) that floating point is for INEXACT CALCULATIONS ONLY, which is one of the reasons newLISP still maintains integer arithmetic (although only 32 bits). I have looked into arbitrary-size integer arithmetic, but felt that it is too much overhead to implement in a small scripting language like newLISP.



Pretty much all scripting applications do just fine with 32-bit integers, and it is nice to stick to integer arithmetic in some application areas.



Moving newLISP to 64-bit integers when 64-bit computing becomes mainstream will be easy. That still would not be bignums, but it would be a step forward. (Note that the current 'linux64' compile flavor still produces a 32-bit newLISP, just running on a 64-bit CPU.)



People who write bignum applications perhaps shouldn't use a scripting language like newLISP. Those applications would probably involve heavy number-crunching, which is better done in a compiled language.



Perhaps a nice compromise would be to have a bignum library written in 'C' and import it into newLISP, which is more or less what David is suggesting. This library could then have other math stuff too, used by people dealing with bignums.



Lutz
Title:
Post by: HPW on September 23, 2004, 11:10:12 PM
The echo:



Mario S. Mommer (m_mommer@yahoo.com)

Wrote:
Quote
lisp.3.hpw@spamgourmet.com (HPW) writes:

> Reposted from newlisp forum.



Is this for real or are you making this up?



> He could have just defined it using floating point arithmetic:

>

>

> (define (factorial x) (if (< x 2) 1 (mul x (factorial (sub x 1)))))

>

> and then it will work just fine.

>

> If he doesn't like the possibility of integer arithmetic in a scripting language, he can do:

>

> (constant '+ add)

> etc

>

> Lutz



This is so absurd it is (almost) no longer funny.



Think bignum, man!


So let's stop Lisp language wars and go scripting.

;-)
Title:
Post by: Lutz on September 24, 2004, 05:43:59 AM
New users of newLISP coming from Common Lisp or other Lisps will naturally try to understand newLISP using concepts they have learned in Common Lisp, and they will try to define newLISP in Common Lisp terms.



I believe that newLISP's concepts are easy enough to be understood on its own terms.



Lutz
Title:
Post by: mrd on September 24, 2004, 01:00:17 PM
Quote from: "Lutz"
People who write bignum applications perhaps shouldn't use a scripting language like newLISP. Those applications would probably involve heavy number-crunching, which is better done in a compiled language.


The best CL for bignums is actually CLISP, surprisingly.  It is a byte-code interpreter, but it uses extremely fast algorithms.  CMUCL does better with FP, though.  But CLISP has arbitrary-precision floats, which is unusual (you have to toggle something to get them though).


Quote from: "Lutz"
Perhaps a nice compromise would be to have a bignum library written in 'C' and import it into newLISP, which is more or less what David is suggesting. This library could then have other math stuff too, used by people dealing with bignums.


Then it won't automatically switch between fixnums and bignums. That automatic switching is extremely convenient. Most other languages are moving towards this feature, not away from it.
Title:
Post by: Lutz on September 24, 2004, 03:13:26 PM
I will put bignums on my todo list, but not for the near future. The code-size impact would probably not be big, but I have to explore whether it is possible without hurting the speed of current functionality. I agree that a fully transparent implementation would be best, so integer arithmetic would switch to bignums whenever it has to.



Lutz
Title:
Post by: HPW on September 27, 2004, 11:32:42 PM
Still more laughing:


Quote
From: Raffael Cavallaro (raffaelcavallaro@junk.mail.me.not.mac.com)

Subject: Re: Strange LISP: [ http://www.newlisp.org ]

Newsgroups: comp.lang.lisp

Date: 2004-09-25 20:51:29 PST



In article <41532A5B.BA6A7707@freenet.de>, lin8080 <lin8080@freenet.de>

wrote:



> Didn't I type ... lots of fun?

>

> stefan



Well, to explain a joke is always to kill it (not that this one had much

life left in it anyway...)



You wrote, "Well, I love it," and "NewLisp is funny and givs me back a

special kind of lispy element." The two, in combination suggest that you

find it funny in the sense of liking and enjoying it.



I wrote "I find it amusing 'cause it's good for WTF! style yucks," which

suggests that I find it funny in the sense of laughing at it because it

is so grotesquely bad. Given the legions of high quality common lisp and

scheme implementations out there, not to mention Dylan and goo, newlisp

is a giant step backwards for the lisp family of languages. If I could,

I would euthanize newlisp for the good of programmers the world over.



Summary: You're laughing *with* newlisp. Most of us are laughing *at* it.



Quote
From: Matthew Danish (mrd+nospam@cmu.edu)

Subject: Re: Strange LISP: [ http://www.newlisp.org ]

Newsgroups: comp.lang.lisp

Date: 2004-09-27 14:30:06 PST



Mike <mikee@mikee.ath.cx> writes:

> I'm new to LISP. Please explain why newLISP is a farce? For what I'm

> considering it seems to do everything that I want.



A Turing machine can do everything you want, too.  That doesn't mean

you want to program that way.



``newLISP'' makes design decisions which Lisp programmers out-grew

long ago.  I'm not talking about availability of libraries here, I'm

talking about fundamental concepts in the language.  ``newLISP'' does

not have proper lexically scoped variables, it does not have

polymorphic arithmetic functions with a numeric tower, it has an

extremely primitive, and in some cases downright poor, treatment of

objects.  In short, ``newLISP'' is an anachronism; it belongs to the

1960s, before most people started to get a clue about language design.



Of course, one can point out flaws in any language; but what is

particularly ironic about ``newLISP'' is that despite its name, it

does not indicate that there was any knowledge of current Lisps in the

minds of the authors.  How could they create a truly ``new'' Lisp if

they don't know anything about existing Lisps?



--

;; Matthew Danish -- user: mrd domain: cmu.edu

;; OpenPGP public key: C24B6010 on keyring.debian.org


Take care that they do not euthanize us!

;-)
Title: No one kicks a dead dog
Post by: jsmall on September 28, 2004, 05:05:50 AM
If newLisp is such a dog, and dead (in the grave long ago), why kick it?



Several things the detractors miss about newLisp.



1. Its small footprint and convenience. Perhaps they didn't miss this at all. Perhaps it is admitting "common folk" to the exclusive Lisp country club that gets their goat, as if we are going to pollute the language space. The open market of ideas will ensure the driftwood is carried out to sea and forgotten.



2. It goes against the Ionian and Pythagorean rationalists' concept of the pure and beautiful: anything practical or empirical is pedestrian and to be rejected out of hand, including, heaven forbid, those nasty irrational numbers like the square root of 2.



3. Lutz is taking an entirely new approach to programming in Lisp, something I call VO (Value Oriented), in which he transforms "shapes" into new shapes. A program literally becomes a sequence of shape-transform layers. This is done with primitives like "ref" and "index", and I think the detractors totally miss this. This mindset is what produced the smaller libraries etc. This is what is "new" in newLisp. This is what gets their goat the most, i.e. calling it new, which in turn is calling them old. I remember my first encounter with Lisp 20 years ago. I discounted the language as antiquated, with all those parentheses and the lack of strong typing. After all, I was a sophisticated strong-type-checking Pascaler. That quick jump to judgment made me waste the best years of my programming life. It wasn't until about 5 years ago that I started my journey through the functional languages and realized that Lisp had something to offer. Maybe if these folks look past their disdain for dynamic scoping they can see what is new in newLisp and not have to wait until the revolution has passed them by. And as far as the comments about documentation go, well, I'd rather Lutz keep pushing the envelope than become too distracted with documentation perfection. Those bringing up the rear can worry about that.



Damn the torpedoes - full speed ahead!
Title:
Post by: Lutz on September 28, 2004, 12:59:26 PM
Thanks, John and Hans-Peter, for defending newLISP. Most people on this forum know what newLISP is about and are not worried about the kind of posts you sometimes see on comp.lang.lisp.



Users on this forum use newLISP to write programs that solve real-world problems. We are not interested in having a language where every concept known in computer science is implemented.



That does not mean that those concepts are not interesting, but they are not the primary focus of newLISP. From this point of view, the simplicity of newLISP is a virtue, not a disadvantage.



Many users on this forum have used standard Common Lisp or Scheme but have decided to use newLISP because they want to concentrate on the programming problem at hand and not on the theory of certain language features.



newLISP's simplified and streamlined LISP model makes the language easier to learn and understand, and makes code written in it easier to understand as well.



I recommend going to http://shootout.alioth.debian.org/ and comparing different LISPs and other languages: how they solve certain computing problems and how they use computing resources like CPU cycles and memory. A copy of the speed benchmarks for Perl, Python, Guile and newLISP is on the newlisp.org site at http://newlisp.org/benchmarks/ .



Lutz
Title:
Post by: BrickCaster on October 16, 2004, 05:36:32 PM
i think newLISP is good because it is an alternative.



* can you live without closures ?

* can you live without classes (and use prototypes instead) ?

* can you live without continuations ?



if the answer is "yes" then newLISP is a serious candidate.

otherwise one may be happier with Scheme.



anyway, i don't see why newLISP should be bashed.

someone who devotes time and effort should always be praised.
Title:
Post by: Lutz on October 17, 2004, 06:09:00 AM
* can you live without closures ?



A context in newLISP is a lexical closure
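For example, a rough sketch of a context holding its own state, used the way a closure would be (the context name 'Counter' is made up for illustration):

; the counter's state lives inside its own namespace
(context 'Counter)
(set 'count 0)
(define (next) (set 'count (+ count 1)))
(context MAIN)

(Counter:next)  ; => 1
(Counter:next)  ; => 2, the counter keeps its state between calls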



* can you live without classes (and use prototypes instead) ?



Absolutely, and so do many others in 'Python' or in research languages like 'Self'. In fact, newer languages seem to prefer the prototype-based approach and see it as the better solution for OO programming.



* can you live without continuations ?



newLISP has everything needed to program a type of continuation if you want to, i.e. 'catch' and 'throw'.
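A small sketch of 'catch' and 'throw' used as a non-local exit (function name and list are made up):

; return the first negative number, escaping the loop as soon as it is found
(define (first-negative lst)
  (catch
    (begin
      (dolist (x lst)
        (if (< x 0) (throw x)))
      nil)))

(first-negative '(3 7 -2 9))  ; => -2
(first-negative '(1 2 3))     ; => nil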



But those three questions are not really relevant; they are the wrong questions. The real question is:



Can newLISP be used to write your program or application or script? Most users on this board prove it can, and they produce shorter and faster solutions than with other tools.



Just let your mind come out of the box, fire up newLISP and do some creative work, solve real problems.



Lutz
Title:
Post by: Jeremy Dunn on October 17, 2004, 02:57:10 PM
I find the complaint about the factorial function a little ridiculous. For most normal situations it is probably more efficient to have the function look up a precalculated value from a table rather than brute-force it every time. How high do we need to go, factorial 200? Use Stirling's formula for an approximate value, or code your problem in Mathematica if you are trying to do some weird number-theory thing. Yes, it would be fun to have bignums integrated, but give Lutz some time; Perl didn't start out perfect either.
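A minimal sketch of that lookup-table idea in newLISP, limited here to the factorials that still fit a signed 32-bit integer (names are illustrative):

; build an association list ((0 1) (1 1) (2 2) ... (12 479001600)) once
(set 'fact-table '())
(set 'acc 1)
(dotimes (n 13)
  (if (> n 0) (set 'acc (* acc n)))
  (push (list n acc) fact-table))

; look the value up instead of recomputing it; nil outside the table
(define (factorial n)
  (if (assoc n fact-table) (last (assoc n fact-table))))

(factorial 5)   ; => 120
(factorial 12)  ; => 479001600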
Title:
Post by: nigelbrown on October 17, 2004, 05:03:19 PM
Automatically throwing a system into bignum mode seems to be giving a false sense of security to those who haven't thought about the numerical side of their programming. It can be of value in number theory etc. to have bignums, but a more thorough approach to planning your calculations is probably a better step most of the time.

Nigel
Title:
Post by: Qrczak on October 18, 2004, 01:49:46 PM
Quote from: "nigelbrown"Automatically throwing a system into bignum mode seems to be giving a false sense of security to those who haven't thought about the numerical side of their programming.

Oh, really? I have a file of size 10GB. newLisp's file-info function returns nil for it. Could this be fixed? Python, for example, has no problem delivering the correct result.



So you are saying that letting it "just work" in the language would be bad, and it's the fault of the script writer who hasn't thought that files may be that big. Ok, let's assume that he is aware of that. What can he do?
Title:
Post by: Lutz on October 18, 2004, 02:06:45 PM
A 10 Gbyte file, that is pretty uncommon. That file is probably a database and gets accessed with a database manager or other tools.



A 10 Gbyte file is an exotic requirement, not worth discussing until it becomes mainstream; by then we will have 64-bit computers on the desktop and a 64-bit newLISP, but that time simply hasn't come.



The requirement for newLISP to handle files bigger than 32-bit integer size has *never* come up or been asked for by *anybody*. You are the first one talking about this. Even the C libraries on current 32-bit computers treat these with special functions like fopen64() etc. Why would I burden a language with this kind of stuff that maybe one in 10,000 programmers will use?



Lutz
Title:
Post by: Lutz on October 18, 2004, 04:26:31 PM
If you need bignum handling so much, why don't you create a 'C' library for newLISP with a bignum.lsp interface file to import and use it? The same could be done for open64, seek64 etc.



This kind of approach is always good for features that are not needed as much, because they can be delivered in an additional module. There are examples in the manual and on the newlisp.org site of how to import and write 'C' libraries for newLISP.



There is also GMP, the GNU bignum library ( http://www.swox.com/gmp/ ), which seems to be very extensive; you could write a gmp.lsp interface for it.
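Purely as a sketch of the shape such a wrapper could take (the library name "libbig.so" and its functions are hypothetical, not GMP's real entry points, which are mostly macros):

; bignum.lsp - hypothetical glue around a small C helper library that
; works on decimal strings and returns newly allocated C strings
(import "libbig.so" "big_add")   ; char *big_add(char *a, char *b)
(import "libbig.so" "big_mul")   ; char *big_mul(char *a, char *b)

(define (badd a b) (get-string (big_add a b)))
(define (bmul a b) (get-string (big_mul a b)))

(bmul "123456789012345678901234567890" "3")
; => "370370367037037036703703703670"
; (real code would also have to free the buffers returned by the C side)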



Lutz
Title:
Post by: HPW on October 18, 2004, 10:19:46 PM
>There is also GMP, the GNU bignum library ( http://www.swox.com/gmp/ ), which seems to be very extensive; you could write a gmp.lsp interface for it.



I have not tested it, but ffidl comes with demos for GMP.



http://elf.org/ffidl/#demos
Title:
Post by: Qrczak on October 19, 2004, 11:34:24 AM
Quote from: "Lutz"If you need bignum handling so much, why don't you create a 'C' library for newLISP with bignum.lsp interface file to import it and use it.

It will not be returned by file-info, nor accepted by print, <, + etc. They are not extensible for user types; they have hardwired knowledge about builtin types and that's all.



Actually I don't see how it can work at all, because copyCell() can't be extended either. Do you mean that the programmer would have to free bignums explicitly?! If yes, then "automatic memory management" from the home page is misleading.



Anyway, I will not write it because I don't use newLisp, because of too many silly design decisions (like lack of garbage collection, implicit copying of large objects by value, dynamic scoping - contexts are no substitute for lexical scoping because they have to be managed explicitly - lack of user-defined types, and silly semantics of various functions).


Quote from: "Lutz"There is also the GMP http://www.swox.com/gmp/

Yeah, it's good. I used it in the implementation of my language Kogut.
Title:
Post by: newdep on October 19, 2004, 12:20:58 PM
As I say......"A programming language is a mirror of the creator"...



The source code is included...bring me the ultimate or bring me Kogut!
Title:
Post by: Qrczak on October 19, 2004, 01:37:10 PM
Quote from: "newdep"As I say......"A programming language is a mirror of the creator"...



The source code is included...bring me the ultimate or bring me Kogut!

I'm not sure if I understand you. Anyway, I'm afraid this forum is not appropriate for talking about other languages. I am happy to discuss Kogut, but it's off-topic here.



I only wanted to disagree with the claim that 32 bits should be enough for everybody, even though I agree that they are enough in 99% of cases. But the programmer should not have to stand on their head and interface to low-level C libraries in order to obtain correct results from basic arithmetic on large integers.



BTW, interfacing to GMP from newLisp would be very hard, unless I misunderstood how interfacing to C is generally done in newLisp. Besides the memory management issue, and besides the problem that newLisp wants to represent every foreign object as an amorphic integer corresponding to its physical address in memory: many GMP functions are in fact inline functions or macros, so you can't dlsym() them; and their internal names changed once during GMP development (the official names are actually #defined aliases).



It's also suboptimal to perform all computation on bignums. It's better to change to bignums automatically when the result no longer fits in a machine word; in other words, to choose the type (fixnum or bignum) dynamically, not statically. You can't even implement custom operators which do this without wrapping bignums somehow, because they are indistinguishable from seemingly unrelated small integers (their addresses)!



Bignums are just one small issue among many. The whole object representation thing, that it copies everything by value, and that it's the object which in itself contains a link to the next element of the (only) list it is on, is much more serious. And deeply hardwired in newLisp's paradigm, so unlikely to change. Tcl is the only other language I know which forbids sharing of objects. For me it's a braindead idea. It leads to emulating object references by opaque identifiers which are implicitly associated with globally visible data with explicit memory management (names of global variables in Tcl, contexts or physical addresses of C objects in newLisp). This cancels the claim of automatic memory management.



It should not be called newLisp if it does not have cons cells. It's not a Lisp, it only resembles it syntactically. And I consider syntax the worst aspect of Lisp; yes, I know about macros.



BTW, the 10GB file was hypothetical, but I really have a 21GB file here: a backup of my home directory, kept on another drive (as .tar.bz2). And I'm currently downloading an MPEG of 2,114,682,880 bytes (50 hours with my Internet connection); this fits in 31 bits, but is very close to overflow.



BTW^2, an unrelated bug: "newlisp -e" (without further parameter) segfaults.
Title:
Post by: newdep on October 19, 2004, 01:58:40 PM
Qrczak, you sound like someone who has know-how about the theory of languages, and perhaps practice too... still, I don't understand the "complaint" you "seem" to have about newlisp... OK, OK, nothing is perfect; perhaps the statements should be changed in the language overview?



Perhaps it is just that you would have liked to see newlisp with those features you are missing? I think Lutz could use some help on those...



Anyway, I don't like talking about programming languages much; I use them, and practice proves whether they are effective or not... Still, if I'm missing something I build it; if I can't, I ask for it; if it's impossible, I'll wait or switch...



Somehow I think you are more into talking about efficiency in a language, but if all languages were bullet-proof it would be very boring too, no?



I don't do much with bignums, I don't need them; but if I needed them I would switch to a different language, like I do from Spanish to French to German to Dutch to English when it fits...



Well, I'll take my finger off this topic; I dropped C 15 years ago already ;-)



Hope you can give newlisp some spirit... or perhaps try e.g. Rebol... :-)



Norman.
Title:
Post by: Lutz on October 19, 2004, 02:32:52 PM
Who cares about 'cons' cells if you can solve programming problems without them? One less concept to learn when learning the language.



Who cares about 'garbage collection' if you can do automatic memory management in a different, more efficient way?



A programming language is not validated by its underlying theoretical concepts but by:



(1) ease of use in expressing a programming solution to a problem



(2) execution speed and efficiency of memory usage



Number (1) is partly a question of personal preference for syntax and also depends on the API/function repertoire offered. The LISP syntax has a great aesthetic value to it, which some see and some don't. It is also easy to learn and easy to process by other programs.



For (2), go to http://newlisp.org/benchmarks/ for speed comparisons, or go to http://shootout.alioth.debian.org/, select the 'scorecard' tab, then recalculate setting the "Memory score multiplier" to 1. You will see that newLISP comes out ahead of most other languages, including some compiled ones.



Lutz
Title:
Post by: pjot on October 19, 2004, 02:45:26 PM
I agree with Lutz. I've been programming for 20 years now, and I've used many languages, from assembly to C++ to Prolog. After all these years I am happy to have found this powerful language newLisp, with which I can do virtually everything on any platform, in a fast and efficient way.



Peter
Title: I can work with s-expressions in newLISP - I'm happy
Post by: jsmall on October 19, 2004, 02:49:33 PM
And I can write (cons 1 '(2 3)) and get (1 2 3).

This looks okay to me.
Title:
Post by: Qrczak on October 19, 2004, 02:55:41 PM
I'm indeed more a theoretician than a practitioner, although I am aiming at creating a practical product this time. It is 10 times younger than newLisp, but it already works well and hosts its own compiler.



Regarding newLisp, I'm afraid it is (from my point of view) broken beyond hope of repair. It could be improved in minor ways, e.g. by distinguishing foreign objects from integers and allowing finalizers to be attached to them, but it's too little, too late.



Perhaps "a sane language" is incompatible with "tiny runtime". newLisp is a bit like a syntactic sugar over C: it inherits C problems of lack of safety against segfaults, poor type system which does't generally distinguish values with similar representation, and providing only features which easily map into hardware. It lacks garbage collection, function closures, exceptions, threads, continuations, generic functions, bignums (or even checked overflow on fixnums), rationals, sensible hash tables (contexts with explicitly managed lifetime and only strings as keys don't count), sensible objects with fields (not bound to global variables).



The main criterion for having or not having a feature seems to be "is it trivial to implement in C?". Many functions are thin wrappers around C library or system calls, inheriting even the style of error handling (remember to check for nil as the result, then obtain the cryptic error code of the last failed operation with a separate function call). "Tiny runtime" is good in general, but here it is at the cost of everything else.



Syntax from Lisp, object model from Tcl, error handling from Perl, safety from C - great combination...
Title:
Post by: pjot on October 19, 2004, 03:05:39 PM
Quote
Syntax from Lisp, object model from Tcl, error handling from Perl, safety from C - great combination...


It is a great combination, isn't it? :-)
Title:
Post by: Qrczak on October 19, 2004, 04:06:59 PM
Quote from: "Lutz"Who cares about 'cons' cells if you can solve programming problems without it. One concept less to learn, when learning the language.



Who cares about 'garbage collection' if you can to automatic memory management a different more efficient way.

What is the largest application written in newLisp? A wiki with 1333 lines of code?



Garbage collection doesn't matter if you can keep the whole program state in a few dozens of global variables. Data structures don't matter if the program only works with numbers, strings, and lists of strings. Lack of object references doesn't matter if all data the program operates on are so simple.



Does it really use only global variables?! Argh! Yes, for very small programs it's often sufficient. But this style doesn't scale. Perl used to be like this 10 years ago, and this fortunately changed with Perl5.



But programming problems don't consist only of toy problems which can be solved in a thousand lines.


Quote from: "Lutz"For (2) goto http://newlisp.org/benchmarks/ for speed comparisons or go to http://shootout.alioth.debian.org/ and select the 'scorecard' tab then recalculate setting the "Memory scrore multiplier" to 1. You will see that newLISP comes out on top before most other languages, including some compiled ones.

Well, if I don't set the memory multiplier to 1, newLisp comes out slower than Java, Python, OCaml bytecode and Perl. And it uses little memory because it provides few constructs to the programmer, which doesn't matter for toy benchmarks.



For "List Processing" newLisp cheats, it does not implement the spec: it always removes and prepends the element at the beginning instead of at the end where the spec requires. Look at the Bigloo implementation to see what was intended.



(Never mind that I don't like most Shootout benchmarks at all. They are too small; they too often require "the same way" instead of "the same thing", which yields unnatural code for many languages.)



And I think in newLisp it's impossible to implement it in O(N) complexity. It will need O(N^2) for all non-trivial recursive list processing functions. Except those which only change a global list at the beginning, without ever passing it as an argument, and those which are predefined; luckily only such functions were required in the wrong implementation of this task.



How well would it perform if it implemented the shootout honestly? Could I see just the "List Processing" benchmark fixed?



Other "benchmarks", even if implemented honestly, often don't tell the whole truth either. For example matrix multiplication. It would seem that newLisp is well suited for matrix processing. Never mind that you can't add two matrices in O(N^2), because unlike matrix multiplication it's not a builtin, and implementing it in newLisp would probably need O(N^3) time. Never mind that the whole matrix is physically copied when passing as a function argument - but a toy benchmark operates on a single global matrix and will not reflect that.
Title:
Post by: Lutz on October 20, 2004, 05:33:03 AM
Just what is your obsession? If you dislike newLISP so much, why are you so concerned about it, reading the source, counting lines in newlisp-wiki, looking for things to complain about? Is it envy?



Relax, loosen up your mind, write some LISP code; those little parentheses can make you a happier person :-), enjoy life ...



Lutz
Title:
Post by: Qrczak on October 21, 2004, 03:13:24 AM
Actually I started the criticism because of the name "newLisp". If it were some random language, I would not have noticed, but it tries to paint itself as a successor of Lisp, while in my eyes it does not qualify as a Lisp at all, and it uses ideas abandoned years ago.



Anyway, the problem with the unfair benchmark is objective. I'm assuming it was a mistake. Could somebody please fix the list processing entry to actually conform to the spec, and add/remove elements at the right end too where appropriate? The list doesn't have to be represented by a builtin list type (and in fact several languages use a custom implementation because e.g. their primary list type is immutable). I'm curious whether my supposition about newLisp's inability to solve this in O(N) time is true.
Title:
Post by: newdep on October 21, 2004, 06:11:31 AM
I believe you are using telepathy to transport your words to your keyboard?





An expression like "and uses ideas abandoned years ago. "

is not very smart...
Title:
Post by: Qrczak on October 21, 2004, 12:00:23 PM
Quote from: "newdep"I believe you are using telepathy to transport your words to your keyboard?



An expression like "and uses ideas abandoned years ago. "

is not very smart...

So, is it possible to implement the List Processing shootout entry correctly and efficiently in newLisp, so the comparison can be fair? I'm not going to leave it this way.
Title:
Post by: rickyboy on April 22, 2005, 01:33:38 PM
Lutz,



I completely understand your exasperation with the Qrczak postings. His motivations do not seem genuinely constructive, but more along the lines of having an axe to grind (or maybe an axe to swing in this case). A read of the Kogut FAQ (http://kokogut.sourceforge.net/faq.html) may reveal something about the psyche of our buddy. Here are a couple of quotes from the FAQ followed by my gratuitous comments ('cause I felt like it).


QuoteTo many programmers, including the Kogut author, sexprs are also aesthetically displeasing because they lead to very deep nesting of parentheses and many levels of indentation.

What a lame argument.  And unoriginal, to boot.


QuoteMarcin 'Qrczak' Kowalczyk is the creator of Kogut. So far, he's written all of the code by himself, which is impressive considering the shear volume of it.

He, he, he.  I love the self-aggrandizement here.   You better watch out, you newLISP folks -- you are dealing with an "impressive" genius here.   ;-)  



--Ricky
Title:
Post by: HPW on October 20, 2005, 05:42:36 AM
Ongoing discussions on comp.lang.lisp:



http://groups.google.de/group/comp.lang.lisp/browse_frm/thread/0b92742832c979d3/55c561c642d1f5da?hl=de#55c561c642d1f5da
Title:
Post by: eddier on October 20, 2005, 07:06:47 AM
I really admire Lutz and I think he is genuinely a nice guy, but Qrczak points out two particular problems with newlisp for my current work: (1) copying the whole data structure on each function invocation makes my recursive algorithms very memory-inefficient (most of the time worse than O(N^2)), since the structures I'm passing are usually more complicated than lists. When I run these recursive functions in mzscheme, for example, they run much faster (although I'm sure tail-call optimization contributes to the speedup as well). (2) Using the context reference mechanism for passing arguments doesn't solve the memory problem, because there is no way to free the unused memory.



It is not my intention to disrespect Lutz, however, as I still use newlisp for small one-shot programs.



Eddie
Title:
Post by: Lutz on October 20, 2005, 07:27:17 AM
I think this is just a question of programming style. There is always the possibility of recoding a recursive algorithm in a linear, iterative fashion. Traditional LISP and Scheme focus a lot on recursion, but it doesn't have to be this way. There are situations where recursion is the more elegant solution, but there are also situations where a recursive solution is inefficient or obscure, even when using reference passing of parameters.
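For example, the same trivial list-summing job written both ways, just to show the shape of such a rewrite:

; recursive version: each call receives (a copy of) the rest of the list
(define (sum-rec lst)
  (if (empty? lst) 0 (+ (first lst) (sum-rec (rest lst)))))

; iterative version: one pass over the list, no argument copying
(define (sum-iter lst)
  (let (total 0)
    (dolist (x lst) (set 'total (+ total x)))
    total))

(sum-iter '(1 2 3 4 5))  ; => 15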



The work I am currently involved with at Kozoru is heavy in algorithms (for natural language processing) which traditionally were often coded recursively, but we never ran into limitations of the sort you are describing, or it was easy to work out solutions more suited to newLISP's style of programming.



You should try newLISP on bigger programs/projects. The context mechanism works well to divide a bigger problem into modules.



Lutz



Ps: The memory referenced by context references will go away automatically when that memory is not referenced anymore, just like in any other scripting language in that situation.
Title:
Post by: Dmi on October 20, 2005, 08:18:22 AM
eddier:

As far as I understand, there is another nice way of passing parameters by reference in newLisp - using dynamic scope:
(define (recur p1 p2)
  (do-something-with-list lst)
  (recur [value] [value]))

(define (start-recur params)
  (let (lst '())
    (recur [value] [value])
    lst))
(start-recur [values])

so "lst" isn't a persistent context variable, _and_ it is passed dynamically by reference to inner functions.



Lutz, if I'm right, this trick is a nice candidate to explain in the manual.
Title:
Post by: eddier on October 20, 2005, 10:14:36 AM
Lutz,



To unreference memory, do I need to make sure no variable or function is bound to the referenced memory, or do I need to delete the context?


QuoteI think this is just a question of programming style. There is always the possibility of recoding a recursive algorithm in a linear, iterative fashion. Traditional LISP and Scheme focus a lot on recursion, but it doesn't have to be this way. There are situations where recursion is the more elegant solution, but there are also situations where a recursive solution is inefficient or obscure, even when using reference passing of parameters.



Yes, I understand that recursion and iteration are equivalent as far as computation goes, but code dealing with tree structures "seems to me" a lot easier to write and maintain if it is done recursively.



Dmi,



I see what you are getting at, and it seems to make sense. This seems to be a reasonable approach if the function is called from only one other function. IMHO, there are two pitfalls to this approach: (1) I won't be able to compose functions using this parameter-passing approach, and (2) in medium to large programs it will hide parameters deep inside functions that should be made visible (to the reader) through a function's arguments. Although this could be mitigated by comments, it also obfuscates how a function should be called.

 

Eddie
Title:
Post by: Dmi on October 20, 2005, 01:11:30 PM
I haven't much experience with dynamic scope usage... and I'm still trying to understand whether it is a better or worse thing :-)

Multiple recursive function calls aren't completely impossible:
.....
(let (lst some-list)
  (recur params)
  (let (lst another-list)
     (recur params)))

But, yes - using both 'lst' values at the same time will be impossible :-(



About readability: yes, in this case we must add additional comments to the function definition.



About contexts: their naming and persistence in a dictionary is the price paid for the absence of garbage collection. I think it's not a disadvantage, but newLisp in general really requires its own programming style.

Possibly the experience, tricks and rules of the "mythical" Kozoru programmers can be extracted and published? (once upon a time)
Title:
Post by: Lutz on October 20, 2005, 01:57:47 PM
Eddie,



To release memory, just make sure nothing is referencing it and it will go away. Thanks for the dynamic scoping/recursion trick.



DMI,



Unused memory is collected; newLISP does recycle unused memory, it just does it in a different way, not using traditional garbage collection techniques like mark-and-sweep or reference counting.



Contexts/namespaces have nothing to do with this. In fact, namespaces in other interpreted languages are done in a similar fashion. They only differ in the way they organize symbols internally: some (like newLISP) use binary trees, others use hash tables.



Lutz
Title:
Post by: eddier on October 20, 2005, 02:18:34 PM
Thanks Lutz,



Dmi posted the dynamic scoping/recursion trick. Credit where credit is due.



Eddie
Title:
Post by: Dmi on October 21, 2005, 01:59:31 AM
QuoteUnused memory is collected; newLISP does recycle unused memory, it just does it in a different way, not using traditional garbage collection techniques like mark-and-sweep or reference counting.



Contexts/namespaces have nothing to do with this. In fact, namespaces in other interpreted languages are done in a similar fashion. They only differ in the way they organize symbols internally: some (like newLISP) use binary trees, others use hash tables.


The problem (or technical misunderstanding) with contexts in newlisp is caused by the fact that they are proposed not only for namespace modules, but also for regular variable tricks (for example, for access by reference).



Let me explain:
newLISP v.8.7.0 on Win32 MinGW, execute 'newlisp -h' for more info.

> (set 'ctx:var 1)
1
> (let ((ctx:var 5)) ctx:var)
5
> ctx:var
1
> (let ((ctx1:var 7)) ctx1:var)
7
> ctx1:var
nil
> (context? ctx1)
true
>

As we can see, context variables are subject to dynamic scoping and creation.

But the contexts themselves are NOT: if I create a dynamic context variable, it will be destroyed when the code block is closed, but the context definition itself stays persistent.

Most people (myself included) would logically expect the context to be destroyed as well.



The real expectation: the context symbols must behave like regular symbols:

- there must be a way to create them in MAIN (function 'context' does that)

- there must be a way to create a dynamic context with a 'let' declaration.

-- Such a context must be initially clean (if not initialized implicitly), regardless of the contents of the overlapped one.

-- This context must be destroyed according to the common rules for 'let'.

-- It would be nice to have a native way to return a context's contents (this can be partially imitated with 'source').

- there must be a way to reference the current dynamic-scope context or a globally defined context in MAIN.



Something like this :-)



P.S. Yes, in most cases at present we can accept the small dictionary overhead for contexts that remain after a dynamic variable declaration, but potentially, in situations of namespace overlapping, this will be a source of errors.
Title:
Post by: Dmi on October 22, 2005, 08:56:27 AM
Hm... the joke is on me :-)

The nice context behavior I proposed earlier is not applicable for "using contexts for passing parameters by reference", because that is based on the global/static property of contexts.

That happens ;-)
Title: And now, an application of "The Test of Time"
Post by: rickyboy on October 04, 2011, 09:35:07 PM
Quote from: "Qrczak"I'm afraid [newLISP] is ... broken beyond hope of repair. It could be improved in minor ways ... but it's too little, to [sic] late.

Hmmm ... I know what's broken beyond repair -- and it's not newLISP -- it's Kogut, the language designed by the "impressive" genius Qrczak himself! The download page on SF, linked to from the main page, indicates no significant developer activity in several years. (Changing references to the author's email address in the source code listings 3 weeks ago, of course, doesn't count as "significant developer activity".)



Here's the user hangout: http://sourceforge.net/mailarchive/forum.php?forum_name=kokogut-users.  It's mostly Qrczak and some dude named "Daniel" talking back at each other as if it were their private forum.  Hardly any of the activity that goes on here.  (One wonders, then, if Qrczak and Daniel are not indeed the same person.)



The FAQ is also telling.
Quote from: "The Genius's Sole Minion"The focus of the development of Kogut is making the code work correctly, not make [sic] it well-documented. Once Kogut has a stable codebase with tons of libraries, Marcin will spend more time on improving the documentation, but until then, it's up to the community.

Ha! There is no community. Kogut is dead. This brings to mind something that John Small said at the time: "The open market of ideas will ensure the driftwood is carried out to sea and forgotten." This piece of driftwood is long gone.



Rest in Peace, Kogut!



http://profile.ak.fbcdn.net/hprofile-ak-snc4/27526_128828310465519_4126_n.jpg
Title: Re: newLISP now on Graham's list
Post by: cormullion on October 05, 2011, 01:01:53 AM
:)



Still, satisfaction ratings in the Kogut community must have been near 100% for most of its life, so that's something worth praising...
Title: Re: newLISP now on Graham's list
Post by: newdep on October 06, 2011, 12:05:14 AM
AHA! So that's the language Qrczak was bragging about years ago...



Lisp/scheme still has the longest breath ;-)  hahahah...