This is ridiculous

Started by Qrczak, October 01, 2004, 03:54:57 AM


cormullion

#15
Quote from: "newdep"That "Qrczak" guy simply does not know where he can dump his frustrations...Hahaha...



He keeps posting about newlisp, what better advertisement can you get ;-)..Hahahaha...



http://www.codecomments.com/archive274-2005-10-658487.html


Got to admire his tenacity too: http://en.wikipedia.org/w/index.php?title=NewLISP&action=history



I'm envious of his copious spare time.. I hardly have enough time to write about the things I like these days.

starseed

#16
He has one point, though.



That contexts use a global variable name strikes me as suboptimal, too.

Especially as they are promoted as a way to pass large structures by reference, and to be used to implement objects.

Lutz

#17
Quote... context use a global variable name ...


In the big picture of things this is the right thing to do: it sets limitations but also makes the language easier to understand. Contexts in newLISP are mainly understood as namespaces to organize your code, create isolated modules and do simple object wrapping. On purpose contexts in newLISP are not elements of a potentially multi-level object hierarchy like in a full blown object oriented language. This doesn't mean that you cannot use them as objects in a limited, useful fashion, but most of all newLISP is a functional language.



Inside these constraints contexts allow you to do with them all you need (use them as object- and module wrappers, assign them to variables, pass them as parameters, create and delete them during runtime). Adding anything more would only complicate the usage of the language.



There are always voices who want to build all paradigms known to CS into newLISP, but the language would be less usable and harder to understand.



Lutz

starseed

#18
Quote from: "Lutz"
Quote... context use a global variable name ...

There are always voices who want to build all paradigms known to CS into newLISP, ...


Believe me, I am not proposing this. I like it small and sweet. ;-)


Quote from: "Lutz"In the big picture of things this is the right thing to do: it sets limitations but also makes the language easier to understand.


The thing is, to me, these limitations look artificial (1). And I believe, that artificial limitations make a language harder to understand.



As long as contexts are only meant as namespaces, you are right, your implementation is the right thing to do.



But by now some clever soul has figured out: they can be used for different things (objects, ...), and these are even officially documented.



I believe that from this point onward it actually becomes harder to understand newlisp, because I have certain ideas about how an 'object system' works, and newlisp breaks these assumptions. (In the context of "contexts" being officially blessed for use as objects.)


Quote from: "Lutz"Adding anything more would only complicate the usage of the language.


I don't want to add anything - to the contrary - I want a limitation removed. ;-) (2)





Ingo



(1) I'm aware that, from an implementation point of view, things may look quite different. For example, the current way of defining default actions would not work with anonymous contexts.



(2) I hope you are aware that I am discussing this in the hopes to either understand the rationale behind these limitations, or to initiate changes that will make a great language even better.

Lutz

#19
Quotediscussing this in the hopes to either understand the rationale behind these limitations


It is a deliberate design decision to keep the newLISP object system flat and simple, mainly used for handling a few big objects like programming modules, dictionaries (see 'sym', and the 2nd and 3rd syntax of context), reference wrappers for big lists, strings, etc., and this is how most newLISP users use contexts.



The main method of organizing data and functions in LISP should always be the list, not a class/object hierarchy. newLISP goes a step further than traditional LISPs by offering indexing into nested lists and implicit indexing on simple and nested lists.



The default functor in newLISP is a nice conceptual bridge between the context/namespaces system and the functional side of the language. newLISP is meant to be mainly a functional language.



I hope that after using newLISP for some time you can see these design decisions and philosophy not as limitations but as something which makes newLISP a more powerful tool. Less is more ;)



Lutz

frontera000

#20
Well... I am a novice newlisp hobbyist but I have lots of time on my hands, so I will tackle some of your points.  I love Common LISP but I love newLISP more.  Mainly because it is so much smaller and simpler, yet allows me to code fun stuff quicker and easier.  But I still do love Common LISP as I do C.  Programming languages and fonts are those kinds of things that are very personal, you know...


Quote from: "Qrczak"The name "newLISP" would suggest that it builds on old Lisp experience, but in fact it ignores Lisp history and uses rules which have been tried and rejected as bad ideas, like dynamic scoping by default.


I personally like dynamic scoping by default.  I like that newlisp lets me use contexts when I want to isolate variables.  Traditional lexically scoped programming languages can cause problems sometimes.  Some C programmers resort to using ridiculous variable names to avoid name clashes despite lexical scoping.  Here's a stupid example.



int xyz = 1;                                       /* global */

int myfunc(int xyz) { /* ... */ return xyz; }      /* parameter shadows the global */

int anotherfunc(int num) { /* ... */ xyz = num; return num; }  /* writes the global */



Bad programming?  The C language allows something like this.  BSD kernel code convention used to be to name globals with a g_ prefix, like g_xyz.  But the language allows you to do many bad things.



Even Common LISP allows dynamic scoping.  One could argue that newlisp can entirely be used with lexical scoping via contexts, so the newlisp design is no worse than Common LISP in this regard.




Quote from: "Qrczak"
"Excepting symbols and built-in primitives, each object is referenced only once in the system and is always passed by value." - This implies that a recursive function which conses an element on each iteration to a growing list, or even descends a list by iterating 'rest', has O(N^2) complexity, or worse if elements themselves are big.



This also makes impractical to create graphs of mutable objects: you can't refer to a part of it for further modification because it will be implicitly copied. IMHO this very fact makes it not a Lisp at all. Every Lisp, and in fact most modern languages, pass objects by reference and collect unreachable garbage. In every Lisp you can look at the rest/cdr/tail of a list without copying it.


Again, I prefer to pass by value rather than by reference most of the time.  How many times have C programmers run into unexpected changes in data that was passed by pointer?  Large software systems constructed this way can be a nightmare to debug.  The overhead of passing small values is just that: small.  I prefer knowing that what I pass won't change until I change it.  I don't want something changing it for me unless I ask for it, which is not most of the time.



If I want to pass big data by reference I can do so using contexts in newlisp.  Besides, if I am extremely concerned about performance, I'd use C and interface it to newlisp, which is easily done.


Quote from: "Qrczak"
"Many of newLISP's built-in functions are polymorphic in type, accepting a variety - and sometimes all - data types and thereby greatly reducing the total number of different functions and syntactic forms necessary to be learned and implemented." - Except arithmetic, where it would make the most sense. Not just the defaults are limited, but there are no arithmetic operators which do the right thing (return an exact answer when possible).



No wonder why correctly behaving arithmetic is not provided: having only fixed-range integers, without even checking for overflow, and floats, is pathetic. This set of types just can't exactly represent 1/10 nor the size of my hard drive in bytes. Is it Lisp or C?


I wouldn't bother using bignums in newlisp even if they were offered.  I'd rather do bignum or heavy number crunching in C.  newLISP is not intended for that kind of problem solving; there are ways of doing it in other languages already.  Why make newlisp into a bloated pig by throwing in all kinds of things most people won't use?  Why make it into a Common LISP?


Quote from: "Qrczak"


cons which does a completely different thing depending on whether the second argument is a list is stupid. If there are improper lists, then fine - make it an error if it's not a list. But making a two-element list *except* when the second argument was a list makes absolutely no sense, because if you rely on cons making a two-element list, you will be badly surprised if the second argument happens to be a list. There is no context where you really want either this or that; in all cases you statically know which behavior you want, so it makes no sense to put them both in one function. I bet that the two-element behavior of cons is not used in practice, because it's not robust.


In practice, this is really not a problem at all.  An example would help to prove your case.  It sounds a bit pedantic.


Quote from: "Qrczak"
A hash table is associated with a global variable. What?! I can't make a list of hash tables without inventing unique names for them? This applies to objects as well. Is it Lisp or Tcl?


How many times have you had to make use of a locally scoped hash or object?  That is a bad programming practice to begin with.  I wouldn't do that, so I like the way newlisp does things.


Quote from: "Qrczak"


The implementation of hash tables is interesting:

> (load "modules/hash.lsp")

true

> (HASH:make 'h)

h

> (h:put "(" 'open)

open

> (h:get ")")

open

...Huh?!






newLISP v.8.9.3 on Win32 MinGW, execute 'newlisp -h' for more info.



> (context 'HASH "(" 'open)

open

> (context 'HASH "(" )

open

> (context 'HASH ")" )

nil

> (context 'HASH ")" 'close)

close

> (context 'HASH ")" )

close

>


Quote from: "Qrczak"
"Unless otherwise specified for a function, an index greater than N-1 returns the last element, and an index less than -N returns the first when in lists, but out of bounds indices in arrays will cause an error message." - long live the consistency. I see no purpose in returning the last element of a list when the index is too large - it will just mask programming errors.


That is why one would use (last some-list) instead.  In practice, programming errors don't really happen much because of returning the last element when the index is too large.  In fact, it has helped me a bit when using newlisp interactively to parse XML data.  If you can provide an example of when this feature can cause programming logic errors, you might get a better response.  If you think about it, the feature does what you expect in real life.


Quote from: "Qrczak"




"constant: Works exactly like set but protects the symbol from subsequent modification. A symbol set with constant can only be modified again using constant." - Funny, a constant which is not constant.


It is a constant that can be changed explicitly.  A novel feature, I think.


Quote from: "Qrczak"


"If num is supplied, dec always returns the result as floating point even for integer input arguments." - Why?


> (setq x 10)

10

> (dec 'x 2)

8

> (dec 'x 2.0)

6

> (dec 'x 0.1)

5.9

>


Quote from: "Qrczak"
"If the expression in exp evaluates to a string, it will be replicated int-n times in a string and returned. For any other type in exp a list of int-n of the evaluation of exp is returned." - How to make a list with N references to the given string (sorry, N copies of the string)? Again an inconsistent behavior which doesn't make sense - if you want to make a list using dup, ensure that the element is *not* a string, because you will get a different thing in this case.


dup is not used to make a list with N references to the string; it is used to duplicate strings and other things.



> (dup "string" 10)

"stringstringstringstringstringstringstringstringstringstring"

> (dup "string" 10 true)

("string" "string" "string" "string" "string" "string" "string" "string"

 "string" "string")

>


Quote


A similar problem is inherent in flat. The documentation says: "To convert a list back to an array apply flat to the list". It will break if the array is supposed to contain lists as some elements. These operations which do a particular thing for almost all values, *except* when values are of a particular type, are just misdesigned.


flat returns a flattened form of a list.


Quote
"get-char: Gets a character from an address specified in int-address." - An address represented in 32 bits? So it will not work on 64-bit machines. What a pity. C used to almost unify pointers with integers about 30 years ago.


An internal address can be defined to be anything when running inside newlisp, which might be hosted on a 64-bit machine.  On 64-bit machines one can in theory build a newlisp with 64-bit integers if that is what must be done.  You are confusing machine addresses with newlisp addresses inside the interpreter.


Quote


File handles represented as ints and I/O operations using system calls directly, with no buffering? I knew that this is C 30 years ago.




There is buffering.  write-buffer exists.  A file handle is something returned by open.  It being an integer does not make newlisp bad; most Unix machines use file descriptors that are integers.  One could use a 64-bit value, but what would be the point?  How many files will you have open?  It would be just as easy to refer to files with names associated with a symbol, which is how I think of file handles as returned by open.  In fact I just use (read-file "filename"); who cares about file descriptors?


Quote


In general, I'm disgusted. There is some work in wrapping C library functions, but the core language design, number and list processing is awful.


Why would one be disgusted by a software development tool like newlisp which is provided for free?  You don't have to use it.  Different people have different tastes.  I love it and many others do.  I thank Lutz for providing it for free!   If you feel so disgusted, perhaps you can design a better language, implement it to run on most major platforms, and give it away for free like Lutz.




Quote
The name newLisp is misleading: it's not a Lisp and it's not modern. It tries hard to have neither garbage collection nor pointers, which implies that pointers must be simulated using unique constants which denote globally accessible structures, as if it was Tcl or ancient Fortran. It throws away the progress made in programming language design in the last 30 years.


In my view, newLISP is closer to the original LISP than Common LISP is, because it is compact, efficiently implemented, and attempts to solve only some problems, not all -- in contrast to Common LISP, which tries to cover many problem domains while keeping everyone happy with all the old compatibility routines.  Common LISP has not evolved with the rest of the world, and it lacks modern programming features that newlisp has.  And newlisp's design leaves other problems to the languages that are good at them (e.g. C for certain things).

arunbear

#21
Quote from: "pjot"
Concerning Perl. Reading the PERL man-page (man perlsub):



"The Perl model for function call and return values is simple: all functions are passed as parameters one single flat list of scalars, and all functions likewise return to their caller one single flat list of scalars.  Any arrays or hashes in these call and return lists will collapse, losing their identities--but you may always use pass-by-reference instead to avoid this."





So by default, Perl passes by value.


Wrong. If you read further into perlsub, you will find this:
Quote
Assigning to a list of private variables to name your arguments:

sub maybeset {
    my($key, $value) = @_;
    $Foo{$key} = $value unless $Foo{$key};
}

Because the assignment copies the values, this also has the effect of turning call-by-reference into call-by-value. Otherwise a function is free to do in-place modifications of @_ and change its caller's values.

Try running Qrczak's snippet - code does not lie.

pjot

#22
Well, I am tired of this "yes/no" game; this will be my last posting regarding this issue.

frontera000

#23
Many people are confused about the terms "pass by value" vs. "pass by reference".



Qrczak's perl example:


Quote
sub change($) {++$_[0]}

my $x = 5;

change($x);

print "$x\n";



Result: 6


Here's a case where $x is passed as a true reference.  The subroutine can internally assign a value to $x and the result is reflected after it has returned.  The $x is changed.  This is similar to a C++ reference (via &), or to C's way of taking an address (via &) and passing it to a function which expects a pointer.  After the subroutine returns, the referenced value has been changed to a new value.



This is a very questionable programming construct, although I have used it myself.  In large software this kind of programming can cause a lot of problems.



Note, the CLISP example by Qrczak:


Quote


[1]> (defun test (x) (rplaca x 44))

TEST

[2]> (defvar obj (list 2))

OBJ

[3]> (test obj)

(44)

[4]> obj

(44)


Here is the case where the contents of the list can be changed by the subroutine, but the symbol obj (the reference to the list) cannot be rebound by the subroutine.  This is because LISP passes a copy of the reference.  The list inside function test (called x) is a copied reference.  Assigning a new value to x will not change the original reference (obj).



Java works the same way.  Do a Google search on Java and pass by reference; you will see a lot of messages, including a flippant quote from James Gosling about it.  Gosling is used to thinking the Lisp way -- he wrote Unipress Emacs (an ancestor of GNU Emacs), which used Mocklisp as its extension language (and which, by the way, used dynamic scoping).



The problem with this kind of copied "reference passing" (in contrast to the way Perl does it) is that it is unclear from the syntax: the same syntax is used to denote pass by value and pass by reference in these languages.  Of course, the advantage is that it is probably safer this way.



C#, on the other hand, uses "ref" in a similar way to C++'s &, precisely to avoid such confusion.





There are different ways : pass by value (CORBA), pass by copied reference (java, common lisp, etc.), pass by reference (perl, C++, C#, and C when used with &), and pass by value + explicit pass by reference copy using contexts (newlisp).



Guess which one I prefer?  :-)



By the way, as much as I love Common LISP, I prefer newLISP.  I was thinking perhaps a better name for newLISP would be UnCommon LISP, as the lisperati folks seem to hate us so much.  But I guess that name is taken by Scheme.

newdep

#24
After all, even if Qrczak doesn't drop by this forum anymore, he scores quite some reading hits on this forum..hahaha... but still it all comes back to comparing apples and pears..
-- (define? (Cornflakes))

konrad

#25
Quote(4) Integer and Float Arithmetik

You can redefine the integer operators (see manual) and then you get just the same number treatment you have in other scripting languages.


This is about the only criticism I would agree with.  + - etc. should be smarter.  And by smarter I don't mean they should always return floats; that is as dumb as always casting to an integer.  Having spent a long time in Python probably gives me some bias, but having arithmetic operators which work in integer mode if you give them integers and in float mode if you give them floats seems to be the best alternative.



In Python (yes, I'm comparing languages) ints are cast to floats implicitly if and only if they are being combined with something which already is a float, so that



1+1 = 2



1 + 1.5 = 1.0 + 1.5 = 2.5



Granted, doing this efficiently could be non-trivial, but I would on balance rather it be the general case, with a set of explicitly integer variants for cases where I know I really want integer operations.



iadd etc., perhaps.