Something called Arc: http://arclanguage.org/
It has been discussed for a long time here:
http://www.paulgraham.com/
So everyone can now see whether the huge expectations for Paul Graham's latest creation turn out to be real.
http://www.paulgraham.com/arc0.html
http://ycombinator.com/arc/tut.txt
I can remember a seminar or paper where Paul mentioned Arc as a language
he had carried with him for a long time but never got around to creating.
...Now when I look at the link I see another Lisp-Cream... And that's quite
logical, because you can't twist a lambda much ;-)
A side note: MzScheme is under the LGPL.
So should Arc be as well?
I understand that they use MzScheme because Arc is at an initial stage.
But I like that they created it in Scheme ;-)
My expectations were quite high, but there are some good things to look at ;-)
I believe that for example Clojure innovates more:
http://clojure.sourceforge.net/
Fanda
I read through the arc tutorial and am thoroughly unimpressed.
Most of the stuff I like newLISP already has, and there's no mention of any integrated web stuff (like net-* or *-url).
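For readers who don't know newLISP's built-ins, this is the kind of thing meant; a one-liner that works out of the box (the URL is just an example):

(get-url "http://www.newlisp.org")  ; fetches the page as a string, no library needed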
The one interesting feature I thought might be useful to put in newLISP is some version of Arc's uniq macro functionality. Quoting in full:
Quote
The way to fix repeat is to use a symbol that couldn't occur in
source code instead of x. In Arc you can get one by calling the
function uniq. So the correct definition of repeat (and in fact
the one in the Arc source) is
(mac repeat (n . body)
  `(for ,(uniq) 1 ,n ,@body))
If you need one or more uniqs for use in a macro, you can use w/uniq,
which takes either a variable or list of variables you want bound to
uniqs. Here's the definition of a variant of do called do1 that's
like do but returns the value of its first argument instead of the
last (useful if you want to print a message after something happens,
but return the something, not the message):
(mac do1 args
  (w/uniq g
    `(let ,g ,(car args)
       ,@(cdr args)
       ,g)))
I've always been ok with using leading underscores, but I could see the use in being able to use syntactic sugar like the following ...
(define-macro (foobar (m-uniq x y)) (set x (eval y)))
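newLISP has no built-in uniq, but a rough equivalent is easy to sketch. The names gensym-count, gensym, and my-repeat below are my own, and the code is an untested illustration of the idea rather than an established idiom:

; a counter-based uniq: returns a fresh symbol on each call
(set 'gensym-count 0)
(define (gensym)
  (set 'gensym-count (+ gensym-count 1))
  (sym (string "gs-" gensym-count)))

; a capture-safe repeat, analogous to Arc's version quoted above
(define-macro (my-repeat n)
  (letex (i (gensym) m (eval n) body (cons 'begin (args)))
    (for (i 1 m) body)))

(my-repeat 3 (println "hello"))  ; prints hello three times

Because the loop variable is a freshly minted symbol, a body that mentions x (or any other name) cannot collide with it.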
My $0.02
I backtracked to this from a posting on reddit you might find interesting...
Arc's not even an acceptable modern Lisp (xent.com)
http://reddit.com/r/programming/info/677u6/comments/
[FoRK] Arc's out, Nu vs. newLISP, and a retort to Paul Graham's elitism
http://www.xent.com/pipermail/fork/Week-of-Mon-20080128/048238.html
Quote
...I'm coming to the conclusion that I reallyDontLike: ObjCStyleCode:
dueToTheRidiculouslyLongSelectors. Really, I find it hard to read.
OTOH, here is (mostly) the same thing in newLISP (sans shebang script
scaffolding, urlencoding and arg processing...)
(define (shrink-url url)
  (get-url (append "http://tinyurl.com/api-create.php?url=" url)))
newLISP is really a weird beast. It is definitely NOT lexically
scoped; rather, it has a weird fluid or dynamic scoping mechanism built
on the idea of first-class environments called "contexts." Contexts
form the basis of a kind of pseudo-module system, pseudo-object
system, etc. But it's funky, feels sort of retro. OTOH, it's really
seriously batteries-included, though; has built-in remote execution,
a web server with RESTful app framework --- that's built into the
interpreter, command-line option, NOT a library. It's even got
funky stuff like bayesian classifier functions and finance functions
like npv all built in --- not broken out into libraries or a standard
lib or anything, just built-in functions. And from the docs it's not
clear that the author understands the difference between threads
(which it doesn't support) and processes (which it does via fork and
exec wrappers.) Nonetheless it's one of the most immediately useful
out-of-the-box programming environments I've seen in a while for a
wide variety of programming tasks.
And it's got the Best. Logo. Ever.
One of these days, if folks don't get their language crap together,
I'm going to have to scratch that lifelong itch and roll my own.
Maybe arc will help me avoid that.
Quote
But it's funky, feels sort of retro.
That's my kind of stuff! :)
Here's some more "nothing wrong with retro" retro:
http://www.fat-man.co.uk/docs/product_07/iTube_Mothership.shtml
Arc is supposed to be the 100-year language. Well, yeah, it's taken nearly that long to develop it, and when we get it, it's a layer over Scheme.
Advantages/disadvantages when compared to newLISP?
Advantages: lexical scoping
Disadvantages: built on scheme, speed, less mature
Quote from: "Jeff"
Advantages: lexical scoping
Lexical scoping is nothing but a computer "science" shibboleth.
Then why use contexts or OOP at all? OOP is entirely based on static scope.
The only other active languages I know of that use dynamic scoping are Perl and Emacs Lisp (if you consider that an active language).
I'm not saying that dynamic scoping is bad, but lexical scoping certainly cannot be denied as a valid and useful convention.
Quote from: "Jeff"
Then why use contexts or OOP at all?
Why indeed. Especially when implemented in a half-assed kludgy way as in C++ or Java.
As for newLISP, I tried using contexts as objects but the performance was dog slow. Whether FOOP is worth it remains to be seen. I certainly haven't seen any advantages to it yet. Every example seems to be more wordy and complex than just using lists and defined functions.
Pre-compiled components in packages or purchased code are OK, especially for gui stuff but that's not really what OOP is all about.
I remain sceptical and especially so as far as lightweight scripting languages like newLISP are concerned.
Don't confuse my defense of lexical scope as advocating OOP :). Arc and newLISP have different aims.
That being said, I was skeptical about FOOP, too. After playing with it a bit, I found I was able to model types like I do in OCaml (which has very expressive types).
There was certainly overhead, but it was not atrocious. I think that it was certainly worth the enhanced readability in my code.
I wouldn't use it for everything, especially applications creating large numbers of prototyped contexts. But some things really lend themselves to the object model, like ORM.
I believe the various styles of programming (FP, OOP, SP, FOOP) appeal to different mindsets. OOP, for instance, appeals to creative types, while FP appeals to the mathematically minded. No approach, unless fundamentally flawed, should be considered "wrong," as this implies that a person's way of thinking is "wrong" if they prefer a style different from our own.
Is vanilla ice cream the "One True Way"? Or chocolate? How about Strawberry? Neapolitan, perhaps? The answer is: all of them! One for every taste. And in the case of programming, one for every mindset.
FP and OOP exist at opposite ends of the programming-style spectrum. If the OOP part of FOOP can develop sufficiently to handle its side of the spectrum (newLISP already handles the other side), then any style in between should be possible. Even future styles!
Then again, this could turn out to be a big pile of FOOP ;-)
m i c h a e l
I disagree with the statement that OOP is for creative types and functional is for mathematical types. I'm not mathematical, and I find functional languages to be much more elegant than the majority of OOP languages. I think that OOP can easily become a crutch, and that many programmers try to model *everything* at the expense of logic, efficient and elegant algorithms, and overall program design.
I do like FOOP, though. It reminds me of the clarity of syntax of OCaml types.
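For anyone who hasn't seen FOOP, here is a minimal toy sketch of the style being discussed. It uses the colon-dispatch syntax of later newLISP releases, and Point/move are my own example names, so treat it as illustrative only:

(new Class 'Point)            ; Class supplies a default constructor
(define (Point:move p dx dy)  ; a "method": takes the object, returns a new one
  (list Point (+ (p 1) dx) (+ (p 2) dy)))

(set 'p (Point 3 4))  ; => (Point 3 4), a plain list tagged with its context
(:move p 1 1)         ; => (Point 4 5); the original p is untouched

The functional part is that objects are ordinary lists and methods return new objects instead of mutating state.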
My use of "creative types" and "mathematically minded" was perhaps a bad choice of words. Maybe "non-programmers" and "programmers" would have been better? In fact, let's throw out the entire sentence altogether. What's important is that you have a preference, and that your preference should be respected.
The impression I've received over the last 16 years is that programmers tend to hate OOP with a capital "H," and that artists and children seem to prefer playing with objects. I've never understood the disdain programmers have for OOP. (In fact, it reminds me of the disdain Common Lispers have for newLISP!) As I said in the above post, I assume it stems from having a mindset at odds with the OOP style.
m i c h a e l
Nope. My problem with OOP is that it tries to force round pegs through square holes and it leads to a lot of poor programming practices. It's a way of trying to fix the maintainability of procedural code, and as an abstraction it can be very inefficient.
I like Python's balance of functional and OOP, as well as newLISP's. Neither requires it, but both have it available for problems that are best solved by modeling.
My feeling is that OOP is best used as an expressive way to model data types. The reason lisps generally look down on OOP is that most data can be just as easily represented as a nested list (which can also store functions that act upon that list).
Modeling the logic of the program as objects is trying to force a verb to be a noun. Functional programming or plain procedural code is best for this. I dislike Python's ", ".join(some_list) syntax, where it becomes difficult to tell what is acting on what.
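For comparison, the newLISP spelling of that operation keeps the verb in front (a trivial illustration):

(join '("a" "b" "c") ", ")  ; => "a, b, c" -- the function acts on the list

Here it is immediately clear that join is doing the acting and the list is what is acted upon.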
As for Arc ... I liked the tutorial, although I didn't download the software. It's interesting that they wrote a tutorial aimed at "readers with little programming experience and no Lisp experience", given Arc's natural audience. I might borrow a few passages for my own tutorial.
At first glance, to my non-technical eye, the Arc language looks a bit like newLISP for people who don't like typing. I prefer the more English-like style of newLISP - I'm a fairly fast touch-typist and my reading's not too slow, so I can handle the extra characters required...
If it's so similar to newLISP then why even bother writing it in the first place when newLISP is so much more mature?
A lot of its popularity is due to this being Paul Graham's big project and the anticipation that has built up over the years it's been in development.
To me, it's a great thing to have a language allowing you to use both OOP and FP techniques. That's why I wanted to have OOP in newLISP so much. I can actually model some interesting things with it.
My guess is that languages like Python and Ruby are so popular because they also allow both.
I see a great beauty in declaring a class with methods written in functional style. It looks so much better than imperative for/while loops.
[Warning: commercial follows]
I have also been searching and found an interesting language called Scala. It is compiled, runs on the JVM, can import Java libraries, supports OOP and FP in a new way (pattern matching on objects/types), and has interesting ways to extend it. I don't understand it yet, and it seems much more complex than newLISP, but I think it's worth looking into :-)
Fanda
PS: It is funny, but after some programming using my OOP framework, I came to understanding that FOOP has some interesting properties, which I would like to add to OOP framework :-)
Scala is an interesting language. I played with it for a while, but my implementations in it always ended up being very slow (which may or may not say something about the language).
And what about Clojure? I think it is a far more daunting competitor to newLISP than Arc... among other things, because of its lexical scoping (by default).
This issue of "dynamic vs. static scoping" has concerned me a lot lately, probably because I finally realized what it was ;) ... so I tried some comparisons between Scheme and newLISP:
(define foo
  (let ((a 10))               ; private environment of the lambda
    (lambda (x) (+ x a))))

(define a 1000)
(foo 1)
; => 11   in Scheme, which seems logical
; => 1001 in newLISP, which is surprising at first glance
And I tried to find a way in newLISP to approach Scheme's behavior; I found:
(define foo
  (letex ((a 10))
    (lambda (x) (+ x a))))

(set 'a 1000)
(foo 1)
; => 11, as in (lexically scoped) Scheme
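If I read letex right, this works because the binding is expanded into the template before the lambda is built, so the resulting function no longer refers to a at all. A sketch of what foo should contain afterwards:

foo  ; => (lambda (x) (+ x 10)) -- the value 10 was baked in at definition time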
But I don't know whether this is the correct solution?
I have also tried other things:
;; newLISP
(define x 1)
(define (f) x)
(f)                ; => 1
(define (g x) (f))
(g 0)              ; => 0

;; Scheme
(define x 1)
(define (f) x)
(f)                ; => 1
(define (g x) (f))
(g 0)              ; => 1
I can't get equivalent results in Scheme and in newLISP, due to the different scoping, and 'letex' is of no use to me here.
Does this question of scoping really matter when programming with newLISP? What are the drawbacks of dynamic scoping?
Dynamic scoping is much faster and simpler to implement, meaning faster execution times and less memory usage. Lexical scoping requires quite a bit of analysis and makes lookups slower (in interpreted languages).
Dynamic scope is more difficult to debug, because you have to track down where a function was called from to find the failure, and because a function may be called in so many different environments.
However, if you use properly decoupled functions, utilize throw-error with useful messages, and do not use "spaghetti code", dynamic scope is not difficult to maintain.
Particularly in web programming, where requests are stateless, combining dynamic scope with the save function means you can implement continuations by reinstating the entire environment as it was on the last request (to recreate the last state on each page load and continue from there).
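A minimal sketch of that save/reinstate idea; the Session context, its contents, and the file name are my own inventions for illustration:

(context 'Session)
(set 'user "ann" 'step 3)           ; whatever state the last request built up
(context MAIN)

(save "/tmp/session.lsp" 'Session)  ; end of request: persist the environment
; ... next request, possibly in a fresh interpreter ...
(load "/tmp/session.lsp")           ; reinstate it and continue where we left off
Session:step                        ; => 3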
Lexical scoping is very nice, though, especially for large applications.
In dynamic scope (especially if there are many functions calling each other) you must be very careful when you give names to local entities. Otherwise you may shadow some global entity and confuse the functions which rely on it.
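A toy illustration of that kind of accident (the names are mine):

(set 'width 80)
(define (report) (println "width = " width))
(define (layout width) (report))  ; the parameter unintentionally shadows the global
(layout 10)                       ; under dynamic scope this prints "width = 10", not 80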
newLISP has contexts which provide lexical closures to prevent namespace litter.
Here is a link to help understand the concepts involved:
http://en.wikipedia.org/wiki/Scope_(programming)
Quote
As such, dynamic scoping can be dangerous and almost no modern languages use it. Some languages, like Perl and Common Lisp, allow the programmer to choose lexical or dynamic scoping when (re)defining a variable. Logo is one of the few languages that use dynamic scope.
I think 'dynamic scoping' must be the number one most-cited reason for people not trying or using newLISP. You can't help wondering whether more people might use newLISP were this apparently 'major obstacle' to be changed in the future.
Although, judging from the statements of some of the people who've been actively disliking newLISP recently (don't they have anything better to do?), it might be a good thing to deter a few of them from the very start...
Me - I don't worry about scoping at all. :)
Thank you all for your answers; I can use them as arguments in discussions about this topic (scoping, etc.) without appearing too ignorant ... ;)
In fact, although it does not use static scoping by default, newLISP can still implement (functional) object-oriented programming, which is easier to use and clearer than classic OOP. Generally, some purists are surprised at (and even disturbed by) that.
I'll take advantage of the occasion to greet and thank Pavel for having made Elica and for his replies to my (sometimes "simpleminded") questions on other forums.
:)
Regards,
NewBert, aka Taobert, aka Bertrand ...
If you use Lisp... Then the name "newLISP" implies that you are using the old and possibly flawed Lisp... And you become defensive, denying that anything is old or flawed about Lisp...
If you use Scheme... You agree something is flawed with old Lisp... But you become defensive because Scheme fixed Lisp...
If you use newLISP... You are so excited that you can finally write your "Lisp" programs so that they work and do something more useful than boring textbook exercises, that it's hard to hold back your joy of wanting to tell everyone you meet!!!
Needless to say, that kind of EUREKA! feeling using "newLISP" irritates Lisp and Scheme programmers to no end...
Quote from: "newBert"
I have also tried other things :
;; newLISP
(define x 1)
(define (f) x)
(f)                ; => 1
(define (g x) (f))
(g 0)              ; => 0

;; Scheme
(define x 1)
(define (f) x)
(f)                ; => 1
(define (g x) (f))
(g 0)              ; => 1
I can't get equivalent results in Scheme and in newLISP, due to the different scoping, and 'letex' is of no use to me here.
Finally, I found two solutions:
(set 'x 1)
(define f (fn () x))
(define g:g (fn (g:x) (f)))
(f)    ; => 1
(g 0)  ; => 1

(set 'x 1)
(define f
  (letex (x x) (fn () x)))
(define g (fn (x) (f)))
(f)    ; => 1
(g 0)  ; => 1
And my (maybe simplistic) conclusions:
- newLISP gives you the choice between dynamic scoping and static scoping (thanks to contexts and/or 'letex');
- although newLISP is dynamically scoped by default, it can be explicitly lexically scoped too.
Isn't that an advantage over Scheme, which is exclusively lexically scoped?
Well, I'll stop there with my obsessions about scoping and spend my energy on something else :-D
Regards,
Bertrand
I don't think you can do memoization with newLISP because of its dynamic scope, as I discovered when working on streams. I'd love it if someone proved me wrong on this, but you can't use contexts to do it, because that would turn out to be way too ugly and inflexible, I think.
Here are two ways to implement memoization in newLISP. The first without contexts:
(define (mysqrt x ())
  (or (lookup x (nth (mysqrt 0 1)))
      (last (push (list x (sqrt x)) mysqrt '(0 1 0)))))
> (mysqrt 10)
3.16227766
> (mysqrt 10)
3.16227766
> (mysqrt 20)
4.472135955
> (mysqrt 30)
5.477225575
> mysqrt
(lambda (x ((30 5.477225575) (20 4.472135955) (10 3.16227766)))
  (or (lookup x (nth (mysqrt 0 1)))
      (last (push (list x (sqrt x)) mysqrt '(0 1 0)))))
>
An association list contains the memoized results, stored in the function itself.
The second solution uses contexts. I find it much cleaner, because the context packaging allows me to store other things besides the memoization memory, and it separates the function from the memoization storage:
(define (my_sqrt:my_sqrt x)
  (or (and my_sqrt:mem (lookup x my_sqrt:mem))
      (last (push (list x (sqrt x)) my_sqrt:mem))))
> (my_sqrt 10)
3.16227766
> (my_sqrt 10)
3.16227766
> (my_sqrt 12)
3.464101615
> my_sqrt:mem
((12 3.464101615) (10 3.16227766))
Lutz
Fine, we can really play with newLISP as we would with Scheme (or Lisp), but with more facilities. Things and concepts become clearer with newLISP.
;)
Quote from: "Lutz"
Here are two ways to implement memoization in newLISP. The first without contexts:
If you look at how memoization is used in streams (see the function memo-proc in this thread), I think you'll see that lexical scoping would be a better solution, the main point being that with streams, the function that is "memoized" is arbitrary.
Quote
(define (mysqrt x ())
  (or (lookup x (nth (mysqrt 0 1)))
      (last (push (list x (sqrt x)) mysqrt '(0 1 0)))))
An association list contains the memoized results, stored in the function itself.
This is a really cool solution, actually: you're redefining the function each time it's called with a new value. But there are two problems that I see with it. First, it's obviously less efficient to do a lookup in a list each time the function is called (the function then becomes at *least* O(n)) than to just get the answer from a statically saved variable that goes with the function. Secondly, you can only use this on functions that have this sort of definition; I don't think you can use this method to memoize arbitrary functions.
Quote
The second solution uses contexts. I find it much cleaner, because the context packaging allows me to store other things besides the memoization memory, and it separates the function from the memoization storage:

(define (my_sqrt:my_sqrt x)
  (or (and my_sqrt:mem (lookup x my_sqrt:mem))
      (last (push (list x (sqrt x)) my_sqrt:mem))))
Again, I believe this falls under similar criticism: here you can only memoize a function that's designed to be memoized.
Quote from: "Lutz"
Here are two ways to implement memoization in newLISP. The first without contexts:
(define (mysqrt x ())
  (or (lookup x (nth (mysqrt 0 1)))
      (last (push (list x (sqrt x)) mysqrt '(0 1 0)))))
> (mysqrt 10)
3.16227766
> (mysqrt 10)
3.16227766
> (mysqrt 20)
4.472135955
> (mysqrt 30)
5.477225575
> mysqrt
(lambda (x ((30 5.477225575) (20 4.472135955) (10 3.16227766)))
  (or (lookup x (nth (mysqrt 0 1)))
      (last (push (list x (sqrt x)) mysqrt '(0 1 0)))))
>
Wow Lutz, that just blew my mind. :D
Quote
here you can only memoize a function that's designed to be memoized
The following creates memoizing functions from any function and an arbitrary number of arguments:
(define-macro (memoize func mem-func)
  (set (sym mem-func mem-func)
    (letex (f func m (sym "mem" mem-func))
      (lambda ()
        (or (and m (lookup (args) m))
            (last (push (list (args) (apply f (args))) m)))))))
> (memoize add my-add)
> (my-add 3 4)
7
> (my-add 5 6)
11
> (my-add 3 4)
7
> my-add:mem
(((5 6) 11) ((3 4) 7))
>
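For anyone puzzled by the symbol juggling in the macro, here is my reading of what the two sym calls produce for the my-add example (illustrative, using string arguments, which sym accepts as well):

(sym "my-add" "my-add")  ; => my-add:my-add, the default functor of context my-add
(sym "mem" "my-add")     ; => my-add:mem, the symbol holding the cache list

So calling (my-add ...) invokes the generated lambda, and the cache lives alongside it in the same context.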
The fact that the lookup speed is O(n) is only an implementation detail of the lookup. Instead of using an association list, you could use b-tree namespace lookups when many different results have to be cached:
(define-macro (memoize func mem-func)
  (set (sym mem-func mem-func)
    (letex (f func c mem-func)
      (lambda ()
        (or (context c (string (args)))
            (context c (string (args)) (apply f (args))))))))
(memoize add my-add)
> (my-add 3 4)
7
> (my-add 5 6)
11
The cache is now the 'my-add' namespace itself. There is no danger of symbol clashes, as all the keys start and end with parentheses.
Note that in both solutions the memoizing cache memory can easily be made persistent by simply saving the context using 'save'.
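For instance (the file name is mine):

(save "my-add-cache.lsp" 'my-add)  ; write the whole cache context to disk
(load "my-add-cache.lsp")          ; reinstate it in a later session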
Lutz
ps: simplified context/hash creation
Quote from: "Lutz"
the following creates memoizing functions from any function and arbitrary number of arguments
Too bad for rand.

Fascinating stuff, Lutz, though I still think it seems a bit contrived compared to using lexically scoped functions. Nonetheless, I set about rewriting the (delay) function for streams using your macro, and I did some basic benchmarks too.
Here is the new delay function (let me know if you see any ways of improving it btw):
(define-macro (delay)
  (let (memo-func-name (sym (string (args 0 0) "-memo")))
    (if (nil? (sym memo-func-name memo-func-name nil))
      (letex (a (args 0 0) b memo-func-name) (memoize a b)))
    (letex (params (rest (args 0)))
      (letex (f (cons memo-func-name 'params))
        (fn () f)))))
Here's an example of its use:
> (delay (+ 1 1))
(lambda () (+-memo 1 1))
I discovered that it's actually just a tad faster to use the first version of memoize; I'm guessing this is because it doesn't have to create strings out of the parameters. However, once you start calling the same function with many different parameters, I'm sure the second version will become faster.
Using this new delay function results in a significant reduction in performance on the first call (to be expected), but in a significant speed-up if the *exact* same code is executed again (meaning even the parameters must be the same). I think that in most situations, an algorithm is probably only executed once with the same parameters, so I would probably recommend against this approach in newLISP.
Again, on the topic of lexical vs. dynamic scoping, I think that this example clearly demonstrates some of the advantages of lexical scoping, which would most likely result in much faster times on the first run.
Here's an example of what I'm talking about (I'll post more detailed examples in the stream thread):
> (time (odd-fibs 50))
16
> (time (odd-fibs 30))
11
> (time (odd-fibs 30))
1
> (time (odd-fibs 25))
11
Quote from: "Lutz"
the following creates memoizing functions from any function and arbitrary number of arguments:
(define-macro (memoize func mem-func)
  (set (sym mem-func mem-func)
    (letex (f func m (sym "mem" mem-func))
      (lambda ()
        (or (and m (lookup (args) m))
            (last (push (list (args) (apply f (args))) m)))))))
Lutz
Yeees! And it works with the example from Scheme that I had found difficult to adapt to newLISP.
;;; Scheme ;;;
; (define memoize
;   (lambda (proc)
;     (let ((cache '()))
;       (lambda (x)
;         (cond
;           ((assq x cache) => cdr)
;           (else (let ((ans (proc x)))
;                   (set! cache (cons (cons x ans) cache))
;                   ans)))))))
;
; (define fibonacci
;   (memoize
;     (lambda (n)
;       (if (< n 2)
;           1
;           (+ (fibonacci (- n 1)) (fibonacci (- n 2)))))))
;
; (fibonacci 100) ; => 573147844013817084101
; (fibonacci 20)  ; => 10946
(define-macro (memoize func mem-func)
  (set (sym mem-func mem-func)
    (letex (f func m (sym "mem" mem-func))
      (lambda ()
        (or (and m (lookup (args) m))
            (last (push (list (args) (apply f (args))) m)))))))

(memoize
  (lambda (n)
    (if (< n 2)
      1
      (add (fibonacci (sub n 1)) (fibonacci (sub n 2)))))
  fibonacci)

(fibonacci 100) ; => 5.73147844e+020
(fibonacci 20)  ; => 10946
Thank you very much, Lutz :)
Bertrand
I added the context based memoize function here:
http://www.newlisp.org/CodePatterns.html
and here:
http://www.newlisp.org/index.cgi?page=Code_Snippets
and with the order of arguments swapped for more readable, direct
embedding of recursive lambda functions as in:
(memoize fibo
  (lambda (n)
    (if (< n 2) 1
      (+ (fibo (- n 1))
         (fibo (- n 2))))))
This puts the name of the new function on top, where it is easier to recognize.
Lutz
That's neat! Thanks for mentioning my analysis (I've updated it to be more accurate btw... woke up this morning to the stark realization that I'd made a mistake! :-O)
A cool thing about the (delay) function is that if you want control over how things are memoized based on the function arguments (for example if one of the arguments is irrelevant), you can define your own custom version of the memoized function and it will be used instead. E.g.:
(define (my-func-memo:my-func-memo arg1 arg2)
  ; memoize based on arg1 only
  )
AFAICU (as far as I can understand),
(define-macro (memoize func mem-func) ...)
defines a memoized version of a function.
As always, I am curious. My question is whether it is possible to define something like:
(define-macro (memoize func) ...)
(define-macro (dememoize func) ...)
so that you can make a function memoized without creating a new one. By reusing the same name, existing user code will start working with the memoized function without any change to the source.
And for symmetry, dememoization means recovering the original function.
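It should be possible along these lines. Here is an untested sketch in the spirit of Lutz's macro; memoize! and dememoize! are hypothetical names, and like the original it treats a nil result as a cache miss:

(define-macro (memoize! fname)
  (letex (f fname
          orig  (sym (string fname "-orig"))
          cache (sym (string fname "-cache")))
    (begin
      (set 'orig f)     ; stash the original lambda (don't memoize twice!)
      (set 'cache '())  ; fresh cache
      (set 'f (lambda ()
        (or (lookup (args) cache)
            (last (push (list (args) (apply orig (args))) cache))))))))

(define-macro (dememoize! fname)
  (letex (f fname orig (sym (string fname "-orig")))
    (set 'f orig)))     ; put the stashed original back

After (memoize! my-func), existing callers pick up the caching version automatically, because they go through the symbol my-func; (dememoize! my-func) restores the original.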
Quote from: "Fanda"
My expectations were quite high, but there are some good things to look at ;-)
I believe that for example Clojure innovates more:
http://clojure.sourceforge.net/
Fanda
Some Clojure/newLISP comparisons:
;;; Clojure ;;;
; user=> (def x 5)
; user=> (def lst '(a b c))
; user=> `(fred x ~x lst ~@lst 7 8 :nine)
; (user/fred user/x 5 user/lst a b c 7 8 :nine)
;;;
(set 'x 5)
(set 'lst '(a b c))
(println (flat (list 'fred 'x x 'lst lst 7 8 'nine)))
;;; Clojure ;;;
; (let [[a b c & d :as e] [1 2 3 4 5 6 7]]
;   [a b c d e])
;
; => [1 2 3 (4 5 6 7) [1 2 3 4 5 6 7]]
;;;
(bind (map list '(a b c d e) '(1 2 3 (4 5 6 7) (1 2 3 4 5 6 7))))
(println (list a b c d e))

(local (a b c d e)
  (map set '(a b c) '(1 2 3))
  (set 'd '(4 5 6 7))
  (set 'e (flat (list a b c d)))
  (println (list a b c d e)))

(let (a 1 b 2 c 3 d '(4 5 6 7) e (flat (list a b c d)))
  (println (list a b c d e)))
;;; Clojure ;;;
; (let [vec [1 2 3 4]
;       map {:fred "ethel"}
;       lst (list 4 3 2 1)]
;   (list
;     (conj vec 5)
;     (assoc map :ricky "lucy")
;     (conj lst 5)
;     ; the originals are intact
;     vec
;     map
;     lst))
; => ([1 2 3 4 5] {:ricky "lucy", :fred "ethel"} (5 4 3 2 1) [1 2 3 4] {:fred "ethel"} (4 3 2 1))
;;;
(let (vec  (array 4 '(1 2 3 4))
      asso (list '(fred "ethel"))
      lst  '(4 3 2 1))
  (println (list
    (append vec (array 1 '(5)))
    (append asso (list '(ricky "lucy")))
    (append '(5) lst)
    ; the originals are intact
    vec
    asso
    lst)))
;;; Clojure ;;;
; ; cycle produces an 'infinite' seq
; (take 15 (cycle [1 2 3 4]))
; => (1 2 3 4 1 2 3 4 1 2 3 4 1 2 3)
;;;
(println (array 15 (sequence 1 4)))  ; array repeats the initializer cyclically to fill 15 slots
;;; Clojure ;;;
; (defn my-zipmap [keys vals]
;   (loop [map {}
;          ks (seq keys)
;          vs (seq vals)]
;     (if (and ks vs)
;       (recur (assoc map (first ks) (first vs))
;              (rest ks)
;              (rest vs))
;       map)))
;
; (my-zipmap [:a :b :c] [1 2 3])
; => {:b 2, :c 3, :a 1}
;;;
(println (map list '(a b c) '(1 2 3)))
;;; Clojure ;;;
; (re-seq #"[0-9]+" "abc123def345ghi567")
; => ["123" "345" "567"]
;
; (re-find #"([-+]?[0-9]+)/([0-9]+)" "22/7")
; => ["22/7" "22" "7"]
;;;
(println (find-all "[0-9]+" "abc123def345ghi567"))
(println (regex "([-+]?[0-9]+)/([0-9]+)" "22/7"))
;;; Clojure ;;;
; ; Mutual recursion example
;
; ; forward declaration
; (def even?)
;
; ; define odd? in terms of 0 or even
; (defn odd? [n]
;   (if (zero? n)
;     false
;     (even? (dec n))))
;
; ; define even? in terms of 0 or odd
; (defn even? [n]
;   (if (zero? n)
;     true
;     (odd? (dec n))))
;
; (even? 3)
; => false
;;;
(define (odd? n)
  (if (zero? n)
    nil
    (even? (dec 'n))))

(define (even? n)
  (if (zero? n)
    true
    (odd? (dec 'n))))

(println (even? 3)) ; => nil
;;; Clojure ;;;
; (defn argcount
;   ([] 0)
;   ([x] 1)
;   ([x y] 2)
;   ([x y & more] (+ (argcount x y) (count more))))
;
; (argcount)           ; => 0
; (argcount 1)         ; => 1
; (argcount 1 2)       ; => 2
; (argcount 1 2 3 4 5) ; => 5
;;;
(define (argcount)
  (case (length (args))
    (0 0)
    (1 1)
    (2 2)
    (true (length (args)))))
; or: (true (+ (argcount (args 0) (args 1)) (length (2 (args)))))

(println (argcount))           ; => 0
(println (argcount 1))         ; => 1
(println (argcount 1 2))       ; => 2
(println (argcount 1 2 3 4 5)) ; => 5
etc., etc.
No comments, except that I do not like the use of [] and {} in Clojure.
newLISP seems to me easier, simpler, and more consistent.
Regards,
Bertrand
The crazy tester struck again ;)
That's very informative - there's nothing like seeing the two side by side to get a feeling for the fundamental style of the languages.
Quote from: "newBert"
Quote from: "Fanda"
My expectations were quite high, but there are some good things to look at ;-)
I believe that for example Clojure innovates more:
http://clojure.sourceforge.net/
Fanda
Some Clojure/NewLISP comparisons :
I do not quite understand, though: how can one compare a dialect of Lisp written on top of the JVM, allowing easy importation of Java libraries into the Lisp code/framework, with a dialect of Lisp written in C, allowing very simple importation from any C library?!!
It's obvious that the second one is preferable and will win by a huge margin, as Java is slow. Java is as slow as the "big Lisps" are, and even more so in actual practice; see comparisons here:
http://www.flownet.com/gat/papers/lisp-java.pdf
and discussion of comparison tests here:
http://www.alh.net/newlisp/phpbb/viewtopic.php?p=12050&highlight=#12050
Especially in view of the fact that newLISP exists on almost all platforms, is roughly 200 kB, and can create fully standalone tiny programs without dependencies, allowing radically new solutions (many tiny fully-featured standalones) in lots of cases?!
Why even talk about CJ, even if it might be an interesting toy to play with, and to compare how two different designers took their decisions when developing these Lisps..
Quote
I do not quite understand, though: how can one compare a dialect of Lisp written on top of the JVM, allowing easy importation of Java libraries into the Lisp code/framework, with a dialect of Lisp written in C, allowing very simple importation from any C library?!!
It was just a stylistic exercise, just to see how we could finally do the same thing in newLISP :)
It allowed me to use newLISP functions that I had not yet had occasion to investigate... I learned new things about newLISP by straying a little towards other horizons ;)
Now I know that newLISP is not comparable to Clojure, because I have practised the latter a little. I prefer to check for myself rather than give credit to prejudices.
And then, I like to test, compare, and browse :D
Curiosity is not such a nasty fault after all.
I do the same thing with CLisp, Scheme, and Python vs. newLISP, and I discover plenty of interesting things and ideas.
Quote
Why even talk about CJ, even if it might be an interesting toy to play with, and to compare how two different designers took their decisions when developing these Lisps..
... just to highlight NewLISP ;)