Code:
; ?? versions of all of newLISP's boolean functions
(define (array?? x A B) (bool-if array? x A B))
(define (atom?? x A B) (bool-if atom? x A B))
(define (context?? x A B) (bool-if context? x A B))
(define (directory?? x A B) (bool-if directory? x A B))
(define (empty?? x A B) (bool-if empty? x A B))
(define (file?? x A B) (bool-if file? x A B))
(define (float?? x A B) (bool-if float? x A B))
(define (global?? x A B) (bool-if global? x A B))
(define (inf?? x A B) (bool-if inf? x A B))
(define (integer?? x A B) (bool-if integer? x A B))
(define (lambda?? x A B) (bool-if lambda? x A B))
(define (legal?? x A B) (bool-if legal? x A B))
(define (list?? x A B) (bool-if list? x A B))
(define (macro?? x A B) (bool-if macro? x A B))
(define (NaN?? x A B) (bool-if NaN? x A B))
(define (nil?? x A B) (bool-if nil? x A B))
(define (null?? x A B) (bool-if null? x A B))
(define (number?? x A B) (bool-if number? x A B))
(define (primitive?? x A B) (bool-if primitive? x A B))
(define (protected?? x A B) (bool-if protected? x A B))
(define (quote?? x A B) (bool-if quote? x A B))
(define (string?? x A B) (bool-if string? x A B))
(define (symbol?? x A B) (bool-if symbol? x A B))
(define (true?? x A B) (bool-if true? x A B))
(define (zero?? x A B) (bool-if zero? x A B))
(define (not?? x A B) (bool-if not x A B))
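; bool-if itself is not shown in this section; a minimal sketch that is
; consistent with the calls above (my assumption, not necessarily the
; original definition) would simply apply the predicate and return one
; of the two branch values:
(define (bool-if pred x A B)
    (if (pred x) A B))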
The booleans work out nicely from a naming standpoint: all I had to do was add a second question mark on the end to make them distinct. A concern was expressed that treating booleans this way might make code unreadable or confusing to people coming from other LISPs. That may be so, but one might say the same thing about implicit indexing. The point here is that any language has syntax quirks that have to be learned, and there is no point in creating a language if it does not attempt to do one or more things differently to keep advancing better notions. In practice I have tried using these in a code module of mine and didn't find them the least bit confusing, though I am admittedly a biased observer. This approach has the virtue of not forcing you to change your ways while allowing the opportunity for those who are so inclined, just as one is not forced to use implicit indexing but eventually discovers it's awfully nice to have sometimes.
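To show what this looks like in practice, here is a small side-by-side (assuming bool-if behaves as in the sketch above). Note that because the ?? forms are ordinary functions, both branch expressions are always evaluated, unlike the branches of an if:

Code:
(set 'x 42)

; the traditional style still works exactly as before
(if (integer? x) "an integer" "not an integer")    ;=> "an integer"

; the ?? style folds the test and both branches into a single call
(integer?? x "an integer" "not an integer")        ;=> "an integer"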
The make-pass method of nesting I find the most acceptable because it allows brevity for short nestings, and the & on the end of the functions makes it clear that something is going on. I do not see composing being used in the sense that Kaz does it, because it does not have the simple appearance that make-pass does. Kaz's method may be more traditional, but it is too much trouble for short nestings, which are the typical case. I think people will just type (sin (cos x)) rather than go through all the extra trouble just to be traditional; the nesting has to be quite deep before composing gains any advantage in typing.

Now suppose we end up taking the make-pass approach and write (sin (cos x)) as (sin& cos& x): have we gained anything? The total number of characters typed is the same, so we gained nothing in typing. Arguably it looks cleaner, in that the parentheses seem more distracting than the two added &, but it takes a second level of nesting before we gain a definitive advantage. If instead we rewrite the functions to nest as part of their nature, then we can write (sin cos x) and gain an advantage at the first level of nesting. In Paul Graham's Arc language one could write the statement as (sin:cos x), which means typing one less symbol than make-pass. But I think it's even better not to have to type extra symbols at all. A change in syntax could be nice: what if we could write something like (sin cos | x), where the vertical bar delineates the nesting and separates the functions from the arguments? This would make nesting visually distinct, require only one symbol for any depth of nesting, and give an advantage at the first nesting. Of course, only Lutz could make this happen, unless he gives us reader macros to play with.
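To make the make-pass comparison concrete, here is a rough sketch of how &-suffixed pass-style macros could look in newLISP. The sin& and cos& below are hand-written illustrations of the idea only; the actual make-pass code (not shown here) may well be built differently:

Code:
; each &-suffixed macro applies its base function either to its single
; evaluated argument or, when more arguments follow, to the value of the
; rest of the expression treated as another nested pass call
(define-macro (sin&)
    (if (> (length (args)) 1)
        (sin (eval (args)))
        (sin (eval (first (args))))))

(define-macro (cos&)
    (if (> (length (args)) 1)
        (cos (eval (args)))
        (cos (eval (first (args))))))

(sin& cos& 0)    ;=> 0.8414709848, the same as (sin (cos 0))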