Messages - Jeremy Dunn

#1
Whither newLISP? /
August 26, 2009, 07:35:18 PM
Here are a couple of ideas you might try to reduce parentheses:



1. Give function names a special form such as all caps so that your translator can easily find them and have the function names double up as the left parenthesis.



2. Let the square brackets [ and ] act as a replacement for double parentheses. For instance, [ is the same as ((. This would enable clearing up long strings of parentheses. This is something that could be useful in any LISP.
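Suggestion 2 is easy to prototype as a preprocessor pass. Here is a quick Python sketch (the function name is mine, purely illustrative) of rewriting the bracket shorthand before the source ever reaches a LISP reader:

```python
def expand_brackets(src):
    """Rewrite the shorthand [ -> (( and ] -> )) in a source string."""
    return src.replace("[", "((").replace("]", "))")

print(expand_brackets("[println (+ 1 2)]"))  # ((println (+ 1 2)))
```

A real version would need to skip bracket characters inside string literals, but this shows the idea.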



Keep us informed of what you come up with!
#2
Whither newLISP? /
June 20, 2009, 01:30:44 PM
Quote from: "Elica"Ah, IMHO programming languages will die within the span of a human life. What I actually mean is that in the future people would communicate with machines (this includes "programming") in a non-verbal and non-language approach. I don't know the details yet (ha-ha!) but it could be something like direct brain-to-machine connection, so machines will be "wired" to us like external limbs...



So in a hundred years the only programming languages one could see would be those in the Museums of Ancient Programming.


I am very skeptical of this. Do you really want someone drilling a hole in your head and making these very delicate connections at your risk given what we know of hospital goofups? And every time a correction or replacement is needed they have to grind and slice you some more? How much of your gray matter are you willing to have them dice open to make you super cyborg? Do you trust these microchips to sit in your head for 60 years and be completely unaffected by the surrounding biomaterial they are embedded in? Oh, and if you have the ability to directly interface with the outside world what is to stop some hacker from downloading some crap onto your chip that makes you have an epileptic seizure every 10 minutes or throw up continually? I think I will let you be the daring test case to lead the way for the rest of us.
#3
Whither newLISP? /
June 16, 2009, 07:46:31 PM
Quote from: "Kazimir Majorinc"And what do you think about these issues, Jeremy?


Well, I am not sure about Unixtechie's belief in using the principles of "natural" language. In natural language we do not make use of parentheses and cannot nest expressions more than a couple of levels without getting lost. Larry Wall tried to use linguistic principles in Perl syntax, but my impression is that it is absurdly top-heavy, with dozens of syntax rules that mere mortals like me cannot keep straight without constantly referring to a crib sheet. The most effective languages seem to use our visual processing rather than the linguistic. Most text editors now use color, as there is a generally acknowledged belief that this aids accuracy and clarity. I believe that code will take more advantage of the visual system and perhaps move into 3D isometric representations. I don't believe this would be just a gimmick; our visual system starves for 3D and it is a powerful method of representation.



There needs to be more of a concerted effort to make software as language neutral as possible. Couldn't a system be developed to specify a language's syntax in terms of a neutral intermediary like Backus-Naur notation? The translator would go from the language to Backus-Naur form and then translate that into code for the computer. This probably would not result in optimum performance, but performance is not everything. There is a need to free programmers from being forced to use languages that they don't want to. I see this as a real roadblock to progress.



We LISPers I think have a love for higher levels of abstraction and that is the fuel for jokes about lispers. I may be wrong but I don't expect to see tremendous progress in new abstractions. From what I see most modern languages seem to reinvent abstractions that were already part of LISP years ago and then act like they were there first.



I think there is something to be said for the Mathematica approach. With a large vocabulary of functions Mathematica is getting big enough to have some claim at being an actual language. At least the user of Mathematica does not have to become a math major to have very complex algorithms in his toolbox. I know Lutz said in one posting that he didn't want to see newLISP become Mathematica, but would that really be so awful?



There seems to be this major divide between object-oriented and functional programming, with people drawing a sharp line in the sand. As I mentioned before, I suspect we do not all think the same. Experiments have shown that some people hear the same sequence of audio tones differently. I think it likely that some people are hardwired for object-oriented thinking and others for functional. There may be no way around this other than having multiple languages. It would be nice if someone could test this hypothesis to see if it is true. The language that is mathematically optimal is likely to be incapable of being grasped by ordinary people.
#4
This post is simply my amateur musings about the future of programming and a request for your thoughts. One of the things I was hoping would improve is that the environment would allow you to write code in any language you wanted, so that usefulness would determine which languages get used. Every application I own locks me into limited languages instead of giving me freedom. Why is this and what can be done about it?



Is Mathematica the future? It is up to 2700+ functions and counting. Is this a new idea to simply grow the language to the point where it starts eating everything around it? There is something to be said for this approach and I think we can agree that Mathematica is a pretty cool product. Is Mathematica going to be like China and quietly grow in the background until everyone realizes too late that they can't stop it?



What is the syntax of the future? Can much more be done? We here obviously prefer LISP syntax because of (to us) its simplicity. Why has no one done objective testing of different languages to determine which ones are most difficult to learn or apply properly? Why is there nothing but idiotic flame wars on the subject? You would think the software industry would investigate this more with billions of man-hours of work on the line. Is it like music, where one person likes jazz and another prefers classical, with no objective standard? Do two people prefer different syntaxes because their brains actually perceive differently, and each syntax is ideal for that type of thought? Can we show that a given language is better even though another one is an individual's favorite pick?



What trends do you see? What do you think the future of programming will be?
#5
Whither newLISP? / Handy looping function
March 02, 2009, 06:21:41 PM
First the code:



(define (dolist-choice n A B C)
  ; accept either the list itself or the largest index of the list
  (if (list? n)
      (setq n (dec (length n))))
  (eval (if C
           ; three branches: first index / last index / everything else
           (if (zero? $idx) A (= $idx n) B C)
           ; two branches: first or last index / everything else
           ; note (list 0 n) rather than '(0 n), so n's value is used
           (if (member $idx (list 0 n)) A B)
        )
  )
)


This is a special branching function designed to be used within a DOLIST loop. It takes up to four arguments. The first argument is either the list we are looping through or the integer value of the largest index of the list; if we supply the list, the largest index is found for us. A, B, and C are each one or more program statements to be executed. If all three are present, A is executed when $idx is 0, B when $idx is the last index, and C for any other index. If only A and B are present, A is executed when $idx is the first or last index, and B otherwise. I find this one very handy for the toolbox and it can really clean things up - enjoy!
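For readers who don't run newLISP, the same first/last/middle dispatch can be sketched in Python (all names here are mine, purely illustrative):

```python
def position_branch(seq, a, b, c=None):
    """With a, b, c: run a at index 0, b at the last index, c elsewhere.
    With only a and b: run a at either end, b elsewhere."""
    last = len(seq) - 1
    out = []
    for idx, item in enumerate(seq):
        if c is not None:
            f = a if idx == 0 else b if idx == last else c
        else:
            f = a if idx in (0, last) else b
        out.append(f(item))
    return out

print(position_branch([10, 20, 30, 40],
                      lambda x: ("A", x),
                      lambda x: ("B", x),
                      lambda x: ("C", x)))
# [('A', 10), ('C', 20), ('C', 30), ('B', 40)]
```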
#6
newLISP newS /
February 24, 2009, 07:11:10 PM
Kaz, you're so quick on the draw! Thanks for the fast typing. Yes, I agree that overloading can get crazy sometimes, but it is nice to have closely related common operations under one roof. In the case of multiplication, factorials and the product of the elements of a vector share a commonality that I don't think is too far off base. Most of the suggestions involve only a single argument, which should look different enough to alert you that something is going on. But everyone has their own style and preference.
#7
newLISP newS / Add and Mul
February 23, 2009, 07:43:54 PM
Some possible enhancements to add and mul:



ADD



When supplied with a single vector return the sum of all of the elements

(add '(1 2 3 4)) -> 10



When supplied with an integer and T return the continued sum of the digits of the integer.

(add 7898 T) -> 7+8+9+8 = 32, 3+2 = 5
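Reading "continued sum" as repeating until a single digit remains, the proposed behavior can be sketched in Python (my own illustration, not existing newLISP behavior):

```python
def digit_sum_continued(n):
    """Repeatedly sum the decimal digits of n until one digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

print(digit_sum_continued(7898))  # 7+8+9+8 = 32, 3+2 = 5
```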



MUL



When supplied with a single positive integer return the factorial of that integer.

(mul 3) -> 6



When supplied with a single negative integer return the double-factorial of the absolute value of the integer.

(mul -7) -> 105



When supplied with a single vector return the product of the elements.

(mul '(1 2 3 4)) -> 24



When supplied with an integer and T the continued product of the integer digits is returned.

(mul 463 T) -> 4*6*3 = 72, 7*2 = 14, 1*4 = 4
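Taken together, the proposed mul behaviors could be prototyped like this in Python (my own sketch of the suggestion, not existing newLISP behavior; I read "continued product" as repeating until a single digit remains, as with add):

```python
from functools import reduce

def mul_extended(x, continued=False):
    """Sketch of the proposed mul extensions."""
    if isinstance(x, list):                 # vector -> product of elements
        return reduce(lambda a, b: a * b, x)
    if continued:                           # integer + T -> continued digit product
        while x >= 10:
            x = reduce(lambda a, b: a * b, (int(d) for d in str(x)))
        return x
    if x > 0:                               # positive integer -> factorial
        return reduce(lambda a, b: a * b, range(1, x + 1))
    # negative integer -> double factorial of |x|: 7*5*3*1 for -7
    return reduce(lambda a, b: a * b, range(-x, 0, -2))

print(mul_extended([1, 2, 3, 4]))  # 24
print(mul_extended(-7))            # 105
print(mul_extended(463, True))     # 72 -> 14 -> 4
```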
#8
newLISP newS / dup
February 06, 2009, 11:51:07 AM
Lutz,



Why not make DUP take additional multiplier arguments so that

(dup x n1 n2 n3 ....) is the same as (dup (dup (dup x n1) n2) n3)? That way if I want a 3x3 matrix of zeros I can write (dup 0 3 3) instead of (dup (dup 0 3) 3).
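The proposed generalization is easy to model; a Python sketch (the function name is borrowed for illustration only):

```python
def dup(x, *ns):
    """Nested duplication: dup(x, n1, n2, ...) == dup(dup(x, n1), n2, ...)."""
    for n in ns:
        x = [x] * n  # n copies of the current value, like newLISP's dup on lists
    return x

print(dup(0, 3))     # [0, 0, 0]
print(dup(0, 3, 3))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

(For immutable elements like 0 the shared references in `[x] * n` are harmless; a real implementation of mutable rows would copy.)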
#9
newLISP newS /
December 31, 2008, 11:52:45 AM
Sorry Xytotron, there is a typo in your typos! It should be



otherwis -> otherwise



not



otherwis -> otherwize
#10
Anything else we might add? /
December 24, 2008, 01:08:21 PM
Corm,



I like what you posted because it is along the lines of what I was trying to do only developed further. There was also this approach that was posted on the Arc forum that deserves looking at too.



http://www.lisperati.com/arc/regex.html



One of the criticisms I have of what you posted is that I don't like using similar function names for operations that are not the same. When glancing at code casually you should still be able to tell that something different is going on. If I glance at AND and OR in the above code I cannot immediately see that we are actually doing something different. Far better to have something like RGX-AND and RGX-OR, or perhaps $AND and $OR. Similarly, * and + will be read as multiplication and addition. I understand the desire to stay with standard symbols, but a function name should be distinct from other names. If we try to carry over too much of the old, that contradicts our belief that it needs to be changed.
#11
Anything else we might add? / Folding and Such
December 21, 2008, 10:19:29 AM
In our code snippets is a little function for doing alternating addition and subtraction that enables you to write an expression like a+b-c+d.. as



(+- a b c d ...)



which is just a shorthand for (+ (- (+ a b) c) d). One immediately thinks about generalizing this for any two functions A and B to get (A (B (A a b) c) d). But wouldn't it be even more general to allow any number of functions so that we could have expressions like (B (A (C (B (A a b) c) d) e) f)? I then considered that there are really four cases that we might have that are variations of whether we reverse the order of arguments or of the direction we apply the operators in. They are



(B (A (C (B (A a b) c) d) e) f)

(B f (A e (C d (B c (A a b)))))

(A (B (C (A (B a b) c) d) e) f)

(A f (B e (C d (A c (B a b)))))



So I came up with a FOLD function that does all this. One first lists the operators, a comma and then the arguments. The first function is quoted if you want to reverse the arguments and the second function is quoted if you want to reverse the operator order. For instance, the above four cases would be written as



(fold A B C , a b c d e f)

(fold 'A B C , a b c d e f)

(fold A 'B C , a b c d e f)

(fold 'A 'B C , a b c d e f)



Here is the code for doing this


 
(define-macro (fold)
 (local
  (ind funcs funcL vars varL start flg1 flg2 f)
  (setq ind   (find , (map eval (args)))  ;find where the comma is
        funcs (0 ind (args))      ;get a list of the functions
        vars  ((inc ind)(args))   ;get a list of the arguments to process
        funcL (length funcs)      ;get the number of functions
        varL  (length vars)       ;get the number of variables
  )
  ;..if the argument is a list then set vars to the list and process that
  (if (list? (args -1))
      (setq vars (args -1))
  )
  (setq flg1 (symbol? (eval (funcs 0))))  ;set flag to reverse argument order
  (setq flg2 (symbol? (eval (funcs 1))))  ;set flag to reverse operator order
  (when (>= varL 3) ;we must have at least 3 arguments to process
    (setq start (eval (list (eval (funcs 0))
                            (vars 0)
                            (vars 1))))  ;do the first calculation
    ;loop thru the rest of the arguments
    (dolist (a (2 vars))
      (setq f     (funcs (dec (% (if flg2 (- varL $idx) $idx) funcL)))
            start (eval (push (eval f) (if flg1 (list a start)(list start a))))
      ))
    start
  )))


So now we could write (+- a b c d e) or the more general (fold add sub , a b c d e). Another case of repetitive operations is continued fractions. For instance, the simplest continued fraction is the one for the golden ratio (1.618), which can now be represented as (fold 'add 'div , 1 1 1 1 1 1 1). Of course you need a lot of ones to converge on phi. You must remember that all the functions must be able to take a minimum of two arguments.
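To check that the four cases hang together, here is a Python rendering of the fold semantics as I read them above (my own sketch, not the newLISP macro): operators are applied cyclically starting from the innermost pair, rev_args swaps each argument pair, and rev_ops runs the operator cycle in the opposite direction.

```python
import operator as op

def fold(ops, xs, rev_args=False, rev_ops=False):
    """Apply the operators in ops cyclically over xs, innermost first."""
    m = len(ops)
    idx = lambda k: (1 - k) % m if rev_ops else k % m
    acc = ops[idx(0)](xs[0], xs[1])          # innermost application
    for k, x in enumerate(xs[2:], start=1):
        f = ops[idx(k)]
        acc = f(x, acc) if rev_args else f(acc, x)
    return acc

print(fold([op.add, op.sub], [1, 2, 3, 4]))        # ((1+2)-3)+4 = 4
print(round(fold([op.add, op.truediv], [1] * 15,
                 rev_args=True, rev_ops=True), 2))  # ~1.62, the golden ratio
```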



I believe this function is sufficiently general to be a handy one in the toolbox. A caveat: I wanted this function to be able to take a list of arguments like

(fold 'add 'div , (list 1 1 1 1 1)) as well, but my section of code for doing that doesn't seem to work. Can anyone tell me what I did wrong? I have a devil of a time knowing when to EVAL and when not to, to get things to process correctly :-)
#12
newLISP newS / $idx
July 29, 2008, 05:59:26 PM
Now that $idx has been added to all looping structures could it be useful to also have two more? I wanted to suggest having two more locals named $prev and $next that would be defined as



$prev = $idx - 1

$next = $idx + 1



Often in loops it is the previous or next element that I am also interested in; this would provide a convenient way to get at them rather than incrementing or decrementing explicitly.
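A tiny Python sketch of what $prev and $next would buy you, shown here as a three-point moving average over the interior elements (illustrative only):

```python
xs = [3, 1, 4, 1, 5]
smoothed = []
for idx in range(1, len(xs) - 1):
    prev, nxt = idx - 1, idx + 1   # i.e. the proposed $prev and $next
    smoothed.append((xs[prev] + xs[idx] + xs[nxt]) / 3)
print(smoothed)  # approximately [2.67, 2.0, 3.33]
```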
#13
Anything else we might add? / Cyclic Permutation
July 25, 2008, 10:52:03 AM
Greetings all lispers! I have run across a situation that I am sure some of you have too where one has an expression that involves the repetition of a form that has a cyclic permutation of arguments. For instance, suppose we have



(add (mul (sin A)(sin B))(mul (sin B)(sin C))(mul (sin C)(sin A)))



It is clear we are just cycling through our arguments and wasting a lot of verbiage. I would like to create a function that I will call CYCLIC that enables me to write this something like



(apply add (cyclic '(A B C) '(mul (sin *1)(sin *2))))



In the previous we have a list of the actual values '(A B C) that we want to cycle through and then we have the expression to substitute into. The expression has filler variables *1, *2 ...  I am not sure of how to go about this. Should my expression be a quoted symbol or would it be better to have it as a string to chop apart and evaluate? It is the creation of dummy arguments *1, *2 etc that is puzzling me.
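One way to pin down what CYCLIC should compute, sketched in Python (the names and design here are mine, just to make the intended semantics concrete): generate each cyclic rotation of the value list and feed the rotated values to the template as the *1, *2, ... fillers.

```python
import math

def cyclic(vals, body):
    """Call body once per cyclic rotation of vals; the rotated values
    play the role of the *1, *2, ... filler variables."""
    n = len(vals)
    return [body(*[vals[(i + j) % n] for j in range(n)]) for i in range(n)]

# sin(A)sin(B) + sin(B)sin(C) + sin(C)sin(A), without the verbiage
A, B, C = 0.3, 0.7, 1.1
terms = cyclic([A, B, C], lambda s1, s2, s3: math.sin(s1) * math.sin(s2))
print(sum(terms))
```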
#14
Anything else we might add? /
July 18, 2008, 12:17:58 PM
My vote goes for nif. In most languages the negations of or and and are nor and nand; the n is added on the front.
#15
Anything else we might add? /
February 25, 2008, 07:29:04 PM
Elica,



I don't know that I am breaking LISP syntax at all unless you want to narrow it down. I have a function named "nest" that is followed by arguments and is enclosed in parentheses; I believe that is LISP syntax. The fact that I treat the argument : in a special manner may be unlispy to some, but the overall form follows the pattern. Lutz also does this in the DEFINE function, where he lets the comma be just another argument that is treated like I am using my colon. "Unlispy" forms are worth considering if they deal with often-occurring structures in a more general way or provide some kind of conceptual clarity. I think LISP programmers are like abstract artists looking for that ultimate zen form of things. Other programmers just want to write code; lispers want to see the Om :)