Messages - unixtechie

#16
http://www.livejournal.com - LiveJournal - plays a unique role in the Russian-language blogosphere. Unlike the large US blog services, where the majority of users are teenagers (the US section of LJ averaged about 19 years of age some time ago), the Russian LJ contains people of all ages, and most popular book authors, journalists and even politicians consider it necessary to have a presence there: it gives access to the widest real audience from all parts of the country, and to the émigré community, in one swoop.



Where does newlisp come in? Easy. One of the bloggers used newlisp to do estimations of election vote counts (and, being someone with somewhat extremist anti-Russian views, he naturally attempts to interpret a very vague statistic as "proof" of vote rigging, as his kind are prone to do, I'd say).



The interesting part is that his _technical_ (i.e. non-political) post on how he constructed a simple web-page crawler - it downloads 2 levels of referred pages and then cuts out the necessary results to export into a processing app - _got 388 responses_ with all kinds of comparisons (to 2 versions of Perl scripts, shell, Python, and even a version in Haskell) and technical discussion.



His post at

http://fritzmorgen.livejournal.com/248335.html?format=light

describes his crawler in detail, giving a piece of code that is commented and explained line by line, thus providing a sort of intro to newlisp, which stirred a surprising degree of enthusiasm.
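
For readers who do not read Russian, here is a minimal sketch of what such a two-level fetch looks like in newlisp. This is NOT the author's script: the start URL and the regex patterns are invented for illustration, and the pages are assumed to carry absolute links.

    ; illustrative sketch only - invented URL and patterns, not fritzmorgen's code
    (set 'start-url "http://example.com/results/index.html")
    (set 'page (get-url start-url))

    ; level 1: collect the href targets from the index page
    (set 'links (find-all {href="([^"]+)"} page $1 0))

    ; level 2: fetch every referred page and cut out the number we need
    (dolist (lnk links)
      (set 'sub-page (get-url lnk))
      (if (find {total votes:\s*(\d+)} sub-page 0)
          (println lnk "\t" $1)))      ; tab-separated, ready for export to a processing app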



This is not the first exposure of newlisp in the Russian LiveJournal, but most mentions so far have run into sniggering.

So probably what is needed for "spreading the word" is simply small, concrete snippets of code which are thoroughly explained. When people see how well-working, minimalistic solutions can be created, they do not fail to form a favourable view of the language. It is probably the generalizations and the talk "about" the language that mostly repel readers and make them throw stupid generalities back.



The post is in Russian. The complete snippet can be seen here:

http://metasatanism.ru/FILES/Linux/election.lsp  /* Linux */

http://metasatanism.ru/FILES/election.lsp /* Windows */



Alternatives:

http://hpaste.org/fastcgi/hpaste.fcgi/view?id=11008  /* Haskell */

http://vozutat.livejournal.com/11098.html /* Python */

http://what-me.livejournal.com/6815.html   /* a sort of Perl */

http://nponeccop.livejournal.com/152013.html?thread=776141&format=light#t776141 /* Haskell and Bash in the comment below */
#17
Whither newLISP? / C from newlisp?
October 07, 2009, 03:02:15 AM
I saw some quoted discussion in Kazimir's blog, and there's something maybe relevant to the wider topic of how to use C from scripting languages in general, including newlisp.



There is an excellent open-source project called "tcc", a tiny C compiler. The binary is slightly over 100k (123k on my platform).

It understands all of ANSI C plus some extensions; it can compile libraries or standalone executables, and it can be used for "scripting" if the first line of the file is a shebang invocation of #!/path/to/tcc (plus some options, e.g. library includes).

TCC also contains an assembler.



The main point is that it runs roughly an order of magnitude faster than GCC (the ratio was 9x in a test compilation of the source of the links web browser, if I am not mistaken).



It also compiles on Windows, i.e. it's cross-platform.



HOME PAGE: http://bellard.org/tcc/



--------------

THEREFORE there are basically several options if one wants to use C or even assembler from a scripting language.



(a) write your C, then invoke "tcc -run", piping the output back into your script.

"tcc -run ......" will compile it on the fly and not create an "a.out".

Alternatively, use a simple wrapper that checks whether your inline code has already been compiled and reuses the generated binary, saved as a small file, so the compilation is not repeated on subsequent runs.
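
A minimal sketch of option (a) as seen from a newlisp script; the file name and the C payload are made up, and exec simply captures the program's stdout:

    ; write a tiny C helper, run it with tcc -run, read its output back
    (write-file "/tmp/sum.c"
      "#include <stdio.h>\n#include <stdlib.h>\nint main(int argc, char **argv)\n{ printf(\"%d\\n\", atoi(argv[1]) + atoi(argv[2])); return 0; }\n")

    ; exec returns the command's stdout as a list of lines
    (set 'result (exec "tcc -run /tmp/sum.c 2 40"))
    (println (first result))    ; -> "42"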



(b) write your C, then compile it with tcc as a library; use the newlisp built-in import function to load the tiny lib you created on the fly and talk to it using newlisp facilities.
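
Again a sketch with invented names: on Linux tcc can emit a shared object with -shared, and newlisp's import can then load and call it (arguments and return value treated as plain integers here):

    ; compile a one-function library on the fly and import it
    (write-file "/tmp/mylib.c" "int add_ints(int a, int b) { return a + b; }\n")
    (exec "tcc -shared -o /tmp/mylib.so /tmp/mylib.c")

    (import "/tmp/mylib.so" "add_ints")
    (println (add_ints 3 4))    ; -> 7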



(c) TCC itself can be compiled as a static libtcc.a

Its APIs are outlined in its header file. It is possible, generally speaking, to produce an extended version of newlisp with this lib compiled in (just the way a library that implements httpd or some hashing is compiled into it).

I do not believe it is the best way, though, because of the need to learn a whole bunch of API functions, and because, while not fattening the newlisp binary that much, such an add-on would prevent newlisp from remaining a standalone executable, as it would tie it to some other files (e.g. headers), i.e. it would require a "system installation". The lack of dependencies is one feature that makes newlisp drastically different from (and better than) most scripting languages, in my view.



The amount of wrapping of the C code to be used with such a tiny, fast, on-the-fly compiler should be negligible, and I would say that in most cases of practical use the need to run an extra process for the C sections will not affect the usability of the script.



Two more major points:

- tcc is so small, it generates straightforward output in milliseconds (5-8 milliseconds for something like the Fibonacci test program). One CAN use that for DYNAMIC generation of C code from your script, not only for pre-compilation of some static parts of a program.

- tcc can help in using external libraries which are difficult to use from newlisp itself directly. One can write a simple wrapper in a few lines which will present the result of invoking the library functions in a form convenient for passing back to newlisp (e.g. as a string or some list, whatever).





----

There is another project (I'll check the name and add it) which emulates the same scripting approach. The "script" in C is in fact passed to the full GCC compiler on the first invocation, and the compiled a.out is called on subsequent ones.

This of course is (a) slower and (b) much heavier on I/O, at least.



So I believe the TCC road - using a blazingly fast C compiler, which can of course link against any existing C libraries, to write convenient wrappers wherever newlisp operators run out and/or to write pseudo-inlined sections of C or assembler code which can then be used from the scripting language - is practical and will cover most real-life uses.



-----

P.S. Python people already went that way, as a matter of fact:

http://www.cs.tut.fi/~ask/cinpy/

"Cinpy is a Python library that allows you to implement functions with C in Python modules. The functions are compiled with tcc (Tiny C Compiler) in runtime. The results are made callable in Python through the ctypes library."
#18
Whither newLISP? / Pico Lisp and Alexander Burger
October 05, 2009, 12:26:34 AM
There is another answer, given by Alexander Burger, the creator of another tiny, fast micro-Lisp called "Pico Lisp" (or PicoLisp); its home page is at http://www.software-lab.de (click on "download").



In his paper called

"Pico Lisp - A Radical Approach to Application Development"

(the PDF is here: http://www.software-lab.de/radical.pdf )

Alexander looks at a number of common assumptions and re-thinks them. Number one there is "Lisp needs a compiler".



Have a look at it, there's something in that, I think
#19
newLISP newS / Git is great
August 18, 2009, 11:15:52 AM
Oh, Git is great.

It's fast too. And sort of generic - one can think of many uses, it behaves as a kind of "filesystem" you can control with command-line utilities.

At one time I experimented with using Git as my back-end versioned storage for a p2p blog system. I still believe it can be used like that and can feed a moderately active web site (tens of requests per second, at least).



Actually, I've seen a couple of other people coding web apps with Git as a backend, in place of a database or filesystem storage.
#20
newLISP newS / Thanks
August 16, 2009, 11:04:50 PM
Thanks.



Even if one does not hack the source, Git version control may help in tracking other, more user-side changes, such as changes in documentation between versions, or modifications to newlisp operators and their options, etc.
#21
The really interesting topic today that is related to the GPL is the inclusion of Mono in a Linux distribution (Ubuntu). Richard Stallman warned of Greeks bearing gifts, and is being bashed by Microsofties and various accompanying softbrains.



In fact all issues with GPL are political in nature.



1. Copyright and the terrorism of "intellectual property" originate from Corporations becoming transnational and pursuing deindustrialization of the US/metropole while simultaneously outsourcing production to other places with cheap, slave-like labour.

If classical capitalism kept production of, say, English textiles in England, murdering Indian weavers with its finished-product exports, today even manufacture has been outsourced. This means the only way for Corporations to claim their huge share of profits is to assert that they have nebulous "intellectual rights" to what has long been handed over to other peoples.

This may sound rather harsh, but other views of this issue (more common in the mass media) contain a large dose of propaganda: take the example of the music industry, which gobbles up something like 80% of the profits but purports to speak in the name of "starving authors".



2. The GPL is first and foremost a political tool - the work of a genius - which allowed something unprecedented: the construction of a non-parasitic, non-capitalist model of creation and production, along the lines of the ancient "community" or "tribal" efforts, the model of mutually helping and mutually benefiting individuals. If I save tens of thousands with, say, the GCC toolchain plus an OS and utilities, I will give back something of my efforts.



3. The danger of the GPL is well recognized by the parasites, the Corporations. The essence of the corporate world order, so to say, is to create economic parasitism and a deflection of profits from the natural, technological flow of activities. Therefore some time ago an "Open Source" movement sprang up from unknown sources to add non-GPL licenses which would neutralize exactly the hated "viral" nature of the GPL. Many do not recognize to this day that the idea behind that was to kill the GPL, not to expand the falsely named "Open Source" movement.



4. Slightly later this effort at interception and corporatization of the GPL continued with corporations jumping on the bandwagon and creating "Linux Foundations", or purchasing some brands and whole distros (e.g. SUSE by the hated Novell, which immediately began to cooperate with Microsoft).



Ubuntu belongs to this trend. It was set up as a parasitic add-on to create an MS-like GUI environment on top of the real workhorse distribution, the community-created Debian. It was set up by a shady character, some Mark Sh., who suspiciously got so rich in South Africa that he could treat himself to flying as a space tourist.

Then suddenly he turned to parasitizing on Debian, and very, very quickly Ubuntu (with the insanely infantile "Hardy Herons" and "Dapper Drakes") became virtually the main distro for the mass media and the lemmings. Who paid for the ad campaign? Why?

What real agenda is pursued by people like that - money, or something like putting backdoors into precompiled kernels, or simply subverting Linux - I do not know. But this is not good, real, honest work in my eyes.



5. Now the Ubuntu czar, this very Mark Shuttleworth, if I got the story correctly, has blessed Mono for bundling with their Linux distro, and for the sake of truly unnecessary "personal productivity", i.e. trifling applications. And the lemmings squeak indignantly when RMS warns of the possible trojaning of Linux with MS technologies, as there is a danger of MS later claiming some patent rights or imposing restrictions or incompatibilities.



6. Another huge topic is the subversion of personal computing and the taking away of ownership of data from the individual user. A whole strategy exists there, and MS is pushing it with all its might. Again, the Open world stands in a natural opposition that stems from its technological nature; again RMS warned of "surreptitious uses of computers", and again he is not heard.



7. So basically the question of the GPL comes down to the two fundamental views of life and work: that of the robbing or swindling of everyone by everyone, which is known under the euphemism of the "market" today (for the extremists who came after WWII everything is a market, and generations of Westerners got their brainwashing from early childhood) -- or the cooperative work of members of a community.

The genius of the GPL's creators was that they managed to engender the second from within the first, and to protect their work from greed and dishonesty by wisely turning the legal mechanisms of the parasites against the parasites themselves. And this spark of humanity is what must be cherished in the increasingly oppressive world of corporate jack-booting and blackmail.



These are the issues at stake with GPL.
#22
Whither newLISP? /
July 01, 2009, 10:41:18 PM
Lutz, thanks for the explanations.



The "lazy" example from Perl will produce ONE, first match and stop. A second rerunning will produce the SECOND match and stop. And so on..



To get the "lazy" behaviour, the modifications of "find" and "regex" you suggested may be exactly what is needed (if internally the offset is remembered as some kind of pointer and not recalculated every time, which I believe is the case)





Why such "lazy" behaviour may be important?



This is the task I was thinking about when I came to consider "lazy" processing of data buffered in strings.



There are plenty of "relational database" systems, from the smallest ones like SQLite to behemoths. By their very design, however, they are optimized for writing data (roughly equivalent to "append whole lines to an existing file"), not for reading it efficiently.

The reason for the inefficiency is that reading a relational db drags along the whole table of data (and often even scans through all of it), while the query may in fact affect only a few columns. This eats up memory and slows the thing down.



So more recently "column-oriented" databases were "invented". There are free and practical ones, too: e.g. one research column-oriented db, a university project released as free software, is comparable to MySQL. A direct comparison to MySQL using the TPC-H benchmarks shows something like an order-of-magnitude speedup.



Existing relational dbs cannot simply be adjusted to operate on columns - the first idea would be to create tables exactly one or two columns wide, and then adjust the necessary queries. Yes, this would eliminate going through table data that is irrelevant for a given query, but internal processing tuned to wide tables makes these operations very slow. I tried it, was disappointed, and then found a paper on the Net in which well-known university people had done exactly the same tests, only they were thorough, finished them, and published the completed results, with the same disappointing conclusion.



So database vendors began to produce "column-oriented back-engines", as is the case with MySQL, for example. You install that add-on, then create tables of a new type. This still presents the customary SQL interface to the user/programmer, but internally it operates on columns, sometimes drastically cutting memory use and processing time.



The question is: can column operations on data be modelled with shell, awk, Perl or newlisp? - you bet they can, and all those "selects" and "joins" are not conceptually that difficult. They are just a script per operation, and not a very large one. I think it would be very interesting to see how such processing compares to relational MySQL and to specialized column-oriented engines. I expect the scripting solution to give a speedup over traditional relational processing on TPC-H benchmark data.
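
To make that concrete, here is a toy "select one column, join another by row number" in newlisp. The column files and the predicate are invented, and everything is slurped into memory, so it only shows the shape of such a script (the buffered variant is the subject below):

    ; toy column query: print price where the name column matches "widget"
    ; assumes two column-files with one value per line, in the same row order
    (set 'names  (parse (read-file "name.col")  "\n"))
    (set 'prices (parse (read-file "price.col") "\n"))

    (dotimes (i (length names))
      (if (find "widget" (names i))              ; the predicate touches only the name column
          (println (names i) "\t" (prices i))))  ; same row number picked from the price column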





But when you deal with tens of megabytes of data in one column-file, which lives inside a directory on a regular Unix filesystem, and you wish to process it - depending on the find or match in one column you'd pick a line from another column-file - you run into a situation where you cannot slurp the whole column-file into memory: there is not enough memory.

You do not want to process it while reading directly from the file - that would be too slow.

Then one might try to buffer the data, reading it from the disk in large enough chunks and processing in-memory as buffered strings.

Newlisp does not do efficient built-in buffering for I/O (Perl, for example, does, but then lies to the user, who is presented with language operators that appear to do line-by-line processing, etc.). I'd have to set up the buffering manually.
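
Something along these lines is what I mean by manual buffering: a rough sketch only, reading the column-file in chunks with read-buffer and carrying the tail of each chunk over to the next, since a chunk boundary will usually cut a line in half. The chunk size is arbitrary and process-line stands in for the real work:

    (set 'fh (open "big.col" "read"))
    (set 'leftover "")
    (while (read-buffer fh 'chunk 1048576)        ; nil at end of file ends the loop
      (set 'lines (parse (append leftover chunk) "\n"))
      (set 'leftover (pop lines -1))              ; the last piece may be an incomplete line
      (dolist (line lines)
        (process-line line)))                     ; placeholder for the real per-line work
    (unless (empty? leftover) (process-line leftover))
    (close fh)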



And that was when I came to the next problem - if processing is "greedy", I can still do such buffered processing, but can I easily "ping-pong" to other columns to do something with them (just pick up the line with the same number, in the simplest case) at each match?



Of course, there may be ways to code around the limitation with greedy processing of chunks of a certain standard size, but I believe the "lazy" technique, one result per request, comes naturally in the context of such a script.



Of course there may be other cases when "lazy" processing of strings or streams would come in handy.
#23
Whither newLISP? / to Kazimir, Lutz
July 01, 2009, 07:41:52 AM
Lutz: if I got it right, there is no way to call "replace", get the first match, then make the same invocation a second time and get the second match, etc. - i.e. "replace" will provide ALL of the matches in a list, not one by one on demand. As far as I understand it, I'd need "find" and sub-indexing to obtain my matches explicitly one after another, and this is very slow.



Kazimir: no, efficiency.

I tried to search with "find" (or regex or something; I tried several operators), then get the position - and next time do "find" (or something) on a sub-indexed string, trying to ensure no copying is done.

This is terribly inefficient. In place of a stream, I tested it on a large enough file (looking for "species" in Darwin's book as the stop word), and this was the slowest version, compared even to "search" on the file, or to greedily obtaining a list of all matches and dealing with it later.

I'll try to find that snippet (or redo the test) later.



This is how one could do this lazy iteration over strings with REGEXPs in Perl:
PERL CODE (from documentation):
In a coding region, codons are 3-letter sequences, so we can think of the DNA snippet as a sequence of 3-letter records.
    # expanded, this is "ATC GTT GAA TGC AAA TGA CAT GAC"
    $dna = "ATCGTTGAATGCAAATGACATGAC";
...............................
The solution is to use \G to anchor the match to the codon alignment:
    while ($dna =~ /\G(\w\w\w)*?TGA/g) {
        print "Got a TGA stop codon at position ", pos $dna, "\n";
    }

I.e. the "ancor" G starts new lookup from the place where the last one stopped, and "pos()" function reports the position (and can be assigned to to reset it)



Instead of "while" one could issue the match line once, get the first match, then a second time etc.

This seems different from the built-in processing of operators like "replace" for strings in newlisp - that one gobbles all up and produces a list of all matches, if I understand it right.



It might be that the PCRE library provides all of that under newlisp; I do not know yet.



But the idea is that "lazy" processing of streams or strings is very convenient if you wish to do coroutines (ping-ponging processing between two functions), or if your input is too large to fit into memory at once, so you set up in-memory buffering of parts of it (and hope to increase speed with this technique), or if you are dealing with a stream that is "infinite" in practical terms.
#24
Programming language operators are, as a rule, very inconsistent. Some return meaningful values, others exist for "side effects"; some are greedy, others are lazy or can be made lazy, etc.



Functionally aware languages, such as Lisps and newlisp in particular, are great; they have a nice uniformity.



However, there is one task I do not seem to be able to solve easily. This is lazy iteration over strings.



That is, for a great number of tasks I want the ability to do with strings in memory exactly what I can do with newlisp "search" and files: lazy processing.

Search gives me some result and advances the offset in the file (which I can read and use later, if I need it). The next invocation of the same search, say in a loop, will give me the second match and I can pick its offset.

The file can be arbitrarily large, but the operator is not "greedy": it does not try to process all of it. It is "lazy"; it yields a result only when asked to.
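
For comparison, this is the file behaviour I mean; a small sketch using search on an open file handle (the true flag is meant to leave the file pointer after the found string so the loop advances - I believe that is the flag's role, but check the manual):

    ; lazy matching against a file: each search continues from the current file pointer
    (set 'fh (open "origin_of_species.txt" "read"))
    (while (set 'pos (search fh "species" true))
      (println "found a match, file position now " pos))
    (close fh)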



I do not seem to be able to do the same easily and efficiently with strings in memory. E.g. if I read that file (or a part of it) into memory, I do not know how to iterate over the string in such a way that a second invocation of the same matching operator would yield the second match and advance the offset, which I'd be able to pick up.



My attempts at constructing such behaviour from existing newlisp primitives were clumsy and terribly inefficient (e.g. there is an operator that will give only the first find, but then truncating the string is very inefficient, etc.).



Am I missing something? Is there a way to do efficient processing of largish strings in memory a la files, in the lazy manner?
#25
err.. if you stop keeping it all as lists and instead create a hash with the first member as its key and the rest as the list it refers to, you could use the fact that the red-black trees used for hashes in newlisp are implicitly ordered.



Then iterating over them will give you a numerically growing sequence - and so you won't need to go through all your data. You will just compare until the elements become bigger than the given value (and/or compare them to the given value plus or minus the "maxdeviation"), then stop, as you have got your solution.



This should make this snippet faster.
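
A sketch of the idea, assuming the usual hash idiom: (define Idx:Idx) creates a dictionary backed by newlisp's red-black tree, (Idx key value) stores a pair, and calling (Idx) with no arguments returns the whole association list in key order. Keys are zero-padded so that the tree's string order matches numeric order; the sample data, target and maxdeviation values are invented:

    (define Idx:Idx)                               ; dictionary backed by a red-black tree

    (dolist (rec '((17 "a") (35 "c") (120 "b") (240 "d")))
      (Idx (format "%08d" (rec 0)) (rec 1)))       ; zero-padded key -> value

    (set 'target 100)
    (set 'maxdev 30)

    ; walk the keys in ascending order and stop as soon as one passes target + maxdev
    (catch
      (dolist (pair (Idx))
        (let (n (int (pair 0) 0 10))               ; base 10, so leading zeros stay decimal
          (if (> n (+ target maxdev)) (throw 'done))
          (if (>= n (- target maxdev))
              (println n " -> " (pair 1))))))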
#26
Whither newLISP? /
June 14, 2009, 12:03:49 PM
.. and by the way, the deep psychological reason - the need to make languages as natural (i.e. as close to human language) as possible - is the cause of my suggestion that the documentation of programming languages be rewritten WITH NATURAL PHRASES.



"Old style" (think of 1970s, the tradition is well alive today) when instead of giving a practical guidance to a programmer, the manuals would introduce a new notation, then describe the language operators in that notation:
Quote "Terms and definitions" - "every activation has an activation point, an activation period and activation end"



Note 2: Semantically, the binding relation might be imagined to be materialized in some entity, the binding. Such a binding entity is constructed at run time and destroyed later, or might have indefinite extent.



A defining form is a toplevel special form (see §12.3) that establishes a binding between name and an object which is the result of handling the arguments according to the semantics implied by defining-form-name; it is a violation if a defining form is not a toplevel form.


(extracted from some LISP documentation)

THIS is what makes the language "not widely accepted", not any imagined technical flaws. A human must INTERNALLY TRANSLATE the crap into PRACTICAL terms and instructions, and this work is excruciatingly painful.



The newer Internet generation of the 1990s came up with an immensely better approach to documentation - with practical examples in the first place and generalizations second - and rid readers' heads of the need to retranslate the stuff from governese into human.



However, that is not enough. Each and every operator must be given A PHRASE in a human language which it implements:

Not "cons exp-1 exp-2" -- but

CONS(truct) list_member1 list_member2 ---- return constructed list (member1 member2)

not "find str str" -- but "find this_string inside_this_string" --- return offset_number



and so on



The reasons are psychological - ZERO internal translation must be done by the user, he should get a phrase that he can murmur to himself while "speaking" the operator during programming.



Only by being natural to thinking can languages succeed, because the capacity of the human mind, especially "short-term memory" capacity, is limited - and by disposing of the unnecessary load on thinking we can advance it at virtually no cost.
#27
The most successful programming language (or several of them) will be based on innate patterns of thinking. This is not speculative: those patterns can be observed and picked up from human languages.



Let me explain. Functional programming (the mathematics it is based on being completely irrelevant to its success among human practitioners) uses the same mechanism that is built into human languages:



A man killed a dog.

A middle-aged man killed a pitbull of unknown origin.

A middle-aged man, who was walking in the park at midday, killed a raging pitbull of unknown origin which had previously attacked -- and so on.



The style of functional programming uses substitution for some entities, expanding them from "one word" to "a phrase, the evaluation of which provides that word".



Compare this to "procedural languages". they cannot insert entities into entities, and therefore a programmer  is forced to speak with a series of simplest sentences:



There was a man. A man was middle-aged. The man was walking. Walking was done in the park. ..................



In programming terms, each of these simplest statements brings to life an intermediary variable; then the next statement acts on the produced variable, producing its own intermediary thingy, and so on.



The ability to insert phrases into phrases, which Lisp and functional languages are so good at, RIDS your brain of the preoccupation with juggling all those intermediaries, and so immediately makes you think more easily and better.



That is not all. We also have "object-oriented" semantics, but that one is so unnatural and outrageous I'd be afraid to speak of it myself -- and instead refer you to the (unnecessarily long, but) good essay "Execution in the Kingdom of Nouns". Do read it:

http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html



The language of the future makes thinking easier BY WORKING ACCURATELY with the built-in patterning humans are good at (as can be observed from the structure of "real" human languages), and BY STOPPING the overburdening of humans with housekeeping and intermediate entities, which appear as a result of a language's crude design.



Not only do the built-in language patterns matter - the micro-actions one takes during the act of programming do, too.

"Literate programming", which is a system for processing macros CREATED IN A HUMAN LANGUAGE - literate programming is, in essence, programming IN PSEUDOCODE, familiar to anyone who studied CS and read humanly understandable explanations - is a leading approach among such meta-management aids.



Most programmers use program structure - like subs/procedures - to help their brains "chunk" and process the stuff. Literate programming offloads this organizational level from the code and splits housekeeping from coding, again raising the available "brainpower", because no management tasks need to be tracked at the moment of programming any more.
#28
newLISP newS / Hickup on Linux
June 11, 2009, 10:45:19 AM
Compilation of newLISP 10.0.8 on Slackware 12 with "make linux_utf8" (or make -f makefile_linux_utf8) fails at the linking stage with the default invocation in this makefile.



bash-3.1$ make -f makefile_linux_utf8
gcc -m32 -Wall -pedantic -Wno-strict-aliasing -Wno-long-long -c -O2 -g -DREADLINE -DSUPPORT_UTF8 -DLINUX newlisp.c
.......................
....................
gcc -m32 -Wall -pedantic -Wno-strict-aliasing -Wno-long-long -c -O2 -g -DREADLINE -DSUPPORT_UTF8 -DLINUX pcre.c
gcc newlisp.o nl-symbol.o nl-math.o nl-list.o nl-liststr.o nl-string.o nl-filesys.o nl-sock.o nl-import.o nl-xml.o nl-web.o nl-matrix.o nl-debug.o nl-utf8.o pcre.o -m32 -g -lm -ldl -lreadline -o newlisp
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `PC'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `tgetflag'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `tgetent'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `UP'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `tputs'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `tgoto'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `tgetnum'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `BC'
/usr/lib/gcc/i486-slackware-linux/4.2.3/../../../libreadline.so: undefined reference to `tgetstr'
collect2: ld returned 1 exit status
make: *** [default] Error 1

However, commenting out the default line for the target and uncommenting the one with "-lncurses" produces a successful build.

default: $(OBJS)
#       $(CC) $(OBJS) -m32 -g -lm -ldl -lreadline -o newlisp
#       $(CC) $(OBJS) -m32 -g -lm -ldl -lreadline -ltermcap -o newlisp
        $(CC) $(OBJS) -m32 -g -lm -ldl -lreadline -lncurses -o newlisp
#       $(CC) $(OBJS) -m32 -g -lm -ldl -o newlisp
        strip newlisp

It may be that other Linux distributions are OK with the default, or the reason might be something else.



P.S. It's probably not worth it for newlisp (as Linux is Linux and newlisp is pretty generic), but testing on several different distributions can be done via LiveCDs. A distro on a LiveCD (a very popular thing recently) loads into memory and runs from memory and the CD. This allows one to mount a partition with the software and test without polluting a hard drive with a full-scale installation.

This may be obvious, of course, and not qualify as a real tip. Then just ignore it.
#29
and is it worth announcing it on www.freshmeat.net?
#30
Usually short, very varied, some may be useful: 13 newlisp scripts submitted by people to http://snippets.dzone.com

http://snippets.dzone.com/tag/newlisp

Besides using the "newlisp" tag,  the site can be searched with

http://www.google.com/search?hl=en&as_q=newlisp&as_epq=&as_oq=&as_eq=&num=100&lr=&
as_filetype=&ft=i&as_sitesearch=snippets.dzone.com&as_qdr=all&
as_rights=&as_occt=any&cr=&as_nlo=&as_nhi=&safe=images

(restore the previous as one line without blanks)

This search will provide more newlisp-related material from that site