Why Lisp Failed

This article investigates why the Lisp programming language is no longer widely used. Long ago, this language was at the forefront of computer science research, and in particular artificial intelligence research. Now, it is used rarely, if at all. The reason for this failure is not age. Other languages of similar heritage are still widely used.

Some "old" languages are FORTRAN, COBOL, LISP, BASIC, and the ALGOL family. The primary difference between these languages is who they were designed for. FORTRAN was designed for scientists and engineers, for whom solving equations on computers was the primary task of programming. COBOL was designed for businesses, aiming to be readable enough that business people could take advantage of the computer age. LISP was designed by computer scientists, and was expressive enough for research into the fundamentals of computation. BASIC was designed for beginners to learn programming. Finally, the ALGOL language was modified by computer programmers, and evolved into a huge family of popular languages such as C, Pascal and Java.

Some of the above languages are no longer quite as popular as they once were. This will be the definition of "failure" we will use here. The question is: why did they fail?

The first stand-out is COBOL. Unfortunately, its design goal of being readable by business people was its downfall. Businesses found that it was possible to hire programmers to look after their computers. Programmers would then gravitate to languages designed for them, rather than for their managers. Thus, over time, more and more business functions were programmed in languages such as VB, C, C++ and Java. Now, only legacy software tends to still be written in the language.

BASIC suffered a different fate. It was the language of beginners. Those just learning to program on microcomputers would use the built-in BASIC language to start off with. As time progressed, microcomputers were replaced by personal computers running Microsoft operating systems, or Macintoshes running Apple's. The language evolved with time, becoming Visual Basic once the desktop paradigm arrived. Since it could be used by those with little programming skill, it replaced COBOL for a while. Why pay for an expensive compiler, if a cheap interpreter that comes with your machine is all you need? Recently, Microsoft has moved to the .NET system, leaving VB behind. Its replacement, C#, is an ALGOL family member closely related to Java.

FORTRAN usage has waxed and waned throughout the years. At one stage, nearly all scientific codes were written in it. Its advantage was that the language had no pointers, and recursion was disallowed. This meant that all data reference locations could be compile-time constants. FORTRAN compilers could use this extra information to make extremely fast programs. Unfortunately, as time progressed, fixed-size arrays became obsolete as data structures. Now, science works with arbitrarily shaped grids, and even more complex representations of the real world. This required the addition of pointers to the language. Around the time that happened, FORTRAN went into decline. Now it is relegated to high performance computing workloads, where the parallel matrix and vector operations recently added to the language still give it the performance edge.

The ALGOL language family succeeded. The reason for this is that these were the languages written by programmers, for programmers.
As time progressed, these evolved into the system and application languages most commonly used today. Their advantage was that the more programmers used them, the more the languages improved, and the more programs were written in them. This provided a virtuous cycle, where more programmers were in turn hired to work on the programs that were written. This is an example of the network effect: the "worth" of a system is proportional to the square of the number of its users, since with n users there are n(n-1)/2 possible pairwise interactions.

So why did the Lisp programming language family end up on the failure side? Some say it is due to the syntax. Lisp is notorious for its parentheses. I do not believe that is the reason. Many users of Lisp say that formatting quickly allows them to match bracket-pairs. Also, soon after the invention of the language, super-brackets were created to quickly close off an arbitrary number of open brackets. This language feature is rarely used today. Finally, syntax-understanding editors have made most of the layout problems of Lisp nonexistent in this age.

Another common complaint against Lisp is that it is a functional language. Could this be the reason for failure? Amongst all the early important languages, it alone is functional in nature. Unfortunately, I don't think reality is this simple. Lisp contains imperative features, and the ALGOL family can be used in a purely functional manner. If one wishes to code to a certain paradigm, certain languages may make that choice easier to make. However, modern languages are flexible enough to support many programming paradigms, and there is no reason a mostly imperative Lisp might not exist.

Perhaps the problem with Lisp was that it used a garbage collector? Again, amongst the early important languages, it alone had one. Garbage collection requires more memory and computational resources than manual memory management. Could the lack of memory and low performance of early computers have held Lisp back enough to slow its adoption? Again, I do not think this was the case. The complex programs Lisp was used to create would require something with the complexity of a garbage collector to be written anyway if they were implemented in another language. The proverbial statement that any sufficiently complex program eventually contains a poorly written implementation of Lisp does hold some weight after all.

The reason Lisp failed was that it was too successful at what it was designed for. Lisp, alone amongst the early languages, was flexible enough that the language itself could be remade into whatever the user required. Programming with the other early languages involved breaking a task into small sub-tasks that could then be implemented. The larger tasks could then be implemented in terms of the smaller ones. Lisp was different: due to its power, a programmer could design a domain-specific language that would perfectly solve the task at hand. Due to the orthogonality of the language, the extensions written would work seamlessly with the core language.

So what is the problem with creating domain-specific languages as a problem solving technique? The results are very efficient. However, the process causes Balkanization: it results in many sub-languages, all slightly different. This is the true reason why Lisp code is unreadable to others. In most other languages it is relatively simple to work out what a given line of code does. Consider the sketch below.
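As a hypothetical illustration of the effect (the names here are invented, not taken from any real system), a three-line macro is enough to turn ordinary-looking forms into a private dialect:

;; DEFINE-STATE-MACHINE stores its body as data rather than code.
(defmacro define-state-machine (name &body transitions)
  `(defparameter ,name ',transitions))

(define-state-machine traffic-light
  (green  -> yellow)
  (yellow -> red)
  (red    -> green))

;; Here (green -> yellow) is neither a call to a function GREEN nor a
;; variable reference; only the enclosing macro gives it meaning. A
;; reader cannot tell without first finding the macro's definition.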
Lisp, with its extreme expressibility, causes problems as a given symbol could be a variable, function or operator, and a large amount of code may need to be read to find out which. The reason Lisp failed was because it fragmented, and it fragmented because that was the nature of the language and its domain-specific solution style. The network effect worked in reverse. Fewer and fewer programmers ended up talking the same dialect, and thus the total "worth" ended up decreasing relative to the ALGOL family.

If one were designing a language now, how could this problem be prevented? If expressibility is the goal of the language, then it must somehow be moderated. The language must have deliberate limitations that allow for readability of code written in it. Python is a successful language where this has been done: some of these limitations are hard-coded, and others exist by convention.

Unfortunately, so much time has passed, and so many Lisp variants have been created, that yet another new language based upon it is probably not the answer. There simply will not be enough users to make a difference. Perhaps the solution is to slowly add Lisp-like features to languages within the ALGOL family. Fortunately, this seems to be what is happening. The newer languages (C#, D, Python etc.) tend to have garbage collectors. They also tend to be even more orthogonal than the older languages. The future may eventually contain a popular language that behaves much like Lisp.
Comments
reader said... Anyway, there is one thing you didn't include: While the popularity of Common Lisp has waned over the years, it is now fluctuating. The essays and books of Paul Graham and others have caused some adoption. Nowadays, a small but active Common Lisp community prospers and provides libraries, documentation and support for newcomers.
The Scheme language is still in use at many universities as a means of teaching "fundamentals of computation" to freshmen.
Very recently, the Clojure language has gained some traction among ALGOL-family programmers. Clojure departs from Common Lisp heritage quite a bit, but offers a rethought and modern Lisp-like language running on the Java virtual machine.
I could not agree more.
Lisp's strength seems to also be its weakness at the same time.
Sad, somehow...
You can argue a lot about the pros and cons of domain specific languages, but the fact is that you decide whether to use a DSL or not. Lisp just makes it easy to develop DSLs, but it does not force you to do so all the time, for all aspects of a problem, etc.
I believe that Lisp failed for only one reason: ignorance! I personally did not know anything about Lisp; I had always been taught and forced to use other ALGOL-like languages for more than 12 years. I discovered it by chance and now I do not want to use the other, inferior languages any more. Why? I would just feel stupid and there is no advantage: rather, it takes more time and money to develop. This is a common situation amongst people who are rediscovering Lisp.
My 1-cent piece of advice: Learn Lisp well, use it for a project, compare the implementation of the same program with others, made in other languages you know and then you will agree with me and thank me forever :-)
"Super-brackets" don't help, as you noted, they're rarely used. But saying that "editors have made most of the layout problems of Lisp nonexistent" is nonsense. The problem isn't *layout* (which a pretty-printer can do), the problem is that a properly-indented programs is still painful to read. Its lack of infix notation - which even tiny BASICs supported - is absurd today. You need to have a *clear* notation when working with other people - Lisp is not that language.
I agree it can't be that it's functional, since it need not be used that way. And it can't be the garbage collector; nearly all modern languages have one. Supporting domain-specific languages isn't a problem; that's a strength, and should have made Lisps *more* popular.
The claim "Lisp failed was because it fragmented" may describe Scheme, but it doesn't describe Common Lisp at all. Common Lisp is quite standardized, and all the different implementations are highly compatible.
You later note that "The language must have deliberate limitations that allow for readability of code written in it." Well, no. You don't need limitations, you need a readable notation. Python is fantastically readable - indentation is enforced, infix is built-in, function call notation is just like math class.
An alternative is to add abbreviations to Lisp readers so that you can optionally use an easier-to-read notation for common conventions. Such abbreviations need to be general and homoiconic; past efforts failed because they didn't meet those criteria.
Please take a look at the "readable" project at:
http://readable.sourceforge.net
You may find that it's possible to have an alternative and better Lisp notation.
Thanks for reading.
What is harder to grasp is semantics.
In my opinion many programming languages have too many features and most programmers fail to understand how even a small fraction of those features is used properly.
Just my opinion...
http://lisp2arx.3xforum.ro/post/21/1/YouTube_Quickhelps/
programming language between C and LISP
You write C/C++ code and LISP code, and the resulting code after the compilation process will be 100% LISP (all C lines will become LISP source lines).
www.youtube.com/watch?v=91mwgtNSIrE
HomePage http://lisp2arx.3xforum.ro
Both Algol and Basic are derived from the Fortran family. I think the Algol family and imperative way was easy for me to learn since I've written Basic in the beginning on my C64 and in that family C was easy since I knew assembly for both x86 and 68000.
When you think in Algol, programming in a Lisp dialect would be very difficult. Though I bet Lisp is easier to learn for an Algol programmer than Haskell is.
In the C-family, for instance, we have blocks of statements in braces '{}' delimited by ';' and a convention to write one statement per line. We have a function call syntax that uses parentheses '()' and separates arguments with ',', and a convention to not split a function call across lines. We have a special syntax for accessing array elements '[]', even though 'a[b]' is equivalent to '*(a + b)'. We have a special syntax for loops with statements within parentheses 'for(;;)'. We have a special syntax for case labels 'case a:', ... I could go on like this for quite a while, but the point is that these diverse syntax elements convey a lot of meaning at one glance. A particular strength of C over languages like Pascal and Fortran is that this syntactic diversity is nevertheless very concise, allowing a programmer to grasp a lot of meaning from a small amount of text.
Lisp, on the other hand, has only one syntactic structure that's used over and over again: the list '(a b)'. The only syntactic variation is when a pair is explicitly used '(a . b)'. That means, whenever a programmer sees something listed within parentheses, he knows nothing about what the code is supposed to be doing. He even has to read the context to know whether this list is interpreted as data or as code. Perhaps it's first interpreted as data, modified further, and then executed as code. Even when he knows that it's actually executed as code, he still has to look at the first element of the list to see whether the code is supposed to implement a loop, a condition, call a function, define a function, declare some variables, ...
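To illustrate the point with trivial examples of my own (nothing here is from the comment above):

;; One shape for everything: each of these is "just a list".
(dotimes (i 3) (print i))    ; a loop
(defun square (x) (* x x))   ; a definition
(square 4)                   ; a function call => 16
'(square 4)                  ; plain data; nothing is called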
Of course, this deficiency can be somewhat mitigated by good formatting, but I for one believe that formatting becomes more effective with more diverse and expressive syntax, and that Lisp failed because it cannot easily convey the structure of the code to the reader.
The power of the LISP is great, but its syntax (lack of) renders it absolutely unreadable. Python is great and even C is *much* *more* readable. No, I shouldn't have to learn to "not see the parenthesis". They just shouldn't be there.
Add two things to the reason LISP failed:
- the explosion of open source, UNIX, and the web, all almost 100% using C. Before that other languages championed by a small or medium sized group might have had a chance.
- the smug superiority complex and unfriendly attitudes of a large number of anonymous, lower-echelon, would-be LISP advocates. It's framed as defense - after all, "LISP is the greatest, blah, blah, why are you hating on ()". Well, now that history has soundly *proved* otherwise, instead of getting defensive, let's just honestly ask ourselves "why?".
For that, asking why, this was refreshing to read! Thanks!
But in Lisp, you are actually reading the AST that represents the code.
> is neither caused by too many parentheses, nor
> by the lack of distinction between operators,
> function, and variables. In my eyes, what makes
> it unreadable is its lack of structural variety.
I'd think this is mostly a training effect. My programming beyond lectures (both computer science lectures and "programming for scientists") was so far mostly restricted to data analysis, but overall I've used some amount of Python and Java. Yet, later I got invested in emacs, and starting from simple customizations of modes I moved on to writing my own libraries. After some time I actually found the structural variety of other languages awfully distracting, even in Python.
Generally I found reading the emacs lisp libraries more feasible than reading another person's Python code. Surprisingly, the absence of a true module system generally helps readability, as it causes each function call or variable access to be clearly marked with the module prefix. Kind of as if Python required fully qualifying the module name for each function call (except that if it did, module hierarchies would have evolved to be much flatter and module names as short as possible while remaining unique).
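A toy example of that convention (hypothetical library name; the same style works in Emacs Lisp and Common Lisp):

;; Every public name carries its library prefix, so each call site
;; documents where the function comes from.
(defun mylib-square (x) (* x x))
(defun mylib-sum-of-squares (xs)
  (apply #'+ (mapcar #'mylib-square xs)))

(mylib-sum-of-squares '(1 2 3))   ; => 14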
Now, because of Lisp, I see Python as an ugly language (intermixed paradigms that demand synchronization when you need to change something small), but also I understand that that is an architectural compromise. In this world it's hard to be all nice and shiny.
Yet, I must admit I *really* want to use Lisp-based language, even functional-only, e.g. Scheme, but it is very hard to do so and I'm afraid I won't be able to reach that "light in the end of a tunnel" in my lifespan.
Y = sum(x,1,4,sum(y,3,8,x+y));
In the above, the inner sum(y,3,8,x+y) would generate an unnamed function to pass to the outer sum call. The variables x and y are passed not by value but by name; for plain variables this works out to be akin to passing an address or reference in C. It gets a bit confusing when recursion is involved.
The Burroughs B5500 had 48-bit memory words with 3 flag bits. The flag bits implemented the call by name of ALGOL. It was a stack machine, so when a function was loaded onto the stack, the call-by-name flag would cause it to be called. The compiler would generate unnamed functions when expressions were used as arguments. A variable would be a simple indirect reference. An error would occur when writing to a function.
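For comparison, here is a rough Common Lisp analogue (my own sketch, not the commenter's code), where explicit closures play the role of the compiler-generated unnamed functions:

;; Call-by-name via explicit thunks: SET-VAR "assigns to the name",
;; and EXPR re-evaluates the argument expression on every iteration.
(defun sum-by-name (set-var lo hi expr)
  (loop for i from lo to hi
        do (funcall set-var i)
        sum (funcall expr)))

;; Y = sum(x,1,4,sum(y,3,8,x+y)) becomes:
(let ((x 0) (y 0))
  (sum-by-name (lambda (v) (setf x v)) 1 4
               (lambda ()
                 (sum-by-name (lambda (v) (setf y v)) 3 8
                              (lambda () (+ x y))))))
;; => 192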
ALGOL died with the machines designed to run it. It didn't have a big user base to start with.
The only thing modern block structure languages have in common with ALGOL is their block structure. By the time C was being developed there were several block structured languages.
LISP was developed for AI research projects. It was tried for other functions and found lacking right from the start. APL is another failure, and like LISP it was unreadable. Those languages are easy to program in and near impossible to read.
There was a block structured LISP, but nothing came of it. There certainly are applications where a list processing language is useful. AUTOLISP worked well in AUTOCAD.
One can implement a Lisp-style list class in C++. There are some specialized languages like TREEMETA that make use of lists.
The real reason that all the old programming languages are dying out is the personal computer. Very few would run on a small computer. And by the time the personal computer evolved to where you could run mainframe languages on it, no one was interested. I think PASCAL lasted the longest. PASCAL was meant to be used for teaching programming and lacked many features needed for real world projects. Extended PASCAL compilers fixed the problems in different ways and eventually were dropped in favor of C. It was free and available on most platforms. And along came WINDOWS and eventually object programming in C++.
Only now do we see a real renaissance of Lisp: why not earlier? Because it's not that easy (for common coders) to recognize genius!
But, in the long term, genius will always win out over its competitors. Currently we are living in the most popular Lisp age ever; so, Lisp seemed to fail, and when everybody thought it dead, it resurrected, to live forever!
this retrospective of old-school programming languages is interesting, but it goes against almost everything I've read about CS history and Lisp. Could you, please, cite your sources?
Java is acknowledged as being designed so mediocre programmers can do something useful without blowing a foot off. It limits the hell out of better developers.
I expect Common Lisp to have a resurgence and things like clojure to be gateway drugs to Common Lisp. :)
Lisp can be the most readable of all languages as you can build up on the language to a language that deeply matches the type of problem at hand.
Yet consider this: My SLR camera is far more powerful than the simple point-and-shoot camera in my iPhone. Yet there are more iPhone cameras in circulation than SLRs. What does this say about single lens reflex cameras? Not much, except that a better photographer will prefer an SLR over an iPhone. So popularity is not a measure of fitness for purpose. It might even be the inverse. What makes the iPhone camera more accessible to non-photographers makes it less useful to professional photographers.
List processing is useful in generic programming. Generic C++ algorithms of modern C++11 almost invariably gravitate towards representing everything as a list - only they call these iterators & ranges. C++ also took metaprogramming on board, but has made this intractable and obtuse. Output and process are invisible. Lisp gets this right. The only problem is, only a very small number of C++ programmers ever progress to the point where this matters: very large and complex systems. Incidentally, this means that Lisp has a use case where metaprogramming is concerned. Clojure on the JVM with Java, and more recently Clasp on the LLVM with C++. Watch this space...
https://chriskohlhepp.wordpress.com/convergence-of-modern-cplusplus-and-lisp/
https://chriskohlhepp.wordpress.com/metacircular-adventures-in-functional-abstraction-challenging-clojure-in-common-lisp/
http://www.reddit.com/r/lisp/comments/2tms47/clasp_02_a_new_llvm_common_lisp/
Chris Kohlhepp
I think the real reason for its lack of greater popularity has more to do with the combination of lack of one leading implementation, and real lack of standardized libraries, partly because of the lack of a standard reference implementation. Yes, you have an ANSI standard, but who implements it to the letter?
In Python, you have a killer and beautiful language with amazing and powerful libraries which allow one to get stuff done right away.
Anyway, macros are not the reason why I like Lisp's syntax. The main reason is that everything in Lisp is a form which returns a value (or values), the only exception being declare expressions, which are compiler directives rather than forms. Even constructs such as if, cond, case, loop, do (if, if/else-if, switch, while) return a value. This means that you can write something like:
(setq var (if (condition) t-value f-value))
a literal translation to C would be (pretend x is an integer) :
int x = if (condition) { t-value; } else { f-value; };
except this is not possible in C, since if is a statement rather than an expression: it does not return a value and cannot appear in the middle of an expression. So to translate that code into C correctly you would have to first declare int x, and then use if to control which assignment is executed:
int x;
if (condition) { x = t-value; }
else { x = f-value }
So much repetition though I can of course use the ternary operator:
int x = condition ? t-value : f-value;
though not only did it take me a very long time to figure the ternary operator out, it is no good if you only have a t-value (true), or only an f-value, or multiple conditions (if, else-if). Suddenly C seems so ugly (just look at all the braces and semicolons), verbose and limiting compared to Lisp. This syntactic feature of Lisp alone made me love the syntax, since I can recall so many moments in C where I wished I could just write somevar = if (some-condition) { some-value; }; but couldn't. Note that I learned Lisp recently, after programming in C, C++ and Java, so this isn't some bad habit I picked up.
Another useful syntactic feature is the prefix notation which everybody hates. I love prefix for two reasons, number one being that arithmetic operations are not special syntactic constructs but regular functions, which behave as regular functions. I can pass these regular functions as higher-order arguments to mapping and filter functions. E.g. I want to add two lists? No problem: (mapcar #'+ list1 list2) passes the + operator as a regular function. How would you do that in C? You cannot take a function pointer to an operator. The second feature I love about prefix is the ability to perform comparison operations on more than two arguments at a time: (= a b c d) checks whether a, b, c and d are all equal. In C you may think it is possible to write a == b == c == d, but this is wrong, since a == b returns a boolean which will then be compared to the value of c. Thus you have to write a == b && b == c && c == d - again, so ugly and so much repetition. Checking whether y is between x & z? No problem: (< x y z). In C you would have to write x < y && y < z, and things get worse if you have a list of values.
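For instance, the variadic comparisons extend to whole lists (standard Common Lisp, nothing invented):

(apply #'< '(1 2 3 4))    ; => T, strictly increasing
(apply #'= '(5 5 5))      ; => T, all equal
(reduce #'+ '(1 2 3 4))   ; => 10, and + folds just as easily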
I used to think Lisp's syntax was bad, but when I actually tried it out I realized it has numerous advantages and is the way it is for a reason: getting rid of the parentheses would get rid of extensibility through macros and would destroy the everything-returns-a-value rule. Using infix may seem more natural than prefix, but it in itself creates many problems which prefix does not have.
CPL is an ancestor of the simpler systems language BCPL, which is an ancestor of C.
It's the government's fault for cutting funding to Lisp projects, not Lisp's fault for not delivering what the government wanted.
It's another OS's fault for being able to run on cheaper computers more efficiently, not Lisp's fault for needing expensive hardware.
It's another language's fault for being more readable, not Lisp's fault for being less readable.
It does no harm to learn it, and many if not most - as I experienced - changed their mind after some weeks of dealing with the "horrible syntax". Why is that so? I can't say for sure; however, I believe that there is some "aha" effect because it might radically change one's way of thinking (at least for non-math nerds). Certainly, without the existence of today's auto-paren-completion editors I wouldn't bet my wig ...
The fact is LISP has not failed. In fact LISP is still used for some of the most advanced work in AI. I know because this is what I do. I use LISP every day, and in fact I am developing a next generation LISP machine, because LISP is the best language for symbolic computation, but using it on existing systems, whether Linux or Windows, is very inconvenient.
It is programmers and systems developers who have failed. So many problems were already solved by the early LISP machines. It makes me sick to see progress taking the form of a drunkard's walk.
Fortran and COBOL are still used today and are still being updated and revised. COBOL is one of the most critical languages in the world. Algol 60 is still used today on the Unisys mainframes. These languages are used by a smaller proportion of programmers because mainframes are a smaller proportion compared to the 1960s.
Instead of saying the "ALGOL language family succeeded", it's more accurate to say that successful languages were influenced by Algol. If an entirely different person or people make an entirely different language, that's replacement, not evolution. C is not an evolution of Pascal or PL/I even though it replaced them in some places.
Garbage collection was an important feature of Algol W, Algol 68, and Simula 67, which are descendants of Algol 60 and from the 1960s. These languages were never widely used, but were widely known and influential to both theoretical computer scientists and practical language designers. Orthogonal as it relates to programming languages is also an Algol word. Algol programmers believed dynamic typing is a special case of static typing.
The article is so full of mistakes or misleading claims that it's scary.
Just to address two issues:
1. Readability: Lisp is one of the most readable programming languages out there.
So much so, that this is one of the reasons most Open Source Lisp code out there (github, etc) has practically zero comments: The code itself is so obvious that one seldom feels the need to comment the code.
The reason Lisp can be readable is that it has many constructs (i.e. many control flow statements), so a good coder can use the one that "looks the most natural" within the code. Another reason is that the language is so flexible and extensible that you can create the language constructs you need to express the problem in the most explicit (and thus easy to read) way, as in the sketch below.
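For instance, if a problem reads most naturally as a while loop, a few lines give you one (a standard textbook sketch; WHILE is not part of ANSI CL):

(defmacro while (test &body body)
  "Loop, evaluating BODY as long as TEST is true."
  `(loop (unless ,test (return)) ,@body))

;; Reads like a built-in at the call site:
(let ((i 0))
  (while (< i 3)
    (print i)
    (incf i)))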
2. Standardization: The article claims "The reason Lisp failed was because it fragmented (...) Fewer and fewer programmers ended up talking the same dialect". This is totally misleading.
Common Lisp is an ANSI standard, there are many implementations of CL out there (i.e. Allegro Common Lisp, LispWorks, Clozure CL, SBCL, Armed Bear Common Lisp, CLISP, Embeddable Common Lisp, etc) and they are so highly compatible that the code written in Common Lisp will usually work straight away in any of those compilers (implementations), with no change needed.
Things that the ANSI standard does not cover, like for example threading or interfacing with the C language, are already available as portable libraries, which - as the name implies - will work straight away regardless of the Lisp implementation.
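A small example with the portable bordeaux-threads library (a sketch, assuming Quicklisp is installed):

(ql:quickload :bordeaux-threads)

;; The same code runs unchanged on SBCL, CCL, ECL, and so on.
(let ((worker (bt:make-thread (lambda () (sleep 1) :done)
                              :name "worker")))
  (bt:join-thread worker))   ; => :DONE on most implementations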
So, next time, try using more Lisp...!
(defun is-op-right? () NIL)
(if (null (is-op-right?))
    (pprint "go away :)"))
Now, the third AI era is all about machine learning and related technology.
AI is now more a branch of statistics, and as a result AI is developed with Python and R.
There is no LISP/Scheme API for TensorFlow or Scikit-learn...
But LISP remains great and when machine learning reaches its limits, it may become more popular again (well if there are still some people able to program in LISP, because programming in LISP is probably more demanding than programming in PHP...).
> 1. Readability: Lisp is one of the most readable programming languages out there.
> So much so, that this is one of the reasons most Open Source Lisp code out there (github, etc) has practically zero comments: The code itself is so obvious that one seldom feels the need to comment the code.
This is a common trap many Lisp users fall into, as they assume their code is obvious to everyone. The moment you write a procedure and forget to annotate the type, we again end up with another procedure of `Any -> Any -> Any' which tells us nothing about the code. The name may be descriptive, but it does not tell us what types are valid at all, thus it is rather useless. What's the Lisp solution? Add the type as a prefix! But that does little other than give you some of the information, as some procedures have optional arguments which change their behaviour, making Lisp again unclear.
>2. Standardization: The article claims "The reason Lisp failed was because it fragmented (...) Fewer and fewer programmers ended up talking the same dialect". This is totally misleading.
It is correct, as Scheme, Clojure, LFE, and many others exist, not only Common Lisp, which is the most complex system of the ones mentioned.
I find there are so many comments here too blinded by "the power of Lisp" to see that the world has moved in the direction of ML-style languages more than that of Lisp. Algol languages continue to be the most widely used, yet we're seeing quite a large number adopt features from languages such as Haskell and Agda, rather than from Lisp. Now you could argue that Lisp can implement most of those features, but that doesn't make them something that came from Lisp.
This is not the solution at all and I have never, ever, ever seen such a code convention ("add the type as a prefix").
You are just guessing as an outsider.
You think that there is a problem with Lisp being dynamically typed and that we suffer because of this. This is very very far from the truth, at least for Common Lisp, the "complex" lisp. You know why? Because of two things:
1. Strong typing. Of the dynamic languages, CL is probably the most strongly-typed language. There are almost zero implicit casts; if a function works only on type X and you give a type Y, you will get a runtime exception (more on this later)
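For instance, in any conforming implementation:

(+ 1 "two")   ; signals TYPE-ERROR - there is no silent coercion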
so ML users say "yeah, you get a RUNTIME exception, and you have to start all over again! hurr durr"
but...:
2. CL is an interactive programming language: The running image is a living thing. Get a runtime exception because of a type error? (or because whatever reason)? No problem! Correct the offending function, recompile it (which takes a dozen milliseconds) and RESUME program execution. That's right, without having to start the program again, without having to recompile the whole codebase again.
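A minimal sketch of the condition/restart machinery that makes this resume-style workflow possible (DIV is a made-up example; RESTART-CASE and friends are standard CL):

(defun div (a b)
  (restart-case (/ a b)
    (use-value (v)
      :report "Return a value instead."
      v)))

;; At the REPL, (div 1 0) drops into the debugger with a
;; DIVISION-BY-ZERO condition; choosing the USE-VALUE restart resumes
;; execution without restarting the program. Non-interactively:
(handler-bind ((division-by-zero
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'use-value :infinity))))
  (div 1 0))   ; => :INFINITY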
>the world has moved in the direction of ML style languages more than that of lisp
I think ML languages are a great thing (and I'm sad that SML hasn't got more traction). However, they are the polar opposite of interactive development. Thus, they are not a replacement, just a different thing.
(lisp sucks the balls)
(lisp (sucks (the balls)))
(balls lisp sucks the)