Mar 22, 2023
Backing Into Clisp
I've been using the venerable Common Lisp implementation Clisp again for a month. It was my favorite Lisp implementation before I started using Scheme around 2010. It's a great implementation because Clisp's source code is so well written. It's also self-contained, with its own garbage collector. I feel like I've come back home.
Feb 19, 2023
A Change in Mlisp's Future Development Path
I'm suspending Mlisp development aimed at following future Scheme language trends. In some sense, Scheme development is bifurcating and becoming overly complex. It's probably an "intermediate" phase for Scheme right now, and things will get better in the future.
I'm doing almost a 180-degree turn in my thinking about using more Scheme language constructs in Mlisp. This includes using continuations, which I seem to have hit a wall with. I've had major problems pushing forward with the idea of using continuations within the last couple of months. Except for Chez Scheme, I'm going to stop looking at the other Scheme implementations (which may have a promising future). It's bordering on paranoia, so I'll leave this alone for a while.
Meanwhile, old Common Lisp does not look as dated as I once thought it did. Things change, but I did not think it would happen this fast. There's a whole bunch of new things I learned during the last few months leading to this. I suppose I have to make this entry if I'm going to continue with this log. We all know the pace of change is large, but it can also be a welcoming, wonderful part of it all.
Feb 13, 2023
The Downside of Parallel Processing
A couple of web browsers I use run parallel threads. I think most of them do nowadays. One web browser I use really gets "sick" sometimes, acting sluggish. It eventually recovers a little if I keep clicking on stuff. But it never fully recovers until I restart it. I can see how many threads it's running using the NMON utility which neatly displays the utilization for each thread. When the browser gets "sick", I can see only one thread running at 100%, while the other threads are idle.
I think what's going on is that one thread hits a race condition, and it blocks the others. There's a bug somewhere in the browser that they haven't found yet, probably because it's very hard to find. This is why good debuggers are so valuable for parallel software development. It's a market where you'll probably be able to sell your parallel debugger if it's good.
If I remember right, Chez Scheme may have facilities to "politely" kill a thread if it goes whacko. They're trying hard to architect Chez Scheme's parallel processing software to mediate this kind of trouble. So as good as the programmers developing Chez Scheme are, they know it's hard to fully overcome the complexities of parallel programming. Who dares enter this place!?
Updated paragraph: Mar 6, 2023
This is why I'm reluctant to fully commit to running Lparallel. Better to just stick with Bordeaux threads, a more reliable pthreads-style wrapper for Common Lisp.
After studying Lparallel for a couple of months, I now definitely feel my fear of using it is justified because of its complexity. It's a good idea in principle, worthy of study, but terrifying to maintain as a core application. I'm just not smart enough.
Feb 7, 2023
Adventures in Parallel Processing
During the past two years I looked at lots of Lisp implementations in an effort to understand the inter-relationship between continuations and parallel threads (POSIX pthreads). Chez Scheme uses subcontinuations ("Threads Yield Continuations", 1997, and "A Scheme for Native Threads", 2009). And there's a lot more.
I tried to get the right perspective moving forward. But for now, I have to step back and limit myself to improving Mlisp's garbage collection using pthreads. The POSIX threads idiom allows a limited form of asynchronous communication using mutexes and semaphores.
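As a sketch of the kind of limited pthreads communication I mean, here's a minimal C example where several mutator threads serialize their "allocations" through a single mutex. The names (heap_lock, cells_allocated, run_mutators) are invented for illustration; they are not Mlisp's actual internals.

```c
#include <pthread.h>

#define NTHREADS 4
#define NITER 100000

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
static long cells_allocated = 0;

/* Each mutator thread "allocates" by bumping a shared counter under the lock. */
static void *mutator(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&heap_lock);
        cells_allocated++;          /* stand-in for bumping an allocation pointer */
        pthread_mutex_unlock(&heap_lock);
    }
    return NULL;
}

/* Spawn the mutators, wait for them all, and report the total allocated. */
long run_mutators(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, mutator, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return cells_allocated;
}
```

Without the mutex, the increments would race and the final count would come up short; with it, the result is deterministic. That determinism is the "limited" asynchrony the idiom buys you.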
I think that in the future we will see Lisp moving to incorporate more explicit message passing in the general sense. That's what operating system developers called IPC or inter-process communication in the past. IPC is inherent in message passing languages like Smalltalk.
It seems to me that there are also inherent limitations in the use of multi-core processors using shared-memory like those manufactured by Intel and AMD today. To develop really powerful, autonomous systems you need correspondingly fast communication methods between processors in the hardware.
I can't seem to get away from the beauty of the working paper by Henry Baker and Carl Hewitt published in 1977, "The Incremental Garbage Collection of Processes". In section 4, Time and Space, they wrote about using futures on more than one processor. They foresaw, with incredible intuition, how parallel software would be developed today.
Feb 3, 2023
A Little Note: Looking at Ecolisp Lisp
Kyoto Common Lisp, like Franz Lisp, is architecturally similar to Mlisp in its use of stacks. Giuseppe Attardi's fork of Kyoto Common Lisp, Ecolisp, was a partial attempt to parallelize Lisp. Ecolisp tried to spawn threads using ideas from computing with continuations. After thinking about this for a while, I now believe computation with continuations in a multiprocessing setting should be done sequentially, because doing otherwise becomes intractably complex. I think this is the belief of most Lisp implementors. Maybe an exception would be the author of Chez Scheme.
The ECL fork of Ecolisp stripped away Giuseppe's method of parallelism. ECL uses the thread-safe Boehm GC. I see a grey zone in Ecolisp from which a lot can be "borrowed" to produce a better Mlisp implementation of parallel threads. This is tricky because the entire memory allocation and GC method in the "code deep down" of Mlisp would be affected. It's hard to know where to start. At least I know, for now, not to mix continuations and parallel threads, from the many code examples I've read.
Jan 26, 2023
The Beauty of Implementation
I'm learning that following a beautiful ideal is a really subjective fault. I just reread AIM-454, "The Incremental Garbage Collection of Processes" by Henry Baker and Carl Hewitt. It was written in 1977. Because of the way Intel and AMD processors are evolving with multiple cores, the ideal, beautiful way of implementing parallel threads is inefficient and complicated. I have to step back a little.
So I think the way to go for Mlisp is to use a "primary" thread for GC, and other threads for processing the application functions. It's the way most GCs, by necessity, are designed today. This gets me down a little, but some good solution will eventually come along.
Jan 6, 2023
Futures and Parallel Processing
A continuation is a context switch as a temporal event. A continuation naturally introduces the computation to parallel processing by enabling the context in which the code is executing to split into two parts. These two parts (or threads) can then execute at the same time.
The continuation is executed in the future; hence, this split execution form or state is called a "future". The result of this computation, which occurs in the future with respect to the continuation, is called a "promise". Rather cute names for describing the state of computation.
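A hypothetical C sketch of this future/promise idea using pthreads: the computation splits, the caller keeps running, and forcing the future blocks until the promised value arrives. The future_t API (future_spawn, future_force) is invented here for illustration; it's not from any of the Lisps discussed above.

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct {
    pthread_t thread;
    long (*fn)(long);
    long arg;
    long value;              /* the "promise": filled in by the worker thread */
} future_t;

static void *future_run(void *p)
{
    future_t *f = p;
    f->value = f->fn(f->arg);
    return NULL;
}

/* Split the computation: the caller continues while fn(arg) runs in parallel. */
future_t *future_spawn(long (*fn)(long), long arg)
{
    future_t *f = malloc(sizeof *f);
    f->fn = fn;
    f->arg = arg;
    pthread_create(&f->thread, NULL, future_run, f);
    return f;
}

/* Force the future: block until the promised value is available. */
long future_force(future_t *f)
{
    pthread_join(f->thread, NULL);
    long v = f->value;
    free(f);
    return v;
}

static long square(long n) { return n * n; }
```

The caller can do arbitrary work between future_spawn and future_force; the join is the point where the promise is redeemed.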
If there is a natural way to do parallel threading without really convoluting garbage collection, it will greatly ease the pain of developing Mlisp.
Nov 27, 2022
It Was Just A Dream
Ten years ago I dreamed of designing a neuronal network based on parallel string matching. I called the project Kali after the Hindu goddess of ephemeral energy. One of the Lisp programs I studied then was Daniel Bobrow's Meteor [ref: LispBook_Apr66.pdf]. Meteor is a program which, in a sense, preceded Charles Forgy's OPS5. Meteor used Victor Yngve's COMIT system [ref: MT-1958-Yngve.pdf].
The collection of programs or "program sketches" in Kali is based on string matching objects using frames and slots. Meteor's pattern matching algorithm fits within a frames approach because a word pattern is of the form (element_1 / atom_0 atom_1 ...). The pattern matching strategy in Meteor mirrors the concepts used in COMIT, conceived by Victor Yngve (1958).
I've been thinking about using the source code of Frulekit, written by Peter Shell, to design a UPS version 2, but it uses constructs which are inefficient, e.g., several very long defstructs, and too much confusing, archaic Common Lisp code. On the other hand, it has a clear design very close to the formalism in COMIT.
The Lisp source code for Meteor is in Appendix 1 of LispBook_Apr66.pdf. Meteor's Lisp code is very stripped down, and too sketchy for use in a real production system. But that makes Meteor an excellent place to start designing UPS2 in a modern Lisp style. For a long while, ten years, this has been just a dream.
Nov 22, 2022
The Importance of Memoization in Recursive Functions
I've recently been thinking about rewriting the production system Soar in Lisp. The old Lisp version of Soar is version 5. There are some OPS5 skeletons in it, but its evolution from OPS5 is messy; not worth the investment to fix, in my opinion.
The core of OPS5 and Soar is Charles Forgy's Rete algorithm. So in order to rewrite Soar, I was thinking about doing a clean implementation of it. Furthermore, I never really had a genuine feel for the complex Rete algorithm, so if I implemented a version of it myself, I'd be better off.
In "Production Matching for Large Learning Systems" by Robert Doorenbos (1995), page 12, paragraph 1, the author mentions that one of the most important features to fasten Rete is "state-saving". Another feature is "node-sharing" between productions. This is all classic memoization - a method to cache code in recursive functions. The price for conceptual simplicity is program code complexity. But I think the overall program code complexity will be less than what exists in the old Soar-5's Lisp code.
So I'm dreaming about a possibly fantastic speed-up in Soar if I can get through memoizing the production system algorithm in an efficient way.
Aug 23, 2022
Mlisp Scoping Rules for the Set Operator
From the start of developing Mlisp, I was never sure how I should define the set operators. Common Lisp makes the most sense to me in its use of set operators for initializing variables, i.e., the "defconstant" and "defparameter" operators. I stayed away from using "defvar" because this operator will not reinitialize an already existing global variable, which seems unnecessary to me. These operators are consistent with the notion of using dynamic or global variables at the top level.
In Scheme, any variable is initialized with the operator "define". The operator "set!" is supposed to modify an existing variable. Since Scheme is lexically scoped, the notion of a dynamic or global variable does not really exist. Yet you can sort of define a global variable in Scheme by defining it at the top level. You can change a variable defined at the top level by using the "set!" operator at the top level, or at the procedural level below it. (Also note Scheme's "make-parameter", which defines a variable at the top level.)
The set operators in Common Lisp and Scheme are consistent if you don't try to mix these features from both languages. But after all these years of mixing features of Common Lisp and Scheme, I've come to the conclusion that you cannot define consistent set operators in a simple way using the scoping rules of Mlisp.
After trying to use Chez Scheme a little more, I noticed that the "set!" operator can initialize a variable inside a "let" statement. For example,
----------------------------------------------
(let ()
  (set! x 1)    ; set closure inside a let
  (printl x))
----------------------------------------------
works.
Mlisp uses the following set operators:
set (like set in CL),
set* (like defparameter in CL),
set+ (like defconstant in CL),
set! (like setq and setf for top-level global variables in CL),
set= (like setq for local lexical variables in CL).

Mlisp's set operators sort of work, but have not been perfected yet. Since I started programming in Common Lisp and then learned Scheme, I'm inclined (or prejudiced) toward the Common Lisp usage of dynamic variables. In some ways, Common Lisp's way of defining a variable is, in general, simpler, and maybe cleaner.
Jan 9, 2022
Continuations and the Unfolding of State
Continuations occur in program execution when the context for evaluation changes.
Dec 29, 2021
XLisp with Continuations was Truly Experimental
The way David Betz wrote Xscheme in 1988 was remarkable because of the simplicity and boldness of its implementation, based on the concept of using full continuations. Given all the possible traps, Xscheme works remarkably well without seg-faulting.
Xscheme's scoping rules are those used in Common Lisp. So Scheme procedures or operators like define and set! do not work in Xscheme's (and hence Mlisp's) let procedure. This is part of the evolutionary problems in Mlisp's future.
The following is an article written by Duba and Felleisen in 1991, based on Alan Bawden's note that Scheme's letrec and define procedures are not a "purely declarative facility". The authors mention that Chez Scheme implemented a fix for this, which was a "non-RRRS" feature. Chez Scheme is the Scheme I've decided to use in the future, after completing a review of the Schemes I've used in the past, which is described below.
Updated: Feb 19, 2023
Updated: Feb 7, 2023
Updated: Nov 26, 2021
Nov 21, 2021
Lisp with Threads
What impresses me about the benchmark results below is that SBCL is fantastic at numerical processing. Chez is also nice to program in.
This btree benchmark, which was derived from gcbench, is a good general test of a Lisp implementation's garbage collection. The processor (Intel i5-3330) has 4 cores. The routine btree.l has basically no type optimizations.

              chez    sbcl      mit-scheme  ccl     mkcl     ecl     gcc
btree.l (21)  25s     25s       25s         92s     118s     121s    44s
        (21)  35s     11.6s**
        (21)  23s*    9s***                 40s^*   42s^**           9s*^

**   btree-sbcl.ls - concurrent/pthreads with option "--dynamic-space-size 16384"
***  btree-sbcl-2.ls - with option "--dynamic-space-size 16384"
^*   btree-mkcl-2.ls
^**  btree-ecl-2.ls - compile result is OPTIMIZE levels: Safety=2, Space=0, Speed=3, Debug=0
*    btree-chez-2.l - chez native pthreads
*^   btree-gcc-2.c - gcc with Apache apr thread pool (openmp)

Less numerically intensive version (better for testing just GC)

                chez    sbcl   ecl     mkcl      ccl     chapel   luajit
btree-3.l (21)  7.6s*   1.9s   8.9s**  10.4s^^   18s*^   7.9s     14.3s***

*    btree-chez-3.l - the time for btree-3.l (21) is 15s, so using chez's parallel threads speeds up performance by 2x for this test
**   btree-ecl-3.ls
^^   btree-mkcl-3.ls
*^   btree-ccl-3.ls - there is no "condition-wait"
***  luajit runs parallel threads efficiently with popen (using interprocess communication) instead of pthreads
chapel - threads run efficiently on all cores, yet you really don't explicitly program for it
Oct 19, 2021
Replaced Mlisp-0.7u with Nlisp-0.4l
Deprecated the xlisp-3 version of Mlisp in favor of the simpler code base from Nlisp. There's only one version of Mlisp now. Future changes to Mlisp will alter the stack structure to use a segmented call stack (ref: "Representing Control in the Presence of First-Class Continuations", by Hieb, Dybvig and Bruggeman, 1990). JIT development will continue on Mlisp.
Oct 14, 2021
Performance Benchmarks on Nlisp
So far, I've studied the tutorial on creating a toy interpreter using libgccjit, and the examples for gnu lightning and libjit. Libjit abstracts away the little details of assembly coding.
The Boyer and Btree (aka GCBench) benchmarks reveal that Nlisp spends up to 80% of its time in the garbage collector (GC) under heavy loads.
In the btree.l "binarytrees:" test, (nlisp -L btree.l 16), the profiler shows Nlisp spent about 25% of its time in the GC. For larger tree traversals, (nlisp btree.l 17), the profiler shows Nlisp spent greater than 80% of its time in the GC. This degradation in performance is due to the GC repeatedly trying to allocate more memory, and continually traversing long trees to mark each accessible cell. As each cell segment gets larger, it takes progressively longer to complete a sweep of the unmarked cells. The compaction of vector segments did not impact performance much, as it took only 2% of the running time.
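A minimal C sketch of why the mark phase dominates: marking must visit every reachable cell, so its cost grows with the size of the live tree. The cell_t structure here is hypothetical, not Nlisp's actual cell layout.

```c
#include <stdlib.h>

typedef struct cell {
    int marked;
    struct cell *car, *cdr;
} cell_t;

/* Mark phase: visit every reachable cell exactly once.  Returns the number
 * of cells newly marked, so the traversal cost is visible to the caller. */
long mark(cell_t *c)
{
    if (c == NULL || c->marked)
        return 0;
    c->marked = 1;
    return 1 + mark(c->car) + mark(c->cdr);
}

/* Build a complete binary tree of the given depth (2^depth - 1 cells),
 * the same shape of load the btree benchmark puts on a collector. */
cell_t *make_tree(int depth)
{
    if (depth == 0)
        return NULL;
    cell_t *c = calloc(1, sizeof *c);
    c->car = make_tree(depth - 1);
    c->cdr = make_tree(depth - 1);
    return c;
}
```

Doubling the tree depth doubles the number of live cells the mark phase must touch per collection, which is exactly the traversal time the profiler charges to the GC.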
Sept 29, 2021
A Little Review on Implementing JIT for Mlisp
Now I seem to be sliding backwards while trying to move forwards on a design for Mlisp. What's becoming clear to me now is that I cannot develop a JIT without a virtual machine. That's the reason I gave up on the old Nlisp, which had no bytecode compiler. It would be hard to come up with a simple way to write advanced procedures without embedding sub-procedures or subroutines within higher level procedures. For example, recursion fits within an execution model using the virtual machine. The lambda function defines a way to build procedures up from the integration of smaller sub-procedures. It's almost as if a fuller lambda expansion necessarily requires the dispatching function of a virtual machine.
And that's the machine. That's what all the little parentheses, e.g., (*(!*(%(&^(&^%$(**)*&^).....))), are about. It might seem obvious in retrospect, but it's worth thinking about now.
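The dispatching function of a virtual machine can be sketched in a few lines of C: a loop that fetches an opcode and switches on it. The opcodes and stack layout below are invented for illustration and are far simpler than Xlisp's actual bytecode.

```c
#include <stdint.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

/* Minimal stack-machine dispatch loop: fetch an opcode, act on it, repeat.
 * OP_PUSH takes one literal byte operand; OP_HALT returns the top of stack. */
long vm_run(const uint8_t *code)
{
    long stack[64];
    int sp = 0;                      /* next free stack slot */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {        /* the dispatching function */
        case OP_PUSH: stack[sp++] = code[pc++];        break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}
```

A program like {OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL, OP_HALT} computes (2 + 3) * 4. Every nested lambda call ultimately bottoms out in a loop of this shape, which is why a JIT has to start from the VM.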
David Betz's stack oriented Xlisp, and hence Mlisp, have a lot in common with Maclisp. Scott Fahlman's Spice Lisp, a descendant of Maclisp, led to CMUCL and SBCL. I spent some time reading the old code in CMUCL this weekend. It's amazing software considering it was written in the early 1980s, when the C language was still not in widespread use. Spice Lisp was designed for special computers with their own microcode. The more modern CMUCL virtual machine was written in C, with a design that is rather the standard way Lisp VMs are written today. What's incredible to me is that it uses an "implicit continuation representation" (ref: Design of CMU Common Lisp) in its compiler. There's a well written story on this in Gabriel and Steele's "Evolution of Lisp", section 2.8, Scheme: 1975-1980. Scheme's continuation model can actually simplify the construction of Lisp compilers. My interpretation of this is that "context", especially temporal, time ordered context (aka continuations), is a necessary requirement for a descriptive language. Continuations are analogous to an operating system's context switches.
CMUCL and SBCL use these continuation ideas in their compilers, but never implemented continuations in the language itself. This makes sense because, in practical applications today, the use of continuations is still not as efficient as using coroutines or threads. The Ruby language implemented continuations, but now makes them available only as an installation option. Practical languages like OCaml, versus SML/NJ, do not have continuations. This fits the intended usage of those languages. But for writing languages with greater descriptive power, parallel processing algorithms may be required, due to the associative capabilities needed in the short temporal intervals characteristic of spoken languages.
In the next couple of days I'll probably continue reading "From Folklore to Fact: Comparing Implementations of Stacks and Continuations" and "Implementation Strategies for First-Class Continuations". This is "where the rubber meets the road", as they say. Moving forward on Mlisp's JIT would parallel work like using GNU Lightning to speed up Smalltalk. Trying to create a specific design plan for this has me really absorbed.
Sept 15, 2021
Selecting the JIT Tool
I've been thinking about a JIT tool for developing Mlisp for about a month, and during the last couple of days it's become obvious to me that I should try GNU Lightning first. This is because Lightning was originally designed to speed up Smalltalk's context switching between objects. Smalltalk objects are the counterpart to Lisp procedures with respect to the language's internal virtual machine. Both objects and procedures require the same kind of contextual management of their arguments and values. And in many instances these kinds of language implementations use stacks to keep track of objects or procedures.
Lightning explicitly refers to continuations in the section on setting up procedures (i.e., trampolines, continuations and tail call optimization). This is at the core of what JIT for Mlisp is about.
The challenge for me is to implement JIT efficiently so that mlisp will run the ctalk benchmark, for example, as fast as possible. Most scheme implementations don't consider fast execution of continuations critical. In my opinion, if you want to create a machine which understands language, with all the complexity of words in its context, then fast context switching with continuations is essential.
So I'm focusing my attention now on Lightning, and will focus to a lesser extent on libgccjit or LuaJIT's DynASM for now. I don't seem to have the time to look into Smalltalk's implementation of Lightning, nor maybe Ruby's JIT. The horizon for languages using objects, like Smalltalk and Ruby, shines bright, but they're intrinsically inefficient because of their exaggerated requirement to explicitly depend on object linkages (in the form of inheritance). But as time passes, the evolution of programming languages will follow Smalltalk's original design concept, and that's the same for Mlisp, in my opinion.
Sept 11, 2021
The Nbody Benchmark
(time in seconds)
# of particles chez sbcl luajit
5000 0.15s 0.14s 0.01s
500000 1.54s 0.22s 0.16s
5000000 14.6s 0.98s 1.44s
50000000 188s 8.14s 14.5s
This is an extremely interesting, computationally intensive benchmark. SBCL and LuaJIT were notably robust at 50 million particles.
The LuaJIT language implementation is extremely efficient. Most notable are its small size and simplicity. It's a model implementation of a programming language for Mlisp to try to follow.
Sept 9, 2021
States or Values, Ordered Sequences, and Procedures
The intuitive model I follow for thinking about context switching using continuations comes from temporal sequences describing physical processes. For me, a simple description of a computational process requires keeping track of a procedure's calculation within an "environment" for the procedure, which includes its arguments, and also all of the data or states which can be associated with its arguments. The procedure can become enfolded in the global data environment, which is the context in which all computation occurs.
It's worth reading formal descriptions of continuations to get a perspective on how wide their applications are. For example, John Reynolds' article "Definitional Interpreters for Higher-Order Programming Languages" (1972, reprinted 1998) is a concrete explanation of continuations, and provides a really good background on them. The article explains the relationship between "embedding continuations in values" and how procedures can occur as values.
How procedures occur as values in a programming language is a key idea.
Sept 3, 2021
Conceptually, I like using first-class or full continuations for asynchronous event handling. Using continuations is not hard if you can keep things simple. Modelling real world events requires asynchronicity. Thus, in a rather insane fashion, I'm leveling anything that gets in the way of continuations. This includes exception handling with throw/catch and unwind-protect, for now. This doesn't mean I won't go back to using these special operators in some form in the future, if there's a simple way to do so. (Ref: Kent Pitman's comments on unwind-protect versus continuations.)
August 25, 2021
Mlisp JIT - Now The Tedious Coding
Moving to add native compilation to Mlisp requires spending more time on the bytecode interpreter. So I'm archiving Nlisp-0.3 for now. The complexity of Mlisp-0.7's coupled continuation/GC code would still require me to strip away a lot of the continuation dispatch structures. So I'm not going to develop the JIT on Mlisp-0.7. Instead I'm using the latest Mlisp-0.6, which I'll call Nlisp-0.4, to develop the JIT on. This jumping between Lisp versions might seem messy, but there are still too many bugs (and too much complexity) in the continuation/GC part of the current Mlisp-0.7 for me to develop a JIT on.
Currently, the opcode tracing in Nlisp-0.4 still has lots of problems. It may take a couple of weeks for me to get this working correctly. In the meantime, I've been trying to understand how libgccjit works in gccemacs. Since this JIT is complex and does lots of optimizations, it might be a good tool to look into. The performance gain from using libgccjit on Emacs is almost 4x (not the 10x boost I was hoping for :-)).
A good reference is the gccemacs wiki page.
|_byte code_| ---> libgccjit ---> |_native code_|
August 16, 2021
The Plan for Mlisp JIT
Mlisp is currently a bytecode compiler. What we need to do is get Mlisp to translate the bytecode to machine code, or "mcode" (versus "bcode"), on the fly. The Mlisp module make_code_string produces what is called a "byte code string" (also commonly referred to as a "code vector") for execution. It was called a code "string" because the Mlisp variable bcode was typed as an unsigned string, ("ucstr bcode;"), in Mlisp.
So in essence we want to change make_code_string into a make_mcode_string. The final code object for execution would be a machine code string. This means we have to use the x86 assembler for our machine, which runs the Linux operating system. The assembler I'll use is gas. I don't know if all this will work yet because I've never done this before. But I'm laying out a plan to produce machine code vectors. I don't know how far I'll get.
August 13, 2021
JIT for Mlisp
The Lisp implementations which are fast are SBCL, Clozure and Chez. I've never looked at the source code for these implementations because I don't have a background in JIT (just-in-time) compilation. The Chicken Scheme implementation is also fast, but does a static compilation from Scheme to C code. The source code for Scheme2c looked pretty messy to me, and JIT toolkits like libjit seem as if they'll be too much work for me to try to use.
However, I feel that Mlisp might be able to run the CTAK benchmark in the speed range of Chez if it used a JIT to compile its functions or procedures. Mike Pall's LuaJIT project was amazing since it boosted Lua's speed by an order of magnitude. I'm starting to feel that I should study or use the methodology of LuaJIT to increase Mlisp's execution speed.
Mlisp is a compact bytecode implementation originally written by David Betz. The theoretical design philosophy is essentially very simple, but its implementation is complicated because its component parts, like context execution (continuations), code evaluation and garbage collection, are really tightly coupled together. But I feel there's essential beauty and capability in Mlisp which makes it deserve a JIT compiler for its procedures. So in the coming months or years, I might be working towards this goal. I've not spent too much time on this project lately, but getting this done will require it. And I am interested in doing this.
August 5, 2021
I've hit a wall trying to work with the continuation dispatch table. So for now, I'm putting it aside. I'll probably not go back to using it; since for me, it's too complex in design. I'm continuing in the direction of using the "Context" data structure, and plain "Jmpbuf" to do the context switching.
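A small C sketch of the Context/Jmpbuf style of switching: save a context with setjmp, run nested code, and unwind back to the saved point with longjmp. The names here are generic C, not Mlisp's actual structures, and the value is carried in a volatile variable to stay within what the C standard guarantees for setjmp.

```c
#include <setjmp.h>

static jmp_buf context;         /* the saved execution context */
static volatile int result;     /* value carried across the jump */

/* Deep inside nested calls, "throw" back to the saved context. */
static void inner(int depth)
{
    if (depth == 0) {
        result = 42;
        longjmp(context, 1);    /* unwind straight back to the setjmp */
    }
    inner(depth - 1);
}

/* Save a context, run nested code, and return the value delivered by the
 * non-local exit.  This is an escape-only (one-shot, upward) continuation,
 * which is all that setjmp/longjmp can express. */
int run_with_context(void)
{
    if (setjmp(context) != 0)   /* nonzero: we arrived here via longjmp */
        return result;
    inner(5);
    return -1;                  /* never reached */
}
```

The limitation is the point: setjmp/longjmp can only escape upward out of live frames, which is why it's so much simpler than a full continuation dispatch table, and why I'm retreating to it.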
July 22, 2021
mlisp-0.6q is deprecated and archived. nLisp (David Betz's xlisp-0.17, which did not have a bytecode compiler) uses the same basic structures as mLisp; however, it does not have continuations. The old mxLisp, which grew out of xlisp-3, does a really remarkable job of implementing continuations. So the path ahead looks like I'll be trying to merge the execution-context flavored code from xlisp-2 with the continuations in xlisp-3. Currently, as in the past, I am sort of inspired by the elegance of David Betz's implementation of continuations.
April 21, 2021
mxLisp is being deprecated. I'm stopping work on mxLisp because I'm starting to understand its code better now. I can see the evolution of mLisp into mxLisp (David Betz's xscheme-0.28, which evolved into xlisp-3.0) a lot clearer now than I did a few years ago.
Moreover, since I'm also seeing how nLisp (David Betz's xlisp-0.17 which did not have a bytecode compiler) uses the same basic structures as mLisp, I've been able to understand better now how the bytecode compiler was constructed.
I don't want to maintain 3 sets of Lisps, so I will focus on nLisp, which has a simpler code base but still does not have continuations. I think adding them will be the best and most challenging part of designing nLisp. I hope this can happen soon, within the next year. "Stay tuned."