


May 27, 2023

Speculative Computing?

Ref: (1) Speculative Computation in Multilisp, Randy Osborne, 1989
     (2) Parallelism in Lisp, Guy Steele, 1995

In the Computer Language Benchmarks Game, the Racket entry #3 for spectralnorm defines a verb "for/par" using the "future" construct as:

-------------------------------------------------------
(define-syntax-rule (for/par k ([i N]) b)
   (let ([stride (fxquotient N k)])
      (define fs
         (for/list ([n k])
            (future (lambda ()
               (for ([i (in-range (fx* n stride)
                                  (fxmin N (fx* (fx+ n 1)
                                                stride)))])
                  b)))))
      (for-each touch fs)))
-------------------------------------------------------
The matrix-vector product Av, using "for/par", is defined as:
-------------------------------------------------------
(define (Av x y N)
   (for/par C ([i N])
      (flvector-set! y i (let L ([a 0.0] [j 0])
                            (if (fx= j N) a
                                (L (fl+ a (fl* (flvector-ref x j)
                                               (A i j)))
                                   (fx+ j 1)))))))
-------------------------------------------------------

I really like the style in which Racket handles this problem, and its execution speed is comparable to SBCL's entry. Chez's trouble with execution speed for floating-point numbers, i.e., flvectors, has a solution.

SBCL's entry is more straightforward, following Bordeaux threads. But the philosophy behind Racket's style of implementing native threads more closely follows the line of Robert Halstead's original Multilisp parallelism (1). Guy Steele's 1995 remarks (2) on "Parallelism in Lisp" are insightful, especially his notes on Futures and "Parallelism with No Side Effects".

When I first started using Chez Scheme, I noticed in its "Control Structures" chapter procedures like one-shot continuations and engines, which I suppose are there to guard against the pitfalls of spawning threads. Common Lisp has had this stuff since 2011 in the libraries Pcall and Eager-future, and also debugging software like 1AM. But having diagnostic tracing procedures directly in the native implementation is nice.

Mlisp has the built-in procedure "promise". It's supposed to be the equivalent of Chez's delay. The promise verb is related to the "future" construct shown in the following Mlisp code, which I borrowed from Yin Wang.

------------------------------------------------------- 
;; ref: chez-future.ss as formulated by Yin Wang
(load-lib "record.l")
(def-struct Fstruct result ready mutex cond) ; definition at toplevel

(def future (thunk)
   (let ((item (make-Fstruct false false (make-mutex) (make-condition))))
      (fork-thread (lambda ()
         (let ((result (thunk)))                ; run the thunk in a new thread
            (with-mutex (Fstruct-mutex item)
               (Fstruct-result-set! item result)
               (Fstruct-ready-set! item true)
               (condition-broadcast (Fstruct-cond item))))))  ; wake any waiters
      item))

(def touch (item)
   (let ((mutex (Fstruct-mutex item)))
      (with-mutex mutex
         (let $loop ()                          ; block until the result is ready
            (when (not (Fstruct-ready item))
               (condition-wait (Fstruct-cond item) mutex)
               ($loop)))
         (Fstruct-result item))))


;; example -------------------------
(def fib (n)
   (def memo (a b n)
      (if (= n 0) b
          (memo (+ a b) a (dec n))))
   (memo 1 0 n))

(define thunk (lambda () (fib *n*)))

(set*  *n* 10)
(touch (future thunk))  ; => 55
-------------------------------------------------------
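
For comparison, here's the lazy, single-threaded analogue using Chez's delay and force (a minimal sketch; I'm assuming Mlisp's promise behaves the same way). The thunk doesn't run in another thread; it runs once, when the promise is first forced.

-------------------------------------------------------
(define p (delay (begin (display "computing\n") (* 6 7))))

(force p)   ; prints "computing" and returns 42
(force p)   ; => 42 again; the thunk is not re-run
-------------------------------------------------------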

I can now see the "future" and "touch" pattern in most of the pthread code that I have written in the past.


Updated: June 5, 2023
May 16, 2023

The Racket Version of Chez

The Racket/Chez compiler (CS) includes the function "thread-join". This enables me to run the parallel version of Spectralnorm.
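
The pattern is simple: fork the workers, then join each one. A minimal sketch (work-on-chunk is a hypothetical worker procedure; I'm assuming thread-join takes the thread object returned by fork-thread and blocks until that thread exits):

-------------------------------------------------------
(define threads
   (map (lambda (n) (fork-thread (lambda () (work-on-chunk n))))
        '(0 1 2 3)))              ; one worker per chunk
(for-each thread-join threads)    ; block until every worker finishes
-------------------------------------------------------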

   ---------------------------------------------------------------------
    Mlisp code that runs with Racket/Chez (CS):  spectralnorm-chez.l
    Mlisp code that runs with Racket/Chez (CS):  spectralnorm-chez-future.l
    Mlisp code that runs correctly using SBCL:   spectralnorm.lsp
   ---------------------------------------------------------------------

For an input of 5500, the output is 1.274224153 (the correct value), and the execution time (on an i5-3330 processor) is 27 seconds. It works. It just needs to be optimized now.


May 2, 2023

Using Lisp

Back in the 1990s I tried building rudimentary recurrent neural networks following Scott Fahlman's Cascor (cascade-correlation) model. I wrote it all in the C language because it was efficient. Now all of that looks like sludge compared to modern Lisp. Working with neural networks requires tons of numerical processing, and Lisp does okay. Using modern Lisp makes this programming game fun.


Mar 22, 2023

Backing Into Clisp

I've been using the venerable Common Lisp implementation Clisp again for a month. It was my favorite Lisp implementation before I started using Scheme around 2010. It's a great implementation because Clisp's source code is so well written. It's also self-contained, having its own garbage collector. I feel like I've come back home.


Feb 19, 2023

A Change in Mlisp's Future Development Path

I'm suspending Mlisp development toward future Scheme language trends. In some sense, Scheme's development is bifurcating and becoming overly complex. It's probably an "intermediate" phase for Scheme right now, and things will get better in the future.

I'm doing almost a 180-degree turn in my thinking about using more Scheme language constructs in Mlisp. This includes continuations, with which I seem to have hit a wall. I've had major problems pushing forward with the idea of using continuations during the last couple of months. Except for Chez Scheme, I'm going to stop looking at the other Scheme implementations (which may have a promising future). It's bordering on paranoia, so I'll leave this alone for awhile.

Meanwhile, old Common Lisp does not look as old to me as it once did. Things change, but I did not think it would happen this fast. There's a whole bunch of new things I learned during the last few months that led to this. I suppose I have to make this entry if I'm going to continue with this log. We all know the pace of change is large, but it can also be a welcome, wonderful part of it all.


Feb 13, 2023

The Downside of Parallel Processing

A couple of web browsers I use run parallel threads. I think most of them do nowadays. One web browser I use really gets "sick" sometimes, acting sluggish. It eventually recovers a little if I keep clicking on stuff, but it never fully recovers until I restart it. I can see how many threads it's running using the NMON utility, which neatly displays the utilization of each core. When the browser gets "sick", I can see only one thread running at 100% while the other threads are idle.

I think what's going on is that one thread hits a race condition and blocks the others. There's a bug somewhere in the browser that they haven't found yet, because I think it's very hard to find. This is why good debuggers are so valuable for parallel software development. It's a place where you'll probably be able to sell your parallel debugger, if it's good.

If I remember right, Chez Scheme may have facilities to "politely" kill a thread if it goes whacko. They're trying hard to architect Chez Scheme's parallel processing support to mediate this kind of trouble. So as good as the programmers developing Chez Scheme are, they know it's hard to fully overcome the complexities of parallel programming. Who dares enter this place!?

Updated paragraph: Mar 6, 2023
This is why I'm reluctant to fully commit to running Lparallel. Better to just stick with Bordeaux threads, a more reliable pthreads-style wrapper for Common Lisp.
After studying Lparallel for a couple of months, I now definitely feel my fear of using it is justified because of its complexity. It's a good idea in principle, worthy of study, but terrifying to maintain as a core application. I'm just not smart enough.


Feb 7, 2023

Adventures in Parallel Processing

During the past two years I looked at lots of Lisp implementations in an effort to understand the inter-relationship between continuations and parallel threads (POSIX pthreads). Chez Scheme uses subcontinuations ("Threads Yield Continuations", 1997, and "A Scheme for Native Threads", 2009). And there's a lot more.

I tried to get the right perspective moving forward. But for now, I have to step back and limit myself to improving Mlisp's garbage collection using pthreads. The POSIX threads idiom allows a limited form of asynchronous communication using mutexes and semaphores.

I think that in the future we will see Lisp moving to incorporate more explicit message passing in the general sense. That's what operating system developers called IPC or inter-process communication in the past. IPC is inherent in message passing languages like Smalltalk.

It seems to me that there are also inherent limitations in the use of shared-memory multi-core processors like those manufactured by Intel and AMD today. To develop really powerful, autonomous systems you need correspondingly fast communication methods between processors in the hardware.

I can't seem to get away from the beauty of the working paper by Henry Baker and Carl Hewitt published in 1977, "The Incremental Garbage Collection of Processes". In section 4, Time and Space, they wrote about using futures on more than one processor. They foresaw, with incredible intuition, how parallel software would be developed today.


Feb 3, 2023

A Little Note: Looking at Ecolisp Lisp

Kyoto Common Lisp, like Franz Lisp, is architecturally similar to Mlisp in its use of stacks. Giuseppe Attardi's fork of Kyoto Common Lisp, Ecolisp, was a partial attempt to parallelize Lisp. Ecolisp tried to spawn threads using ideas from computing with continuations. After thinking about this for awhile, I now believe using continuations for multiprocessing should be done sequentially, because doing otherwise becomes intractably complex. I think this is the belief of most Lisp implementors. Maybe an exception would be the author of Chez Scheme.

The ECL fork of Ecolisp stripped away Giuseppe's method of parallelism. ECL uses the thread-safe Boehm GC. I see a grey zone in Ecolisp from which a lot can be "borrowed" to produce a better Mlisp implementation of parallel threads. It's an especially delicate case because the entire memory allocation and GC method in the "code deep down" of Mlisp would be affected. It's hard to know where to start. At least I know from the many code examples I've read not to mix continuations and parallel threads, for now.


Jan 26, 2023

The Beauty of Implementation

I'm learning that following a beautiful ideal is a really subjective fault. I just reread AIM-454, "The Incremental Garbage Collection of Processes" by Henry Baker and Carl Hewitt, written in 1977. Because of the way Intel and AMD processors are evolving with multiple cores, the ideal, beautiful way of implementing parallel threads is inefficient and complicated. I have to step back a little.

So I think the way to go for Mlisp is to use a "primary" thread for GC, and other threads for processing the application functions. It's the way most GCs, by necessity, are designed today. This gets me down a little, but some good solution will eventually come along.


Jan 6, 2023

Futures and Parallel Processing

A continuation is a context switch as a temporal event. A continuation naturally introduces the computation to parallel processing by enabling the context in which the code is executing to split into two parts. Then these two parts (or threads) can execute at the same time.

The continuation is executed in the future; hence this split execution form or state is called a "future". The result of this computation, which occurs in the future with respect to the continuation, is called a "promise". Rather cute names for describing the state of a computation.

If there is a natural way to do parallel threading without really convoluting garbage collection, it will greatly ease the pain of developing Mlisp.


Nov 27, 2022

It Was Just A Dream

Ten years ago I dreamed of designing a neuronal network based on parallel string matching. I called the project Kali, after the Hindu goddess of ephemeral energy. One of the Lisp programs I studied then was Daniel Bobrow's Meteor [ref: LispBook_Apr66.pdf]. Meteor is a program which sort of preceded Charles Forgy's OPS5. Meteor used Victor Yngve's COMIT system [ref: MT-1958-Yngve.pdf].

The collection of programs or "program sketches" in Kali is based on string-matching objects using frames and slots. Meteor's pattern matching algorithm fits a frames approach because a word pattern is of the form (element_1 / atom_0 atom_1 ...). The pattern matching strategy in Meteor mirrors the concepts used in COMIT, conceived by Victor Yngve (1958).

I've been thinking about using the source code of Frulekit, written by Peter Shell, to design a UPS version 2, but it uses constructs which are inefficient, e.g., several very long defstructs, and too much confusing, archaic Common Lisp code. On the other hand, it has a clear design very close to the formalism in COMIT.

The Lisp source code for Meteor is in Appendix 1 of LispBook_Apr66.pdf. Meteor's Lisp code is very stripped down, and too sketchy for use in a real production system. But that makes Meteor an excellent place to start designing UPS2 in a modern Lisp style. For a long while, ten years, this has been just a dream.


Nov 22, 2022

The Importance of Memoization in Recursive Functions

I've recently been thinking about rewriting the production system Soar in Lisp. The old Lisp version of Soar is version 5. There are some OPS5 skeletons in it, but its evolution from OPS5 is messy; not worth the investment to fix, in my opinion.

The core of OPS5 and Soar is Charles Forgy's Rete algorithm. So in order to rewrite Soar, I was thinking about doing a clean implementation of it. Furthermore, I think I never really had a genuine feel for the complex Rete algorithm, so if I implemented a version of it myself, I'd be better off.

In "Production Matching for Large Learning Systems" by Robert Doorenbos (1995), page 12, paragraph 1, the author mentions that one of the most important features to fasten Rete is "state-saving". Another feature is "node-sharing" between productions. This is all classic memoization - a method to cache code in recursive functions. The price for conceptual simplicity is program code complexity. But I think the overall program code complexity will be less than what exists in the old Soar-5's Lisp code.

So I'm dreaming about a possibly fantastic code speed-up in Soar if I can get through memoizing the production system algorithm in an efficient way.


Updated: Sep 22, 2023
Aug 23, 2022

Mlisp Scoping Rules for the Set Operator

From the start of developing Mlisp, I was never sure how I should define the set operators. Common Lisp makes the most sense to me on the usage of set operators for initializing variables, i.e., the "defconstant" and "defparameter" operators. These operators are consistent with the notion of using dynamic or global variables at the top level.

In Scheme, any variable is initialized with the operator "define". The operator "set!" is supposed to modify an existing variable. Since Scheme is lexically scoped, the notion of a dynamic or global variable does not really exist. Yet you can sort of define a global variable in Scheme by defining it at the top level. You can change a variable defined at the top level by using the "set!" operator at the top level, or at the procedural level below it. (Also note Scheme's "make-parameter", which defines a parameter at the top level.)
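
For example (a minimal sketch that runs in Chez or Racket):

-------------------------------------------------------
(define verbose? (make-parameter #f))

(verbose?)                        ; => #f at top level
(parameterize ((verbose? #t))     ; rebound only within this dynamic extent
   (verbose?))                    ; => #t
(verbose?)                        ; => #f again
-------------------------------------------------------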

The set operators in Common Lisp and Scheme are consistent if you don't try to mix these features from both languages. But after all these years of mixing features of Common Lisp and Scheme, I've come to the conclusion that you cannot define consistent set operators in a simple way using the scoping rules of Mlisp.

After trying to use Chez Scheme a little more, I noticed that the "set!" operator can initialize a variable inside a "let" statement. For example,

   ----------------------------------------------
    (let ()
      (set! x 1)     ; set closure inside a let
      (printl x))
   ----------------------------------------------
works.

Mlisp uses the following set operators:

    set  (like define in Scheme)
    set* (like defparameter in CL)
    set+ (like defconstant in CL)
    set% (like defvar in CL)
    set! (like setq and setf for top-level global variables in CL)
    set= (like setq for local lexical variables in CL)
  
Mlisp's set operators sort of work, but they have not been perfected yet. In Chez Scheme, I've defined the operators above as follows:
    (macro set+ (a . b) `(set ,a ,@b))      ; defconstant
    (macro set* (a . b) `(set ,a ,@b))      ; defparameter
    (macro set% (a . b) `(if (top-level-bound? ',a) ; defvar
                             (warning "set%" "bound variable unchanged" ',a)
                             (set! ,a ,@b)))
    (macro set= (a . b) `(if (top-level-bound? ',a)
                             (error "set=" "top-level bounded variable" ',a)
                             (set! ,a ,@b)))
    (macro set*! (a . b) `(if (top-level-bound? ',a)
                              (set! ,a ,@b)
                              (error "set*!" "unbounded variable" ',a)))
  
The operators set%, set=, and set*! check whether the variable being set is bound. This check might be inefficient inside a long loop, so using plain set! might be better if you're satisfied that the variables are bound correctly.
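
A usage sketch under the definitions above (the variable names are just for illustration):

-------------------------------------------------------
(set% *limit* 100)    ; like defvar: binds *limit* since it's unbound
(set% *limit* 200)    ; already bound, so it warns; *limit* stays 100
(set*! *limit* 200)   ; rebinds the existing top-level variable to 200
(set= temp 5)         ; errors if temp is already bound at top level
-------------------------------------------------------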

Since I started programming in Common Lisp and learned Scheme later, I'm inclined (or prejudiced) toward the Common Lisp usage of dynamic variables. In some ways, Common Lisp's way of defining a variable is, in general, simpler, and maybe cleaner.


Jan 9, 2022

Continuations and the Unfolding of State

Continuations occur in program execution when the context for evaluation changes.


Dec 29, 2021

XLisp with Continuations was Truly Experimental

The way David Betz wrote Xscheme in 1988 was remarkable because of the simplicity and boldness of its implementation, based on the concept of full continuations. Given all the possible traps, Xscheme works surprisingly well without seg-faulting.

Xscheme's scoping rules are those used in Common Lisp. So Scheme procedures or operators like define and set! do not work in Xscheme's (and hence Mlisp's) let procedure. This is part of the evolutionary problem in Mlisp's future.

There's an article written by Duba and Felleisen in 1991 based on Alan Bawden's note that Scheme's letrec and define procedures are not a "purely declarative facility". The authors mention that Chez Scheme implemented a fix for this which was a "non-RRRS" feature. Chez Scheme is the Scheme I've decided to use going forward, after completing the review of the Schemes I've used in the past, described below.


Updated: Feb 19, 2023
Updated: Feb 7, 2023
Updated: Nov 26, 2021
Nov 21, 2021

Lisp with Threads

What impresses me about the results of the benchmark below is that SBCL is fantastic at numerical processing. Chez is nice to program in, too.


ref: from the benchmarks in directory mlisp/tests/threads/btree/Readme
2021-12-15
This btree benchmark, which was derived from gcbench, is a good
general test of the lisp implementation's garbage collection.
The processor (Intel i5-3330) has 4 cores.  The routine btree.l has
basically no type optimizations.

                  chez    sbcl      mit-scheme   ecl      gcc

btree.l (21)      25s     25s       25s          121s     44s
        (21)              11.6s**
        (21)      35s*    9s***                  42s^**   9s*^
 
*   btree-chez-2.l   - chez native pthreads
**  btree-sbcl.ls    - concurrent/pthreads with option
                       "--dynamic-space-size 16384"
*** btree-sbcl-2.ls  - with option  "--dynamic-space-size 16384"
^** btree-ecl-2.ls
*^  btree-gcc-2.c    - gcc with Apache apr thread pool (openmp)

Less numerically intensive version (better for testing just GC)

                    chez    sbcl    ecl

btree-3.l (21)      7.6s*   1.9s    8.9s**    


*   btree-chez-3.l  (the time for btree-3.l (21) is 15s, so using Chez's
                    parallel threads speeds up performance by 2x
                    for this test)
**  btree-ecl-3.ls



Oct 19, 2021

Replaced Mlisp-0.7u with Nlisp-0.4l

Deprecated the xlisp-3 version of Mlisp in favor of the simpler code base from Nlisp. There's only one version of Mlisp now. A future change to Mlisp will be a change of the stack structure to use a segmented call stack (ref: "Representing Control in the Presence of First-Class Continuations" by Hieb, Dybvig and Bruggeman, 1990). JIT development will continue on Mlisp.


Oct 14, 2021

Performance Benchmarks on Nlisp

So far, I've studied the tutorial on creating a toy interpreter using libgccjit, and the examples for GNU Lightning and libjit. Libjit abstracts away the little details of assembly coding.

The Boyer and Btree (aka GCBench) benchmarks reveal that nlisp spends up to 80% of its time in the garbage collector (GC) under heavy loads.

In the btree.l ("binarytrees") test, (nlisp -L btree.l 16), the profiler shows nlisp spends about 25% of its time in the GC. For larger tree traversals, (nlisp btree.l 17), the profiler shows nlisp spends greater than 80% of its time in the GC. This degradation of performance is due to the GC repeatedly trying to allocate more memory, and continually traversing long trees to mark each accessible cell. As each cell segment gets larger, it takes exponentially longer to complete a sweep of the unmarked cells. The compaction of vector segments did not impact performance much; it took only 2% of the running time.


Sept 29, 2021

A Little Review on Implementing JIT for Mlisp

Now I seem to be sliding backwards while trying to move forwards with a design for Mlisp. What's becoming clear to me is that I cannot develop a JIT without a virtual machine. That's the reason I gave up on the old nlisp, which had no bytecode compiler. It would be hard to come up with a simple way to write advanced procedures without embedding sub-procedures or subroutines within higher-level procedures. For example, recursion fits within an execution model using the virtual machine. The lambda function defines a way to build procedures up from the integration of smaller sub-procedures. It's almost as if a fuller lambda expansion necessarily requires the dispatching function of a virtual machine.
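
A toy sketch of what I mean by the dispatching function: a loop that fetches an opcode and dispatches on it (the opcodes here are hypothetical, not Mlisp's actual bytecode):

-------------------------------------------------------
;; a toy stack-machine dispatch loop
(define (run code)
   (let loop ((pc 0) (stack '()))
      (if (= pc (vector-length code))
          (car stack)                       ; result is on top of the stack
          (let ((op (vector-ref code pc)))
             (case (car op)
                ((push) (loop (+ pc 1) (cons (cadr op) stack)))
                ((add)  (loop (+ pc 1) (cons (+ (car stack) (cadr stack))
                                             (cddr stack))))
                (else (error 'run "unknown opcode" op)))))))

(run '#((push 2) (push 3) (add)))   ; => 5
-------------------------------------------------------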

And that's the machine. That's what all the little parentheses, e.g., (*(!*(%(&^(&^%$(**)*&^).....)))), are about. It might seem obvious in retrospect, but it's necessary to think about now.

David Betz's stack-oriented Xlisp, and hence Mlisp, have a lot in common with Maclisp. Maclisp was an ancestor of Scott Fahlman's Spice Lisp, which led to CMUCL and SBCL. I spent some time reading the old code in CMUCL this weekend. It's amazing software considering it was written in the early 1980s, when the C language was still not in widespread use. CMUCL was designed for special computers with their own microcode. The more modern CMUCL virtual machine was written in C, with a design that is rather the standard way VMs for Lisp are written today.

What's incredible to me is that CMUCL's compiler uses an "implicit continuation representation" (ref: Design of CMU Common Lisp). There's a well-written story on this in Gabriel and Steele's "Evolution of Lisp", section 2.8, Scheme: 1975-1980. Scheme's continuation model can actually simplify the construction of Lisp compilers. My interpretation of this is that "context", especially temporal, time-ordered context (aka continuations), is a necessary requirement for a descriptive language. Continuations are analogous to an operating system's context switches.
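
A minimal sketch of the idea in Scheme: in continuation-passing style the "what happens next" context becomes an explicit procedure argument k, which is roughly what an implicit continuation representation tracks inside the compiler:

-------------------------------------------------------
;; direct style
(define (f x) (+ 1 (* 2 x)))

;; the same computation in CPS: each step hands its result
;; to an explicit continuation k, making evaluation order explicit
(define (mul-cps a b k) (k (* a b)))
(define (add-cps a b k) (k (+ a b)))
(define (f-cps x k)
   (mul-cps 2 x (lambda (m)
      (add-cps 1 m k))))

(f-cps 10 (lambda (r) r))   ; => 21
-------------------------------------------------------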

CMUCL and SBCL use the ideas of continuations in the compiler, but never implemented continuations in the language itself. This makes sense because in practical applications today, the use of continuations is still not as efficient as using coroutines or threads. The Ruby language implemented continuations, but now makes them available only as an optional library. A practical language like OCaml, unlike SML/NJ, does not have first-class continuations. This perfectly fits the usage of the language. But for writing languages with greater descriptive powers, parallel processing algorithms may be required, due to the associative capabilities needed in the short temporal intervals characteristic of spoken languages.

In the next couple of days I'll probably continue reading "From Folklore to Fact: Comparing Implementations of Stacks and Continuations" and "Implementation Strategies for First-Class Continuations". This is "where the rubber meets the road", as they say. Moving forward on Mlisp's JIT would parallel work like using GNU Lightning to speed up Smalltalk. Trying to create a specific design plan for this has me really absorbed.


Sept 15, 2021

Selecting the JIT Tool

I've been thinking about a JIT tool for developing Mlisp for about a month, and during the last couple of days it's become obvious to me that I should try GNU Lightning first. This is because Lightning was originally designed to speed up Smalltalk's context switching between objects. Smalltalk objects are the counterpart of Lisp procedures with respect to the language's internal virtual machine. Both objects and procedures require the same kind of contextual management of their arguments and values. And in many instances these kinds of language implementations use stacks to keep track of objects or procedures.

Lightning explicitly refers to continuations in its section on setting up procedures (i.e., trampolines, continuations and tail call optimization). This is at the core of what JIT for Mlisp is about.
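
Scheme already has proper tail calls, so a trampoline is redundant at the Scheme level; still, here's a minimal sketch of the control idiom that trampolines implement at the C or machine level (all names made up):

-------------------------------------------------------
;; bounce: run a computation that returns either a final
;; value or a thunk representing the next step
(define (bounce v)
   (if (procedure? v) (bounce (v)) v))

(define (even-n? n) (if (= n 0) #t (lambda () (odd-n? (- n 1)))))
(define (odd-n? n)  (if (= n 0) #f (lambda () (even-n? (- n 1)))))

(bounce (even-n? 100001))   ; => #f, one shallow call per bounce
-------------------------------------------------------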

The challenge for me is to implement JIT efficiently so that Mlisp will run the ctak benchmark, for example, as fast as possible. Most Scheme implementations don't consider fast execution of continuations critical. In my opinion, if you want to create a machine which understands language, with all the complexity of words in context, then fast context switching with continuations is essential.

So I'm focusing my attention on Lightning now, and will focus to a lesser extent on libgccjit or LuaJIT's DynASM. I don't seem to have the time to look into Smalltalk's implementation of Lightning, nor maybe Ruby's JIT. The horizon for languages using objects, like Smalltalk and Ruby, shines bright, but they're intrinsically inefficient because of their exaggerated requirement to explicitly depend on object linkages (in the form of inheritance). But as time passes, the evolution of programming languages will follow Smalltalk's original design concept, and that's the same for Mlisp, in my opinion.


Sept 11, 2021

The Nbody Benchmark

                        (time in seconds)
 # of particles      chez      sbcl      luajit
 
  5000              0.15s      0.14s      0.01s
  500000            1.54s      0.22s      0.16s
  5000000           14.6s      0.98s      1.44s
  50000000           188s      8.14s      14.5s

This is an extremely interesting computationally intensive
benchmark.  SBCL and Luajit were notably robust at 50 million
particles.

The LuaJIT implementation is extremely efficient. Most notable are its small size and simplicity. It's a model implementation of a programming language for Mlisp to try to follow.


Sept 9, 2021

States or Values, Ordered Sequences, and Procedures

The intuitive model I follow for thinking about context switching with continuations comes from temporal sequences describing physical processes. For me, a simple description of a computational process requires keeping track of a procedure's calculation within an "environment" which includes its arguments, and also all of the data or states which can be associated with its arguments. The procedure becomes enfolded in the global data environment, which is the context in which all computation occurs.

It's worth reading formal descriptions of continuations to get a perspective on how wide their applications are. For example, John Reynolds' article "Definitional Interpreters for Higher-order Programming Languages" (1972; republished 1998) is a concrete explanation of continuations, and provides a really good background. The article explains the relationship between "embedding continuations in values" and how procedures can occur as values.

How procedures occur as values in a programming language is a key idea.
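
A tiny illustration: compose takes two procedures and returns a new one, so procedures flow through the program like any other value:

-------------------------------------------------------
(define (compose f g) (lambda (x) (f (g x))))

(define second (compose car cdr))   ; a procedure built from procedures
(second '(1 2 3))                   ; => 2
-------------------------------------------------------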


Sept 3, 2021

Continuations

Conceptually, I like using first-class or full continuations for asynchronous event handling. Using continuations is not hard if you keep things simple. Modelling real-world events requires asynchronicity. Thus, in a rather insane fashion, I'm leveling anything that gets in the way of continuations. This includes exception handling with throw/catch and unwind-protect for now. This doesn't mean I won't go back to using these special operators in some form in the future if there's a simple way to do so. (Ref: Kent Pitman's comments on unwind-protect versus continuations.)


August 25, 2021

Mlisp JIT - Now The Tedious Coding

Moving to add native compilation to Mlisp requires spending more time on the bytecode interpreter. So I'm archiving Nlisp-0.3 for now. The complexity of Mlisp-0.7's coupled continuation/GC code still requires me to strip away a lot of the continuation dispatch structures. So I'm not going to develop JIT for Mlisp-0.7. Instead I'm using the latest Mlisp-0.6 lisp, which I'll call Nlisp-0.4, to develop JIT on. This jumping between lisp versions might seem messy, but there are still too many bugs (and too much complexity) in the continuation/GC part of the current Mlisp-0.7 for me to develop JIT on.

Currently, the opcode tracing in Nlisp-0.4 still has lots of problems. It may take a couple of weeks for me to get this working correctly. In the meantime I've been trying to understand how libgccjit works in gccemacs. Since this JIT is complex and does lots of optimizations, it might be a good tool to look into. The performance gain using libgccjit on emacs is almost 4x (not the 10x boost I was hoping for :-)).

  A good reference is the gccemacs wiki page.
     ___________                        _____________
    |_byte code_| --->  libgccjit ---> |_native code_|


August 16, 2021

The Plan for Mlisp JIT

Mlisp is currently a bytecode compiler. What we need to do is get Mlisp to translate the bytecode to machine code, or "mcode" (versus "bcode"), on the fly. The Mlisp module make_code_string produces what is called a "byte code string" (also commonly referred to as a "code vector") for execution. It was called a code "string" because the Mlisp variable bcode was typed as an unsigned string ("ucstr bcode;") in Mlisp.

So in essence we want to change make_code_string into a make_mcode_string. The final code object for execution would be a machine code string. This means we have to use the x86 assembler for our machine, which runs the Linux operating system. The assembler I'll use is gas. I don't know if all this will work yet because I've never done this before. But I'm laying out a plan to produce machine code vectors. I don't know how far I'll get.


August 13, 2021

JIT for Mlisp

The fast lisp implementations are SBCL, Clozure and Chez. I've never looked at the source code of these implementations because I don't have a background in JIT (just-in-time) compilation. The Chicken Scheme implementation is also fast, but does a static compilation from Scheme to C code. The source code for Scheme2c looked pretty messy to me, and JIT toolkits like libjit seem like they'll be too much work for me to use.

However, I feel that Mlisp might be able to run the CTAK benchmark in the speed range of Chez if it used a JIT to compile its functions or procedures. Mike Pall's LuaJIT project was amazing, since it boosted Lua's speed by an order of magnitude. I'm starting to feel that I should study or use the methodology of LuaJIT to increase Mlisp's execution speed.

Mlisp is a compact bytecode implementation originally written by David Betz. The theoretical design philosophy is essentially very simple, but its implementation is complicated because its component parts, like context execution (continuations), code evaluation and garbage collection, are really tightly coupled together. But I feel there's essential beauty and capability in Mlisp which makes it deserve a JIT compiler for its procedures. So in the coming months or years, I might be working towards this goal. I've not spent much time on this project lately, but getting this done will require it. And I am interested in doing it.


August 5, 2021

I've hit a wall trying to work with the continuation dispatch table, so for now I'm putting it aside. I'll probably not go back to it since, for me, its design is too complex. I'm continuing in the direction of using the "Context" data structure and a plain "Jmpbuf" to do the context switching.


July 22, 2021

mlisp-0.6q is deprecated and archived. nLisp (David Betz's xlisp-0.17, which did not have a bytecode compiler) uses the same basic structures as mLisp; however, it does not have continuations. The old mxLisp, which grew out of xlisp-3, does a really remarkable job of implementing continuations. So the path ahead looks like I'll be trying to merge the execution-context flavor of code from xlisp-2 with the continuations in xlisp-3. Currently, as in the past, I am sort of inspired by the elegance of David Betz's implementation of continuations.


April 21, 2021

mxLisp is being deprecated. I'm stopping work on mxLisp because I'm starting to understand its code better now. I can see the evolution of mLisp into mxLisp (David Betz's xscheme-0.28, which evolved into xlisp-3.0) a lot clearer now than I did a few years ago.

Moreover, since I'm also seeing how nLisp (David Betz's xlisp-0.17, which did not have a bytecode compiler) uses the same basic structures as mLisp, I've been able to understand better how the bytecode compiler was constructed.

I don't want to maintain three sets of lisps, so I will focus on nLisp, which has a simpler code base but still does not have continuations. I think adding them is the best and most challenging part of designing nLisp. I hope this can happen soon, within the next year. "Stay tuned."