Rock Garden, Ryoan-ji: japan-guide.com
2020-1-31
Computation
Computation by Term Rewriting
Let me define computation from a rather confined or narrow viewpoint.
This is a physical view of computation in which calculations take
place to either increase or decrease order. The "control" of order
is achieved through the expenditure of energy or time in the physical
computation.
Computation is a physical process that depends on the reduction of
physical states by some force of physical action. The physical law
of entropy provides the basis for enabling ordered computation to
occur.
Let me, again, define computation specifically in a digital computer
from an even more confined viewpoint. Computation in a digital
computer is a microcosm of entropic processes in which there exists
a relationship between the state variables of the processes. This
is a key idea. In physics, you can define relationships between
the variables used in the description of the process.
Now, in the world of digital computation, you can do almost anything
your imagination can program. So you can do what "pure" functional
programmers have done to simplify and efficiently optimize their
code. By defining "closed monoidal" structures or categories, they
have systematized a way of using graph reduction, or rewriting, to
do their programming.
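To make this concrete, here is a minimal sketch, in Haskell, of what
rewriting a term down to a normal form looks like. The toy arithmetic
language and the names Term, step, and normalize are my own invention;
a real graph-reduction engine is far more elaborate.

    -- A minimal sketch of term rewriting over a toy arithmetic language.
    data Term = Lit Int | Add Term Term | Mul Term Term
      deriving (Show, Eq)

    -- One rewrite step: apply the first rule that matches, searching
    -- the term from the top down and left to right.
    step :: Term -> Maybe Term
    step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
    step (Mul (Lit a) (Lit b)) = Just (Lit (a * b))
    step (Add a b) = case step a of
      Just a' -> Just (Add a' b)
      Nothing -> Add a <$> step b
    step (Mul a b) = case step a of
      Just a' -> Just (Mul a' b)
      Nothing -> Mul a <$> step b
    step _ = Nothing

    -- Rewrite until no rule applies: the normal form.
    normalize :: Term -> Term
    normalize t = maybe t normalize (step t)

    -- normalize (Add (Lit 1) (Mul (Lit 2) (Lit 3)))  ==>  Lit 7

The point is only that the computation proceeds by repeatedly
substituting a simpler description for the current one until no rule
applies.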
The pattern presented in this article on digital computation is that
computation is fundamentally about constructing and deconstructing
the description of processes. It is basically a mechanistic method of
simplifying a description by creating order or by tending towards
"negative" order (disorder).
Whenever you're creating a comprehensive description of a process, you
might try using some category theory. [1] Programmers building a
specialized database might design the framework's foundation using
category theory. The Indian grammarian Panini worked on a kind
of category theory that's closely related to programming a substitution
grammar. Panini's Karaka theory [2] is also like a logical foundation
or ontology for his substitution grammar. This makes me approach
Panini's grammar as a meta-programming functional language, because
it is intrinsically transformation-oriented. In the Lisp programming
language this orientation to perform transformations in functions
takes the form of side-effects. [3]
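To picture what I mean by a substitution grammar being
transformation-oriented, here is a purely illustrative Haskell sketch.
The symbols and the single rule are invented for illustration; they
are not Panini's actual sutras.

    -- A toy substitution rule in the spirit of a rewriting grammar:
    -- rewrite "a" followed by "i" into "e" wherever the pair occurs,
    -- a stand-in for a sandhi-style substitution.
    type Symbol = String

    sandhi :: [Symbol] -> [Symbol]
    sandhi ("a" : "i" : rest) = "e" : sandhi rest
    sandhi (s : rest)         = s : sandhi rest
    sandhi []                 = []

    -- sandhi ["k","a","i","t"]  ==>  ["k","e","t"]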
Substitution grammar consists of term rewriting elements. More generally
speaking, computation consists of (1) the digital computer machine, M,
(2) the software, S, which includes the language interpreter and the
application program, and (3) the external agent, environment, or observer, O,
interacting with the machine, M, and the software, S. So the system's
description, D, is composed of the set of elements {M, S, O}. These
elements are dependent on each other in terms of state variables in the
observer's environment, O. The substitution grammar must carry state
information in state variables within the context of the observer, O.
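Schematically, and only as a hypothetical encoding of my own, the
description D can be pictured in Haskell as a record whose three parts
each contribute state variables:

    -- A schematic, hypothetical encoding of the description D = {M, S, O}.
    data Machine  = Machine  { registers :: [Int] }           deriving Show
    data Software = Software { program   :: [String] }        deriving Show
    data Observer = Observer { stateVars :: [(String, Int)] } deriving Show

    data Description = Description
      { machine  :: Machine
      , software :: Software
      , observer :: Observer
      } deriving Show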
Ontologically, in the vocabulary of the metaworld, a substitution grammar
consists of "content" and "form", or, respectively, "constituent components"
and "composition rules"; parts and relationships; variables and functions;
or terms and rules. [4] It's generally true that the greater the number of
terms, the smaller the number of relational rules between terms. [5]
Term rewriting is essentially a general programming methodology (on the
verge of being extreme) which inherently uses a reductionist algorithm. [6]
The principal problem with a reductionist methodology is that you lose
the efficiency of understanding the description, D. That is, in terms
of entropy, you tend toward too much disorder and away from information
compression. In human terms, you lose the aesthetic feeling of beauty
in the description.
References
[1] John Baez and Mike Stay, "Physics, Topology, Logic and Computation: A Rosetta Stone", 2009, arXiv:0903.0340v3.
This is one of those encyclopedic articles that's good to read more than
once. It covers Feynman diagrams, which are used in string theory. But
what's relevant is the coverage on category theory, a little Heyting
algebra, and especially on section 4 about computation. Section 4.3
on linear type theories ends with a discussion on term rewriting.
Most of the formal theoretical stuff in this article is way above my head.
I try to intuitively sense what might be of value to me. I know there's
a treasure here because of the integrative nature of this article.
[2] In Karaka the verb is of primary importance in a phrase. The noun
or subject alone is secondary. Karaka, or more generally Sanskrit, is
a functional language. (In the mystical eastern traditions including
Zen, the seer disappears inside of action or energy. The attributes
related to the subject are merged or dissolved away into a singular kind
of action or movement which attains primacy in the mind.)
[3] Side-effects tend to be explicitly controlled in "pure" functional
languages like Miranda and Haskell.
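As a small hypothetical example, in Haskell the control is visible in
the types: a side-effecting function carries IO in its type, while a
pure transformation does not.

    import Data.Char (toUpper)

    -- A pure transformation: no side effects, none possible.
    shout :: String -> String
    shout = map toUpper

    -- A side-effecting action: the IO in the type makes the effect explicit.
    greet :: String -> IO ()
    greet name = putStrLn (shout ("hello, " ++ name))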
[4] Ibid [1] (John Baez and Mike Stay), Table 1, Rosetta Stone.

In term rewriting the object is the term and the morphism is the rule.
[5] Satosi Watanabe, "Pattern Recognition", 1985, p. 393
In general it is true that the larger the number of components,
the smaller the number of composition rules. But, there is an
optimum, which is obviously dictated by the total memory
capacity we need to store the entire general laws, apart from
the requirement that the entire theoretical structure be elegant
and simple. The observer, on the other hand, has to start his
operation by providing the specific data. The first task in
this process is to identify the components, and the next is to
find out how these components are put together. This second
part is already the beginning of the "parsing" process, which
consists of assigning appropriate composition rules to this
group of components.
It is significant to note that both Satosi Watanabe and Alan Bawden
(AIM-WP-311) refer to the notion of the "external" environment of
the description as the "observer".
[6] Ibid [1] (John Baez and Mike Stay)
The idea of a combinator is very old: in fact, it predates the
lambda calculus. Combinatory logic was born in a 1924 paper by
Schönfinkel, and was rediscovered and extensively developed by Curry
starting in 1927. In retrospect, we can see their work as a
stripped-down version of the untyped lambda calculus that completely
avoids the use of variables. Starting from a basic stock of terms
called 'combinators', the only way to build new ones is
application: we can apply any term f to any term t and
get a term f(t).
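As a companion to this quote, here is a small hypothetical Haskell
sketch of combinatory terms built only by application, reduced by the
two classic rewrite rules for K and S.

    data Comb = S | K | App Comb Comb
      deriving (Show, Eq)

    -- One rewrite step: apply the first rule that matches, searching
    -- the term from the top down and left to right.
    reduce :: Comb -> Maybe Comb
    reduce (App (App K x) _)         = Just x                          -- K x y   ->  x
    reduce (App (App (App S f) g) x) = Just (App (App f x) (App g x))  -- S f g x ->  f x (g x)
    reduce (App f x) = case reduce f of
      Just f' -> Just (App f' x)
      Nothing -> App f <$> reduce x
    reduce _ = Nothing

    -- Rewrite until no redex remains.
    normal :: Comb -> Comb
    normal t = maybe t normal (reduce t)

    -- normal (App (App (App S K) K) K)  ==>  K   (S K K acts as the identity)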