Concepts

A thought is created by the orderly firing of neurons in sequence and in parallel synchronization, forming a coherent synaptic wave of energy. It is this neural energy, released by the interaction of electrochemical synapses, that gives rise to the currents of awareness we feel in our minds.
While building this model, we consider what is passed to a neuron from its neighbors, and then what information this neuron will pass along to other neighboring neurons. We increase the complexity of this model by creating circuit loops, and by assigning numbers or abstract symbols to what each neuron produces as it fires. The abstract model is based on a hierarchy of objects: the lowest-level object is the neuron, followed by the cluster, the layer, and the network. The details of these objects appear in the code that follows in our computer programs for "synap".

Information Producing Code

So here we end our rather speculative discussion of the mind and move on to programming a computer. Here I have to reflect back on what I said with real humility: my knowledge of neural computation is limited. I have to move forward on the programming with humility as well, because my skills are limited. This is complex material, so if you are not an experienced programmer it is enough to start by getting an intuitive feel for it. We all start slowly and learn from experience and trial and error.

We can start by constructing objects with production rules representing a context-free grammar (CFG). These production rules are represented as simple lists. The head of the list represents the left-hand side (LHS) of the rule; the rest of the list's body makes up the right-hand side (RHS). The geometric ideas behind rewriting rule sets are easy to visualize.

Explicit time dependency appears in the neural computation model when I consider using state machines to describe transitions between neural states. The recurrent architecture, which feeds signals back through the propagation loop, allows time dependency implicitly. Using this approach has failed for me in the past because my programs were oversimplified. Nevertheless, the system must use the states of the past, that is, "past memories", in the propagation loop.
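To make the list representation concrete, here is a minimal sketch of production rules as lists, with the head of each list as the LHS and the rest as the RHS. The rule set and the rewrite() helper are my own illustration, assumed for this example; they are not taken from the actual "synap" code.

```python
# Each rule is a simple list: the head is the LHS symbol, the rest is the RHS.
# These grammar symbols are illustrative, not from "synap".
RULES = [
    ["S", "NP", "VP"],      # S  -> NP VP
    ["NP", "det", "noun"],  # NP -> det noun
    ["VP", "verb", "NP"],   # VP -> verb NP
]

def rewrite(symbols, rules):
    """Replace the first symbol that matches a rule head (LHS) with that rule's RHS."""
    for i, sym in enumerate(symbols):
        for rule in rules:
            if rule[0] == sym:  # head of the list is the LHS
                return symbols[:i] + rule[1:] + symbols[i + 1:]
    return symbols  # nothing left to rewrite

# Expand the start symbol until only terminals remain.
sentence = ["S"]
while True:
    expanded = rewrite(sentence, RULES)
    if expanded == sentence:
        break
    sentence = expanded

print(sentence)  # -> ['det', 'noun', 'verb', 'det', 'noun']
```

The rewriting here is sequential (one production fires per step), which keeps the geometry of the rewrite easy to trace by hand.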
The complexity of this method has always been the biggest barrier. I used to make excuses, like not having enough processing speed, but real neural circuits get through this limitation. Adding the capability of parsing context-free grammars to neural clusters requires parallel processing. When you lay the recursive computation of rewriting production rules on top of many sets of neural layers, you essentially have to simulate the firing of many productions simultaneously.

There is, I believe, a real limit on the information processing capacity of an individual neuron within a short time period, say one-tenth of a second. This limit inherently affects how we can describe a small set of neurons working together to compute a problem. I think we can best describe neural computation for a small group of neurons by using the wave packet description of neural signals. Then we can extrapolate that description to larger sets of circuit clusters by using recursive methods.

The formulation of these problems is only an intuitive guess for me at the moment, as I try to cosy up to Nature. I have tried to get closer to explaining what I think thoughts look like in the context of the software I am trying to develop. I will cover a little more on how our thoughts emerge like real [1] waves in the next section, and in the section on sequences I will start exploring the "deep mechanisms" of using software algorithms for pattern recognition applications.
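The idea of many productions firing simultaneously can be sketched with a parallel rewriting step, in which every symbol that matches a rule is rewritten in the same tick. This is a hedged sketch in the style of a parallel rewriting system, assumed for illustration; the rule set and the parallel_step() helper are my own names, not "synap" code.

```python
# Every symbol with a matching rule "fires" in the same step, in parallel.
# The two-symbol grammar below is illustrative only.
RULES = {
    "A": ["A", "B"],  # A -> A B
    "B": ["A"],       # B -> A
}

def parallel_step(symbols, rules):
    """Apply every applicable production simultaneously in one tick."""
    out = []
    for sym in symbols:
        out.extend(rules.get(sym, [sym]))  # unmatched symbols pass through
    return out

state = ["A"]
for _ in range(4):
    state = parallel_step(state, RULES)

print("".join(state))  # -> "ABAABABA" after 4 parallel steps
```

Contrast this with the sequential rewriter: here the whole state advances in lockstep each tick, which is closer in spirit to many neural productions firing at once within one short time window.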
1. Waves of energy are all around us. It's not an imaginary metaphor.
