Computation
A neural network is made up of synchronous, communicating neurons.
These neural elements can be abstracted using Turing's automata.
Computation of information is a dynamic, time-dependent process.
The neural machines must pass state information embedded in the
temporal code from one automaton to the next. The temporal code
resides in the frequency and duration of the synapses, and
these synapses occur in waves across the layers of neurons.
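To make this concrete, here is a toy sketch in Python (my own illustration, with made-up states, thresholds, and inter-spike intervals): each automaton in a small chain updates its state from the temporal code it receives and hands a modified code to the next one.

    from dataclasses import dataclass, field

    # Toy communicating automaton: its state is updated from the temporal
    # code (here, just inter-spike intervals) and then handed downstream.
    @dataclass
    class NeuronAutomaton:
        name: str
        state: str = "rest"
        log: list = field(default_factory=list)

        def step(self, interval_ms: float) -> str:
            # Made-up rule: short intervals (high frequency) drive the
            # automaton "active", long intervals relax it back to "rest".
            self.state = "active" if interval_ms < 10.0 else "rest"
            self.log.append(self.state)
            return self.state

    chain = [NeuronAutomaton("n1"), NeuronAutomaton("n2"), NeuronAutomaton("n3")]
    intervals = [4.0, 6.0, 25.0, 3.0, 40.0]   # hypothetical inter-spike intervals (ms)

    for dt in intervals:
        code = dt
        for neuron in chain:
            state = neuron.step(code)
            # Pass a shortened interval downstream while this automaton is
            # active, and a lengthened one while it is at rest.
            code = code * (0.5 if state == "active" else 2.0)

    print([n.log for n in chain])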
To really understand how neural computation occurs, we must study the
essential components that make up our neural model. The neural model
of computation is based on how we intuitively think the brain processes
information. However, we don't know how close we are to actually
understanding how the brain's neurons generate information as a global
network that makes up our consciousness or awareness.
To build a neural network, I used to think that you create the standard
neural layers and propagate a wave of synapses through these layers.
You let the neural network determine how information is formed
or created through the neural layers by letting the neurons in those
layers generate synchronous fire paths. But of course, I
now know, and really appreciate, that this is a simplistic view.
In building recurrent networks for example, I had failed to think
enough about how I was losing state information using weight matrices.
The weight matrices did not contain enough detail to describe
the evolution of each state as a function of time, so I was losing
information as I let the state propagate in time. Using weight matrices
was fine for describing patterns that do not change in time, but it was a
stone mountain blocking any development in understanding dynamic patterns.
Using weight matrices in the standard recurrent neural networks destroys
state information which is "of the essence" in automata theory.
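For concreteness, here is a minimal sketch in Python of the kind of weight-matrix recurrence I mean (the dimensions and random weights are purely illustrative): the entire history available at time t is squeezed into one fixed-size state vector, so whatever the matrices cannot encode about the earlier trajectory is simply gone.

    import numpy as np

    # Standard first-order recurrent update: the whole history is compressed
    # into the fixed-size state vector h via the weight matrices W and U.
    def rnn_step(h, x, W, U, b):
        return np.tanh(W @ h + U @ x + b)

    rng = np.random.default_rng(0)
    state_dim, input_dim = 4, 3
    W = rng.normal(scale=0.5, size=(state_dim, state_dim))
    U = rng.normal(scale=0.5, size=(state_dim, input_dim))
    b = np.zeros(state_dim)

    h = np.zeros(state_dim)                # initial state
    for t in range(10):                    # a short input sequence
        x = rng.normal(size=input_dim)     # stand-in for a sampled signal
        h = rnn_step(h, x, W, U, b)        # depends only on the previous h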
I remember reading articles that said recurrent neural networks were
Turing complete. It's easy to say yes to this in theory, but not in
practice. Recurrent neural networks in the 1980s were built
using weight matrices. All the great feed-forward, recurrent
multi-layered networks used weight matrices. The initial success
of these networks in the early 1980s from applications in visual
pattern recognition had led the amiable Tom J. Schwarz to write in
Time magazine that "Neural networks are going to be the steam engines
of the 21st century." But in practice, the old recurrent network software
was not Turing complete.
I abandoned development using matrix-driven networks after 2001, quite
a while ago now. The basic I/O model of the "black box" matrix is not
appropriate for modelling neurons in chains or circuits. The most
outstanding behavior of neurons is how they synchronize themselves
together inside of parallel circuits. The matrix model does not
intrinsically allow you to capture this effect. But I have not been
able to eliminate matrix methods from my software toolbox because
they are so powerful at pattern recognition for static,
time-independent processes. And I think, since this computational
technique is so efficient, that the brain must use some form of global
array association for most image recognition problems.
A very successful state processor used in speech recognition, for
example, executes the hidden Markov model algorithm. Markov chains
depend on the state information from each previous time sample. The
Markov chains I use now are embedded with the wave packet description
of the synapse. For me, the wave packet model is more intuitive to
use than the first-order hidden Markov model.
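As a toy illustration of what first-order means here (my own example, not the speech-recognition pipeline itself): the distribution over the next state depends only on the single previous state, so everything earlier is forgotten unless it has been folded into that state.

    import numpy as np

    # Toy first-order Markov chain: the next state depends only on the
    # current state, through the transition matrix T.
    states = ["quiet", "bursting", "refractory"]   # hypothetical neuron modes
    T = np.array([[0.7, 0.2, 0.1],                 # from "quiet"
                  [0.1, 0.6, 0.3],                 # from "bursting"
                  [0.5, 0.1, 0.4]])                # from "refractory"

    rng = np.random.default_rng(1)
    s = 0                                          # start in "quiet"
    trajectory = [states[s]]
    for _ in range(8):
        s = rng.choice(len(states), p=T[s])        # uses only the previous state
        trajectory.append(states[s])
    print(" -> ".join(trajectory))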
The model I've continued to build upon is a combination of Turing's
computational automata, using communicating state machines executing
synchronously, and the wave packet model used in signal processing
applications. I think of synthetic neurons as gates which pass
information in a time-sensitive way. The gate functions because each
neuron, from self-training, has developed a history of learning from
the network. The training process is limited by the amount of information
available to the neuron. Furthermore, the ability of neural circuits
to transfer this information in spike trains is limited by the
uncertainty principle [1].
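For reference, the standard time-frequency statement of this limit is the Gabor relation (whether [1] writes it in exactly this form is my assumption):

    \Delta t \,\Delta f \;\ge\; \frac{1}{4\pi}

where \Delta t and \Delta f are the root-mean-square widths of the signal's energy in time and in frequency; a Gaussian wave packet is the shape that meets this bound with equality.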
I now use wave theory to describe the neural field as an ensemble.
There is a limit to the amount of information contained in each synapse
which can be transferred from neuron to neuron in neural circuits.
This limitation exists because you can only measure energy to a
certain finite accuracy within a finite time period. The best
dynamic, time-dependent description of energy or information we can
write for a single synapse or a spike train is a wave packet [2].
I'm not saying a wave packet contains all the information in
a synapsing neuron that neuro-biologists [3] would envision, but
that the description of the form [4] of the energy produced
by the neuron is innately limited.
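Here is a minimal sketch of what I mean by a wave packet, assuming the simplest Gaussian-windowed form (the carrier frequency and envelope width are illustrative, not physiological): the envelope sets the duration, the carrier sets the dominant frequency, and the product of the two spreads sits essentially at the Gabor limit quoted above.

    import numpy as np

    # Gaussian wave packet: a carrier oscillation at frequency f0 under a
    # Gaussian envelope of width sigma centered at time t0.
    def wave_packet(t, t0=0.05, f0=500.0, sigma=0.010):
        envelope = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
        carrier = np.cos(2 * np.pi * f0 * (t - t0))
        return envelope * carrier

    t = np.linspace(0.0, 0.1, 4000)        # 100 ms on a fine grid (seconds)
    x = wave_packet(t)

    # Root-mean-square width of the packet's energy in time...
    energy = x ** 2
    t_mean = np.sum(t * energy) / np.sum(energy)
    dt = np.sqrt(np.sum((t - t_mean) ** 2 * energy) / np.sum(energy))

    # ...and in frequency.
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    spec = np.abs(X) ** 2
    f_mean = np.sum(f * spec) / np.sum(spec)
    df = np.sqrt(np.sum((f - f_mean) ** 2 * spec) / np.sum(spec))

    print(dt * df, 1 / (4 * np.pi))        # time-bandwidth product vs. Gabor limit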
Computation With Binary Trees
I feel like I should mention some form of concrete computational
algorithm, such as computing with a binary tree. The spike train
produced by a neuron contains an ordered sequence of spikes.
Within this sequence is the computational structure which
the neuron operates with. The generation of spikes within a spike train
can be formulated by ordering the synapses using a binary tree.
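One way this ordering could be realized is sketched below (my own illustration, with hypothetical spike times): the spike times are inserted into a binary search tree keyed on time, so that an in-order traversal recovers the temporal sequence while the tree's shape records how later spikes nest around earlier ones.

    from dataclasses import dataclass
    from typing import Optional

    # Node of a binary search tree keyed on spike time (seconds).
    @dataclass
    class SpikeNode:
        time: float
        left: Optional["SpikeNode"] = None
        right: Optional["SpikeNode"] = None

    def insert(root: Optional[SpikeNode], t: float) -> SpikeNode:
        # Standard BST insertion: earlier spikes go left, later spikes go right.
        if root is None:
            return SpikeNode(t)
        if t < root.time:
            root.left = insert(root.left, t)
        else:
            root.right = insert(root.right, t)
        return root

    def in_order(root: Optional[SpikeNode]) -> list:
        # In-order traversal returns the spikes in temporal order.
        if root is None:
            return []
        return in_order(root.left) + [root.time] + in_order(root.right)

    spike_times = [0.013, 0.002, 0.021, 0.008, 0.017]   # hypothetical spike train
    tree = None
    for t in spike_times:
        tree = insert(tree, t)
    print(in_order(tree))    # [0.002, 0.008, 0.013, 0.017, 0.021]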
A measure of synchronization which physical waves like sound waves possess
is their similarity. Two wave packets, representing spike train signals,
which are similar in temporal duration and frequency spectrum naturally
resonate with each other according to the superposition principle of
wave mechanics. The similarity or correlation of two wave packets can
also be measured in the "algorithms" by the similarity of their ordered
sequences or binary tree structures.
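Here is a small sketch of the signal-side measure, reusing the Gaussian packet form from the earlier sketch; normalized cross-correlation is my choice of similarity measure, not something prescribed above.

    import numpy as np

    def gaussian_packet(t, t0, f0, sigma):
        # Same Gaussian wave packet form as in the earlier sketch.
        return np.exp(-0.5 * ((t - t0) / sigma) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

    def similarity(x, y):
        # Peak of the normalized cross-correlation: close to 1 for packets of
        # the same shape, small for very different duration or frequency.
        x = (x - x.mean()) / (np.linalg.norm(x) + 1e-12)
        y = (y - y.mean()) / (np.linalg.norm(y) + 1e-12)
        return np.max(np.correlate(x, y, mode="full"))

    t = np.linspace(0.0, 0.2, 4000)
    a = gaussian_packet(t, t0=0.05, f0=500.0, sigma=0.010)   # reference packet
    b = gaussian_packet(t, t0=0.12, f0=500.0, sigma=0.010)   # same shape, shifted in time
    c = gaussian_packet(t, t0=0.12, f0=150.0, sigma=0.003)   # different duration and frequency

    print(similarity(a, b))   # close to 1: the packets resonate
    print(similarity(a, c))   # much smaller: a poor match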
References
1. The Limits of Information,
Glenn Takanishi, Nov. 2010
2.
The Gabor (wave packet) transform helps to extract parts of a signal
using a Gaussian window. You can make a collection of signal attributes
by superimposing the Gaussian function over the targeted signal
function. Then you can put this collection of little signals or
wave packets in a dictionary for lookup during pattern recognition.
Mathematically, this is called taking the convolution of the
template Gaussian function with the sampled signal (a short sketch
follows these notes).
3.
In 1999, I attended a course on the software program Neuron given
by Professors Carnevale and Hines. Most of the technical biology
never sank into my head, but I really appreciated the spirit of
the course and congeniality of the teachers.
4.
The term form is generally used to mean shape. This term, along
with structure, is also used in ancient philosophy to connote an
object's essential "being". In the case of a neural spike train or
signal, the information contained in the synapses which is transmitted
to other neurons resides in the duration or width of the synapses.
You can also think of it this way: that the information in a
neuron is in its chirp or musical intonation. The information
in a cluster of neural circuits lies in the musical score written
on the sheet music of the orchestra. Thinking of neurons working together
synchronously places less emphasis on the importance of when each
individual neuron starts a synaptic wave train, and more significance
on neurons working continuously for long periods together. Groups of
neurons in large synchronous circuits can process sensory data more
accurately as a function of time. The control of our muscles to get
temporally significant responses down in the millisecond range,
for example, depends upon neurons working together in a group.
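Here is a minimal sketch of the convolution described in note 2, assuming Gaussian-windowed complex exponentials as the template (the sample rate, frequencies, and window width are illustrative choices of mine):

    import numpy as np

    # Gabor atoms: Gaussian-windowed complex exponentials at a few frequencies.
    def gabor_atom(t, f, sigma):
        return np.exp(-0.5 * (t / sigma) ** 2) * np.exp(2j * np.pi * f * t)

    fs = 4000.0                                  # sample rate (Hz)
    t = np.arange(0.0, 0.5, 1 / fs)
    # Test signal: a 300 Hz burst under a Gaussian envelope centered at 0.25 s.
    signal = np.sin(2 * np.pi * 300 * t) * np.exp(-0.5 * ((t - 0.25) / 0.02) ** 2)

    sigma = 0.02                                 # 20 ms Gaussian window
    t_atom = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    frequencies = [100.0, 300.0, 500.0]

    # Convolving the signal with each atom gives a time course of how strongly
    # that frequency is present under the sliding Gaussian window.
    coefficients = {
        f: np.convolve(signal, np.conj(gabor_atom(t_atom, f, sigma)), mode="same")
        for f in frequencies
    }
    for f, c in coefficients.items():
        print(f, np.abs(c).max())                # the 300 Hz entry should dominate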
next:
Synapses