LINEAR ANALYSIS OF THE DYNAMICS OF NEURAL MASSES

WALTER J. FREEMAN

Reprinted from ANNUAL REVIEW OF BIOPHYSICS AND BIOENGINEERING, Vol. 1, 1972.
This work was supported by NIMH Grant MH 06686.

Creative Commons Copyright: Attribution-Share Alike Licence.
Original PDF file source: http://sulcus.berkeley.edu/FreemanWWW/manuscripts/IA3/72.html

This review is concerned
with the problem of how to identify and characterize masses of neurons
empirically as dynamic entities that have properties related to but distinct
from those of the component neurons. This can only be done by observing and
measuring experimentally the responses of neural masses to known stimuli. But
measurement presupposes some theoretical framework to provide the necessary
basis functions and the units. For example, a value for frequency is predicated
on the presence of a periodic waveform; a rate constant presumes an
exponential decay; a numerical estimate of dispersion requires some
distribution function; and so forth. The purpose of this review is to suggest
that linear systems analysis can provide a very useful structure for measuring
compound neural responses and for comprehending some of the elementary
underlying dynamics. It does not define the entities nor explain what they do
or how they do it. It merely provides the basis for measurement, which is the
first step beyond uncorrelated observations and the prerequisite for testing
theories of neural masses.

WHAT IS A NEURAL MASS?

Granted that
nervous systems are composed of neurons, the most compelling fact about even
the simpler brains is that the numbers of neurons are extremely large. Because
the neuron is conceived as the elementary unit of neural function, and because
it is accessible to measurement by an array of microtechniques, the proper aim
of neurophysiology is the analysis of brain function in terms of the properties
of single neurons. Yet most of what is known about the operation of the brain
in relation to behavior has come from studies based on stimulation (electrical
or chemical), field potential recordings, or ablation (by selective surgery or
disease) of masses of brain tissue containing tens or hundreds of millions of
neurons. The conceptual gap between the functions of single neurons and those
numbers of neurons is still very wide.

TABLE 1. Some examples are listed on a logarithmic scale of estimated numbers
of neurons in some common preparations. From references 1-5 and the author's
unpublished data.

The parts of
nervous systems accessible to experimental observation may be arranged in a
logarithmic scale, as in Table 1, which contains examples of estimated numbers
of neurons in some well-known preparations (1-3). Listed in the lower third of
the table, extending roughly from 10^1 to 10^4
neurons, are some isolatable parts of invertebrate nervous systems, including
the cardiac ganglion of the lobster (2), the eye of the horseshoe crab (5), and
the visceral ganglion of the sea-snail (4). To those might be added the
monosynaptic sensorimotor relays of the vertebrate spinal cord (6). These are
systems for which the relatively small numbers of neurons have offered hope of
analysis and understanding by models based on discrete networks of single
simulated neurons. The upper third
of the scale, 10^7 to 10^11 neurons, is represented by
structures from the vertebrate brain, which on surgical removal or destruction
by disease leave recognizable and reproducible deficits in behavior, such as
anosmia, cortical blindness, hemiplegia, etc (7). Between these
levels is a range of numbers, 10^4 to 10^7
neurons, which are too few to leave notable behavioral deficits on ablation,
unless they comprise a projection pathway such as the optic nerve. On the other
hand the numbers are too great to conceive of modeling in terms of networks of
finite numbers of cells. This is the region over which a conceptual span is
needed, to account for the behaviorally related properties of brains in terms
of single neurons. It is the domain of neural masses.

WHY USE LINEAR SYSTEMS ANALYSIS?

The neural
masses in the central numerical region can be conceived empirically as
occupying a few mm^2 of cortical surface, or a few
mm^3 of nuclear volume in the
brain stem or spinal cord. They have three properties of particular interest in
the present context.

First, the
output of a neural mass is often accessible to measurement as a holistic event,
such as a compound action potential of a nerve trunk, a field of potential in
the volume of the mass owing to the extracellular spread and summation of
dendritic current, or some derivative event such as the strength of a muscle
contraction. For neural masses in the brain the electrical field potentials may
be the single most valuable source of information about their dynamic
properties, because they manifest the weighted instantaneous sum of
extracellular potentials generated by large numbers of neurons. The problems of
determining the locations, distributions, and active states of neurons in the
masses generating such fields have been discussed in numerous articles and
reviews (8-13), and the subject is still grossly underdeveloped. Yet it seems
undeniable that such events, when properly analyzed, provide an essential key
to the understanding of neural masses.

Second, the
output of neural masses characteristically is graded over the physiological
range of function. Output is proportional to input, within limits, and the
responses to two or more inputs are additive. This set of properties was
explored and documented in quantitative detail by Sherrington (14-16) in his
studies of reflex mechanisms in the brain stem and spinal cord, which were the
immediate precursor for the analysis of the synaptic potentials of neurons by
use of the intracellular microelectrode (6, 17, 18). Sherrington described his
principal results as demonstrating the algebraic summation of excitatory and
inhibitory influences in the neurons of the spinal cord. These properties
imply that within appropriate limits of amplitude, the operations of neural
masses conform to the principle of superposition, and to that extent their
dynamics can be described by means of linear differential equations. The
principle has been directly verified for spinal motoneurons by Granit (17) and
is the basis for the quantitative analysis of the dynamics of the array of
receptor neurons comprising the eye of Limulus (19, 20). The relevance of
these properties to some aspects of transmission in the higher nervous systems
has been documented among others by Stark & Sherman (21) in the analysis of
the pupillary reflex as a servomechanism; by Lopes da Silva (22), Cleland &
Enroth-Cugell (23), Maffei (24), and Regan (25) in the analysis of visual
cortical potentials evoked by sine wave modulated light; and by Tielen et al
(26) in the observation of electrical responses of auditory cortex to
sinusoidally modulated sound.

Third, neural
masses are readily accessible to electrical stimulation with extracellular
electrodes on peripheral afferent nerves or in the tracts, nuclei, and cortical
surfaces of the brain. Characteristically such stimulation activates large
numbers of axons in the vicinity of an electrode, i.e., it is a multicellular
input leading to the activation of masses of neurons. Mathematically it can be
approximated by a delta function in the time dimension although not in the
spatial dimensions of the stimulus. This form of activation has the additional
advantage of bypassing the receptors through which sinusoidal stimuli are
delivered (21-26) and for which transfer functions are difficult to obtain. These three
properties form the basis for the expectation that some of the fundamental
characteristics of neural masses might be described in terms of linear systems
analysis. Very simply, the neural mass under study is subjected to an
electrical pulse, which is superimposed on the background activity normally
present in the input pathways to all neural masses. The output field potential,
after suitable averaging to remove the background activity generated by the
mass (comprising the EEG waves of the mass), is treated as the impulse response
of the system. By use of paired
shocks or repetitive trains of stimuli (27, 28) a linear range of function is
defined. In this range the impulse response can be treated as the sum of a set
of terms, which are the solution to a set of linear differential equations
having the boundary conditions corresponding to an impulse input (29). These
terms serve to generate exponential curves and damped sine waves, which become
the basis functions (30) or elementary curves required for measurement. An
appropriate set is chosen by trial and error, and the sum is fitted by use of
nonlinear regression to the digitized averaged evoked potential (AEP) or
impulse response (31). The Laplace transform of the evaluated curve yields the
linear differential equation best describing the dynamics of the neural mass in
the designated experimental state. The combination
of electrical stimulation at some reproducible site with field potential
recording near the center of some evoked activity provides a useful empirical
description of a neural mass. Again, it does not provide a definition; it
provides a platform for observation and measurement. The most effective use of
the approach requires detailed specification of the anatomy of the stimulated
tract and the target mass; of the spatial distribution of the field potential
and its relation to the gross and microscopic structures of the generating
neurons; identification of the neuron types responsible for unit and field
potentials; and so forth. One of the seeming disadvantages of the use of evoked
potentials and linear analysis is the heavy requirement for detailed
correlative field mapping and histological measurement. On the contrary, even
with initially poor specification of stimulus and recording conditions, which
is usually the case in the beginning, it is easy to get immediate results, and
to feed these back into the experiment as a basis for improving the
specification. By successive modifications the initially hazy conception of the
neural mass under study can and should become progressively more sharply
defined and richly detailed.

HOW MIGHT LINEAR ANALYSIS BE APPLIED?

The
derivation of transfer functions in this approach (thus far described) is of
very limited value, because the waveforms of AEPs vary with
changes in stimulus intensity. The amplitudes of responses change in linear
proportion to the input intensity, but the coefficients describing the
frequencies and exponential decay rates of the responses are often exquisitely
sensitive to changes in the input magnitude. The limitation with respect to amplitude is
further reflected in the fact that the modulation amplitude for sinusoidal
stimulation must in most cases be restricted to some fraction of a sustained
"dc" input bias, e.g. on the order of 20% for visual cortex in dogs
in response to sine wave modulated light (22). The limits are similar though
less stringent for retinal ganglion cells and peripheral receptors, e.g. up to
60% for Limulus (20). When the limits are exceeded, harmonics become prominent,
and the frequency-response curves tend to change with amplitude (32). The principal reason for this pervasive
nonlinearity in the function of neural masses lies in the limitations on output
of the neurons in the masses over a range of input amplitudes. Sherrington
described these limits (15) with the terms "facilitation" and "occlusion."
The former was attributed to the threshold of each neuron and implied that two
stimuli together might cause a neuron to discharge, whereas either alone might
not. The latter was attributed to the refractory properties of neurons, such
that if one stimulus caused a neuron to discharge, the addition of another
stimulus too soon after the first would not augment its output. More recently the nonlinearity has been described as
bilateral saturation (22, 32-38). This is a static nonlinearity (35), which can
be described as an amplitude-dependent property of neural masses. Conceptually
the transference of the neural mass can be separated into a linear
frequency-dependent part C(s) and a nonlinear amplitude-dependent part P(V).
The latter function serves to specify the values for the coefficients of the
former over a range of input magnitudes. In this manner, using linear
differential equations having state-dependent coefficients, the linear systems
approach can be extended to cover a broad range of dynamic function of neural
masses, in so far as AEPs and their functional correlates are concerned (28,
39).

Additionally, because neurons in masses communicate within themselves by
dendritic currents (often but not always giving rise to EEG waves and to AEPs)
but between each other by propagated action potentials (spikes or pulses), it
is essential to observe and measure the pulse trains of representative neurons.
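The construction of such a histogram can be sketched in a few lines of code. The spike trains below are synthetic (Poisson background with a brief induced burst after the shock), and the rates, counts, and bin width are illustrative assumptions, not data from the preparations discussed here:

```python
import numpy as np

# Sketch: build a poststimulus time (PST) histogram from repeated trials.
# Spike trains are synthetic; all rates and the bin width are illustrative.
rng = np.random.default_rng(1)
n_trials = 100
window = 0.100                      # observe 100 msec after each shock
bin_w = 0.001                      # 1 msec bins
edges = np.arange(0.0, window + bin_w, bin_w)

counts = np.zeros(len(edges) - 1)
for _ in range(n_trials):
    # background activity at about 10 pulses/sec
    background = rng.uniform(0.0, window, size=rng.poisson(10.0 * window))
    # induced firing concentrated 10-20 msec after the shock
    induced = rng.uniform(0.010, 0.020, size=rng.poisson(2.0))
    spikes = np.concatenate([background, induced])
    counts += np.histogram(spikes, bins=edges)[0]

pst = counts / (n_trials * bin_w)  # pulse density in pulses/sec per bin
peak_time = float(edges[np.argmax(pst)])
```

Averaging over trials converts the sparse pulse trains of single neurons into an estimate of pulse density conditional on time after the stimulus, which is the form in which induced responses are compared with AEPs.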
These are either random pulse trains of background activity or induced
responses to electrical stimulation, which are best observed in the form of
poststimulus time (PST) histograms. In either case, because observation is
restricted to one or at most a few neurons at a time, it is necessary to assume
that the performance of the neuron is representative of others in the mass,
provided the period of observation is long enough. This quasi-ergodic
hypothesis seems to work well enough for purposes of linear analysis, though
how far it can be pushed is unclear. Simultaneously recorded EEG waves and background
pulse trains on the one hand, and AEPs and PST histograms on the other,
constitute the essential raw materials for the analysis of the dynamics of
neural masses.

IN WHAT WAY DOES THE MASS DIFFER FROM THE NEURON?

The neural mass
is composed of neurons, and its properties have a generic resemblance to those
of single neurons. But it is not identical to them, and its properties cannot
be predicted or evaluated wholly from measurements on single neurons. This is
partly because the properties of the mass depend to some extent on
distributions of various parameters of single neurons and partly because the
properties of the mass depend on the massive connectivity of large numbers of
neurons. Both of these facets are largely inaccessible to single-unit analysis.

To illustrate this proposition, examples
will now be given of the calculation of the amplitude-dependent transference
P(V), where V is wave amplitude, and the frequency-dependent transference C(s),
where s is the Laplacian operator, for the neural masses in the olfactory bulb
and cortex of the cat and rabbit. These structures have been thoroughly studied
both anatomically (40, 41) and electrophysiologically (28, 36-39, 42-48) and
have been shown to be well suited to description using linear differential
equations (34, 36, 48).

WHAT IS THE MEANING OF FORWARD GAIN?

Communication
within and between neural masses is assumed here to be solely on the basis of
synaptic transmission. This implies that in general a neural mass receives
impulses of some space-time density and transmits impulses of some differing
space-time density. Forward gain is defined as the ratio of the instantaneous
magnitudes of output and input. Because both take the form of pulse density
functions (pulses/unit time/unit area or volume) the gain is a dimensionless
factor. However, each neural mass converts its pulse density input to a
dendritic current or wave function and reconverts some spatio-temporally
transformed wave function into another pulse density function. The conversion
for each of the two stages must be described, the first in terms of the
magnitude of pulse-to-wave conversion (P-V) and the second in terms of the
magnitude of wave-to-pulse conversion (V-P). The product of the two magnitudes
is the forward gain of the mass. In general there is a central amplitude range
of function for both stages of a neural mass, in which the amplitude of output
is proportional to that of input, and is additive as well, so that the
conversions are linear. If the pulse density functions are estimated from the
pulse frequencies of single-neuron pulse trains in pulses/second (pps), and the
wave function is estimated from the amplitude of the extracellular field
potential in microvolts (µV), then (P-V) and (V-P) conversions can be described
using coefficients in units of µV/pps and pps/µV respectively. The
product of the coefficients is the dimensionless gain. In fact neither
stage is linear over the achievable physiological range. Each conversion
undergoes progressively stronger saturation with increasing departure from the
central part of the range. The effect of saturation is to reduce the gain, so
that forward gain is an amplitude-dependent nonlinearity (36, 38). The
dependency of gain for the impulse response is on the input pulse magnitude or
on the initial response amplitude, and the initial value for the gain holds
throughout the duration of the impulse response. For this reason the
nonlinearity is static and is readily susceptible to piece-wise linear
approximation. It is then feasible to separate the frequency-dependent from the
amplitude-dependent properties, and to describe the former with linear
differential equations in time and the latter with linear differential
equations in amplitude. Owing to the remarkable degree to which superposition
holds in neural masses, these considerations are also valid for
"spontaneous" or background activity. They do not hold for the full
description of seizures or convulsive neural discharges, although the
conditions for instability leading to seizures can be described (36, 48). For (P-V)
conversion the limits are imposed by the ionic mechanisms of dendritic current
(49). The greater the membrane depolarization in response to an excitatory
input, the less the difference between membrane potential and the equilibrium
potential for the excitatory postsynaptic potential (EPSP). This reduces the
effective electromotive force for the dendritic current operating into a high
resistance current path, so the current increment is reduced proportionately
for equal increases in excitatory input. At some level of input, were it
achievable, the current would asymptotically approach a limiting value.[2]

[2: The occasional failure of
superposition of intracellular postsynaptic potentials and the demonstration
that conductance changes in dendritic membrane are the basis for synaptic
potentials (49) has led to the conclusion that variable membrane conductance
can operate as an amplitude-dependent voltage divider for some neuronal
geometries. This nonlinearity would give rise not only to saturation but also
to variable passive membrane rate constants. Reasons have been given elsewhere
(36) for believing that over the "physiological" range of function
these rate constants are invariant. Therefore, it is concluded that even if the
voltage divider effects can be demonstrated to hold for single neurons in some
conditions, they do not account for the saturation effects observed in neural
masses.]

The same
properties hold for inhibition, except that the limit is closer to the resting
state, because the difference between resting membrane potential and the
equilibrium potential for the inhibitory postsynaptic potential (IPSP) is (from
the illustrations cited in reference 49) roughly one-sixth that for the EPSP.
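This asymmetry can be made concrete with a small numerical sketch. The exponential approach to the two limits is an assumed reading of the conversion equations given below, and the constants Vi, rd, and z are taken from values quoted later in this review; the result is an illustration, not a fitted curve from the experiments:

```python
import numpy as np

# Sketch of (P-V) saturation: wave output V approaches the inhibitory
# limit Vi and the excitatory limit Ve exponentially, with the slopes
# matched at V = 0. The functional form and the constants (taken from
# values quoted elsewhere in this review) are illustrative assumptions.
Vi = -138.0            # inhibitory limit in microvolts
rd = 5.7               # asymmetry ratio: |Ve| is roughly six times |Vi|
Ve = -rd * Vi          # excitatory limit implied by slope matching
z = 0.41               # inhibitory-side rate constant in 1/pps

def v_of_p(p, po=10.0):
    """Wave amplitude (microvolts) vs. pulse-density input (pulses/sec)."""
    dp = np.asarray(p, dtype=float) - po
    return np.where(dp < 0.0,
                    Vi * (1.0 - np.exp(z * dp)),           # inhibitory side
                    Ve * (1.0 - np.exp(-(z / rd) * dp)))   # excitatory side

# Equal steps of input give shrinking increments of output, and the
# curve flattens much sooner on the inhibitory side than the excitatory.
slope_at_rest = -z * Vi    # close to the 56 microvolts per pulse/sec
                           # conversion factor quoted later in the review
```

Because the inhibitory limit lies roughly six times closer to rest than the excitatory one, equal inhibitory and excitatory inputs drive the wave into saturation at very different rates.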
The input-output curve for (P-V) conversion is therefore sigmoidal with sharper
curvature on the inhibitory side (Figure 1, upper right: this graph must be
viewed after rotation 90° counterclockwise, for the reason given below).

FIGURE 1 (instructions for reading these graphs should be followed upon
orienting the time and amplitude abscissas in the horizontal plane). Above:
The calculated curves in the three upper frames are the predicted input-output
curves P(A) of the olfactory neural masses. The derivatives of these curves
suffice to describe the amplitude-dependent forward gains. The triangles are
the pulse probability of single neurons conditional on the amplitude of the
EEG, ^P(A). Po is the mean pulse rate. The curves are: upper left, (V-P)
conversion for type M cells (Equations 17 and 18); upper middle, (V-P)
conversion for type A cells (Equations 17 and 18); upper right, (P-V)
conversion for type B cells (rotate graph 90° counterclockwise; Equations 15
and 16). Below: Experimental [triangles, ^P(T)] and theoretical [curves,
P(T)] pulse probabilities conditional on time lag from the EEG are shown. The
EEG crest occurs at T = 0. ^PM(T) leads the EEG; ^PA(T) shows a slight lag
from the EEG, though on the average it is in phase; ^PB(T) lags the EEG.

The stage for (V-P) conversion
is similarly bounded, on the inhibitory side by the thresholds for the trigger
zones and on the excitatory side by the maximum mean sustained firing rate for
the mass, which in turn is determined by the relative refractory periods and
the hyperpolarizing and depolarizing after-potentials of the single neurons.
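Why the mass curve is sigmoidal while single-neuron curves are nearly linear can be illustrated with a toy population: give each model neuron a linear rate response above a threshold, draw the thresholds from a normal distribution, and impose a common rate ceiling standing in for refractoriness. Every number below is an illustrative assumption:

```python
import numpy as np

# Sketch: distributed thresholds plus a shared firing-rate ceiling turn
# piecewise-linear single-neuron responses into a smooth sigmoid for
# the mass. All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
n_cells = 10_000
thresholds = rng.normal(0.0, 50.0, size=n_cells)   # microvolts, distributed
gain = 0.05                                        # pulses/sec per microvolt
p_max = 10.0                                       # ceiling from refractoriness

def mass_rate(v):
    """Mean pulse density of the population at wave amplitude v (microvolts)."""
    rates = np.clip(gain * (v - thresholds), 0.0, p_max)
    return float(rates.mean())

amplitudes = np.linspace(-300.0, 300.0, 13)
curve = [mass_rate(v) for v in amplitudes]
# The curve rises smoothly through the threshold region and flattens
# toward p_max, although no single model neuron has a sigmoid response.
```

The smoothing on the inhibitory side comes entirely from the threshold distribution, and the flattening on the excitatory side from the shared ceiling, which is the division of labor described in the text.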
For single neurons the relation between pulse output rate and imposed
transmembrane current is linear or nearly so (17, 19, 32, 50) over a much wider
range of pulse rates than is expected for a neural mass. There are two reasons
for this. On the inhibitory side the thresholds or firing times for the mass
are distributed (51-53). On the excitatory side neurons can be driven to very
high pulse rates for brief periods, provided they are not challenged to fire
during a subsequent rest period (37). For a neural mass the maximum firing rate
must be computed over both active and rest periods. On these grounds the
input-output curve for (V-P) conversion must also be sigmoidal, with sharper
curvature on the inhibitory side (Figure 1, upper left).

HOW CAN PULSE-TO-WAVE CONVERSION BY DENDRITES BE QUANTIFIED?

The
relationships between pulse input P and wave output V can be expressed in the
form of two first-order differential equations, which state that the rate of
change in output with respect to input decreases in proportion to the distance
of the output from each of two limiting values:

    dV/dP = z (V - Vi),        V <= 0    (1)

    dV/dP = (z/rd)(Ve - V),    V >= 0    (2)

Here Vi (<0) is the level of
extracellular wave potential corresponding to the inhibitory equilibrium
potential for the dendritic membranes of the neurons generating the wave, and Ve(>0)
is the level of extracellular wave potential corresponding to the excitatory
equilibrium potential. Both are expressed as the difference from extracellular
rest potential, which is taken as zero for background activity. The empirical
rate constants, z on the inhibitory side
(V<0) and z/rd on the excitatory side, are
in units of 1/pps, and rd
is dimensionless. The derivatives are set equal to each other at V = 0, so that

    -z Vi = (z/rd) Ve    (3)

The solutions to the differential equations are

    V = Vi [1 - exp( z (P - Po) )],          P <= Po    (4)

    V = Ve [1 - exp( -(z/rd)(P - Po) )],     P >= Po    (5)

where Po (P with over-bar) is the overall mean pulse rate of the neural mass
at V = 0.

HOW CAN WAVE-TO-PULSE CONVERSION BY AXONS BE QUANTIFIED?

For (V-P)
conversion the rate of change in pulse output P with respect to wave input V
is likewise characterized as proportional to the output:

    dP/dV = ra g P,            V <= 0    (6)

    dP/dV = g (Pmax - P),      V >= 0    (7)

where g and ra·g are respectively excitatory and inhibitory rate constants in
units of 1/µV. The solutions to the differential equations are

    P = Po exp( ra g V ),                    V <= 0    (8)

    P = Pmax - (Pmax - Po) exp( -g V ),      V >= 0    (9)

The value for Pmax is found by setting the derivatives equal to each other at
V = 0 and solving for Pmax:

    Pmax = (1 + ra) Po

Equation 9 then becomes

    P = Po [ (1 + ra) - ra exp( -g V ) ]

WHAT IS PULSE PROBABILITY CONDITIONAL ON WAVE AMPLITUDE?

The experimental
evaluation of these two sets of equations 4-5 and 8-14 (54) is based on fitting
curves generated from them to the pulse probability of single neurons in the
olfactory bulb or cortex ^P(T, A), conditional (55) on the amplitude and time
of the EEG recorded from a closely neighboring point in the bulb or cortex. The conditional
probability tables are constructed from long records of EEG amplitudes and
single-neuron pulse trains measured simultaneously at 1.0 msec intervals. Three
correlations have been made. The first is between the pulse probability of the
mitral and tufted cells in the olfactory bulb, ^Pm(T, A), and the
wave generated by the bulbar granule cells, VG(T) (40). The second
is between the pulse probability of superficial pyramidal cells in the
olfactory cortex, ^PA(T, A), and the wave generated by the same
cells, VA(T) (28, 39). The third is between the pulse probability of cortical
granule cells ^PB(T, A) and VA(T)(37). Owing to delays
between the actions of the mitral-tufted and granule cells, and between types B
and A neurons, it is essential to establish the optimum time lag T for each correlation.
This is done by asking, for each measured pulse value (0 or 1), what is the
value for wave amplitude in each of the 25 msec preceding the present value,
and for each wave amplitude, what is the pulse value in each of the 25 msec
preceding? For each time lag and amplitude value the total number of times a
pulse occurred is divided by the total number of times the wave amplitude
occurred. The resulting conditional probability ^P(T, A) is multiplied by 1000
to express it in pps. For graphic display it is divided by Po (mean pulse
rate). The EEG
amplitude probability histogram for bulb and cortex almost always conforms to a
normal density function with standard deviation σ. The limits of the table,
^P(T, A), are placed at ±3σ and at ±25 msec. The
pulse probability conditional on time, ^P(T), is determined by averaging across
values for ^P(T, A) between +1σ and +3σ for each value of T. The
oscillatory time courses for ^P(T) have the same frequency as the dominant peak
of the power spectrum of the EEG. The crests of ^Pm(T) lead the
crest of the EEG by about one quarter cycle (at the center line in Figure 1,
lower set); those for ^PA(T) are in phase, and those for ^PB(T)
lag by about one quarter cycle. The experimental
pulse probability conditional on amplitude, ^P(A), is taken at time of the
crest of ^Pm(T) preceding the EEG crest, at the crest of ^PA(T)
at the EEG crest, and at the trough of ^PB(T) preceding the EEG crest (Figure 1, upper set). The choice of
equations for the curves P(A) to fit these data is based on the premise that in
all cases the independent variable is V, owing to the fact that the
instantaneous pulse probability is indeterminate. Therefore, Equations 4 and 5
are solved for P as a function of V. From unpublished theoretical results,
which lie beyond the scope of this limited review, the value for ra is 2.0, so
Equations 8 and 14 are modified accordingly. The value for Po (as an estimator
for To) conforms to the mean pulse
rate for each neuron (the total number of pulses divided by the number of
thousands of observations), so that only a single unspecified variable, z, suffices to fit the theoretical curves to
experimental data (Figure 1, upper left). For any neuron,
one of three conditions is assumed to hold. If the limits on (V-P) conversion
are dominant, then the asymptotes for P(A) must be horizontal, because at some
high positive or negative values of amplitude the pulse probability does not
change. If the limits on (P-V) conversion dominate the neuron, the asymptotes
are vertical, because excessive values for pulse probability are required to
yield wave amplitudes nearing the asymptotic limit. If the limits on the input
function are well within the limits for both conversions, then the relationship
between P and V is linear or nearly so. Equations 17 and 18 apply to the first
case (V-P), Equations 15 and 16 to the second (P-V), and either pair to the
third. Examples of each
type of curve are shown in Figure 1, upper row. The pattern for ^Pm(A) reflects (V-P) conversion.
That for ^PB(A) reflects (P-V) conversion.
The linear curves for ^PA(A)
are fitted with a curve P(A) from (V-P) conversion, as the converse of ^PB(A)
for the same neural mass.

HOW IS FORWARD GAIN CALCULATED?

The conversion
rate for each stage is given by the slope of the input-output curve for each
stage, respectively, for curves from Equations 15-16 and 17-18. For each neuron
population having two stages it is the product of the two derivatives. The
forward gain is denoted Ki for inhibition by an inhibitory neural mass and
disexcitation by an excitatory neural mass. The forward gain is Ke for
excitation by an excitatory neural mass and disinhibition by an inhibitory
neural mass. The derivatives of Equations 17 and 18 are

    dP/dV = 2 g Po exp( 2 g V ),     V <= 0

    dP/dV = 2 g Po exp( -g V ),      V >= 0

From the derivations of Equations 15 and 16, we have

    dV/dP = z (V - Vi),          V <= 0

    dV/dP = (z/rd)(Ve - V),      V >= 0

Therefore

    K = 2 g Po z (V - Vi) exp( 2 g V ),       V <= 0    (25)

    K = 2 g Po (z/rd)(Ve - V) exp( -g V ),    V >= 0    (26)

The "reference gain" is

    Ko = -2 g Po z Vi    (27)

at V = 0. Some
representative values of the coefficients are as follows (56). The mean value
of Po for 22 bulbar mitral and
tufted cells is 10.1 pps and the mean value for gm is 0.00512/µV. For 10
type A neurons the mean for Po
is 13.8 pps and that for gA is 0.00169/µV (57). In
addition to Po (9.6 pps for 12 type B
units), three coefficients are evaluated from fitting PB(A) to ^PB(A). The
mean for rd is 5.7. The mean for Vi is -138 µV, or -3.09σ, slightly beyond
three standard deviations of EEG amplitude. The mean for the rate constant zB
is 0.41/pps. The estimated
gain factor for (V-P) conversion of mitral-tufted cells is 2gMPo = 0.10
pps/µV. That for (V-P) conversion of type A neurons is 2gAPo = 0.046 pps/µV.
That for (P-V) conversion from type B pulses to type A waves is -zBVi = 56
µV/pps. The
estimated value for Ko for the product of (P-V) and (V-P) conversion factors
for superficial pyramidal cells (A) is 2.6. The factor for (P-V) conversion in
the bulbar relay cannot be estimated because bulbar granule cells do not
generate detectable action potentials. These numerical
estimates are without confidence intervals, particularly in regard to the use
of the mean for observed values of Po as an estimator of the mean To. There is some bias in the
experimenter toward selecting relatively fast-firing neurons for statistical
analysis, because they give smoother pictures at lower cost, so that the
estimate of the type A forward gain based on To is probably about two-fold too high. The point is
that conversion factors and forward gains can be defined by theory and measured
by experiment for neural masses. Confidence limits can be established only by
extensive experimental follow-up and further molding of the theoretical
infrastructure. Equations 25-27
imply that there is a central quasi-linear amplitude range, in which forward
gain is maximal. With increasing positive (excitatory) amplitudes both stages
undergo progressive saturation, the (V-P) stage exponentially and the (P-V)
stage linearly. With decreasing negative (inhibitory) amplitudes, the (V-P)
stage saturates twice as rapidly in the exponential mode and the (P-V) stage
six times as rapidly in the linear mode on the inhibitory side as on the
excitatory side. The forward gain appears to depend on four parameters or
system variables: the population mean pulse rate Po; the ambient maintained
degree of depolarization Vi;
and the two rate constants g and z for which an interpretation at the cellular level has
not been attempted. Whether these four system variables are or are not
independent of each other has not been determined.

WHAT IS THE OPEN-LOOP RESPONSE?

Neurons in
masses are densely interconnected by countless numbers of synapses. Normally
these are capable of transmitting output depending on the magnitude of input,
so that electrical stimulation of a nerve or tract leading to a neural mass
leads to multiple sequential synaptically transmitted events in the mass.
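Within the linear range described earlier, these successive transmitted events superpose in the AEP, and the measurement step is to recover basis functions from the averaged response by nonlinear regression. The sketch below fits a single damped sine wave to a synthetic averaged response; the parameter values, noise level, and choice of one basis term are illustrative assumptions, with scipy's curve_fit standing in for the regression procedure of reference 31:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: measure an impulse response by fitting a damped sine wave
# (one common basis function) to a synthetic AEP by nonlinear
# regression. All parameter values are illustrative assumptions.
t = np.arange(0.0, 0.100, 0.001)        # 100 msec at 1 msec steps

def damped_sine(t, a, decay, freq, phase):
    return a * np.exp(-decay * t) * np.sin(2.0 * np.pi * freq * t + phase)

true_params = (120.0, 30.0, 52.0, 0.3)  # amplitude (uV), 1/sec, Hz, radians
rng = np.random.default_rng(2)
aep = damped_sine(t, *true_params) + rng.normal(0.0, 3.0, t.size)

guess = (100.0, 20.0, 50.0, 0.0)        # trial-and-error starting values
fit_params, _ = curve_fit(damped_sine, t, aep, p0=guess)
# The fitted decay rate and frequency become the coefficients of the
# linear differential equation ascribed to the mass in this state.
```

In practice several such terms are summed and the residual inspected, which is the trial-and-error choice of a basis set described above.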
Repetitive excitation and inhibition of the initially excited or inhibited
neurons is the rule. However, by pharmacological means it is feasible to reduce
the transmission effectiveness of synapses in the mass, to the extent that
background EEG and pulse trains are totally suppressed. In this state the
afferent volley, electrically induced, activates the dendrites of neurons at
the first synapse, and perhaps the second, but no further. Feedback interaction
is reduced to zero, and only forward transmission to the first one or two
subsets of neurons in the mass is present. This is referred to as the open-loop
state (36, 37). The experimental
proof of this state depends on demonstrating the absence of background unit
activity and all but one brief volley of induced firing, if any (37). (This is
another example of the manner in which the conjoint recording of pulse and wave
activity is essential to the analysis of neural masses.) The averaged dendritic
response manifested as the AEP is the open-loop impulse response of a neural
mass. An example of
the open-loop AEP of the olfactory bulb is shown in Figure 2 (top) as the sets
of triangles. The response is induced by stimulation of the axons of the mitral
cells in the lateral olfactory tract (LOT) antidromically. The volley is
delivered by the mitral cells to the granule cells, and the dendrites on
synaptic activation generate the field potential yielding the AEP. There is no
further transmitted event. A virtually identical open-loop response occurs in
the olfactory cortex, generated by the type A neurons in response to an
orthodromic LOT volley (34, 38, 57).

FIGURE 2. These are open-loop
responses of bulbar neurons on antidromic LOT (above) or orthodromic PON
(below) single-shock electrical stimulation. AEPs (triangles) N= 100. Curves
are from the inverse transforms of Equation 28 (above) and of Equation 30
(below).

In essence,3 these experimental data have
been fitted by the sum of three exponentials (Equation 28), where a1 is the rate constant of the
decay of the response, a2
is that of the rising phase, and a3 is that determining the curvature of the foot of the
response. The Laplace transform, Equation 29, gives the differential equation
for the dendritic open-loop response. 3[The experimental
determination of the open-loop rate constants is complicated by the presence of
dispersion in afferent axonal pathways, which is not part of the delays found
within the loops of the neural mass. For LOT input to the bulb and cortex, this
dispersion Ax(s) is negligible, but for PON
input it is rather strong (47). Therefore, the total transference for PON input
is Ax(s)Am(s)A(s). The forward transference within the bulb is Am(s)A(s).
The separation of Ax(s) from Am(s) requires measurements in both open- and
closed-loop states, using both orthodromic (forward limb) and antidromic
(feedback limb) inputs. A simple
experimental test to determine whether Ax(s) can be ignored is to search for the afferent
axonal or cell body compound action potential in the mass preceding the
dendritic response. If it cannot be detected (other than in the form of
intracellular or extracellular unit potentials), then it has been degraded by
temporal dispersion acting as a low pass filter (47), and Ax(s)
must be explicitly evaluated in order to evaluate A(s) with adequate
precision.] Mean values for the rate
constants are a1 = 220/sec (equivalent to a
time constant of 4.55 msec), a2
= 720/sec (1.38 msec), and a3
= 2300/sec (0.43 msec). The similarity
of these rate constants to those computed from intracellular measurements on
single bulbar cells (49, 58-62)
strongly suggests that a1 can be taken to represent mainly the passive membrane
RC decay rate, a2 the lumped synaptic and cable
delays (both equivalent to one-dimensional diffusion processes), and a3 the
effect of axonal delays within the mass, which are very short (37). However,
the rate constants of the mass cannot be uniquely identified with these known
sources of delay in single neurons on a one-to-one basis. This procedure
to identify and measure the rate constants (and to interpret them in terms of
underlying processes) is formally identical to that used to define
"passive" membrane resistance and capacitance (59). For an axon or a
group of axons a range of response amplitudes is defined over which additivity
and proportionality hold. Within this range the response in membrane potential
v(t) to a current step I·U(t) is recorded. It is fitted with a rising
exponential curve having the equation v(t) = k·I·U(t)·(1 - e^(-t/τ)), where τ = 1/a is the time constant in sec and
k is in units of ohms and depends on the value of the potential as t → ∞. The Laplace transform is
V(s) = I·k/[s(sτ + 1)]. The transfer function is the ratio of the
transforms of the output and input functions, V(s)/(I/s) = k/(sτ + 1). A differential equation is then written to
describe the discharge of a capacitor C through a resistor R after C has been
charged by a current pulse I·δ(t).
This can be written C dv(t)/dt = I·δ(t) - v(t)/R.
The Laplace transform is V(s)/I = R/(sRC + 1). It is next inferred that R = k and C = τ/k. Numerical estimates for R and C are then obtained
in a variety of experimental conditions to confirm the dynamic range of
linearity for the preparation, which is the range of validity for the
differential equation and its evaluated coefficients. It is well known that both R
and C of biological membranes vary with frequency (60), but the approximation
of their behavior within the usual experimental range of function to ohmic
resistance and coulombic capacitance is close enough for most purposes.
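The fitting procedure just outlined can be sketched numerically. The membrane values below are hypothetical, chosen only to illustrate the inference R = k and C = τ/k from a simulated step response.

```python
import numpy as np

# Hypothetical membrane values for illustration (not from the review):
R_true, C_true, I = 1.0e7, 5.0e-10, 1.0e-9   # ohms, farads, amperes
tau = R_true * C_true                         # time constant RC = 5 msec

# Simulated step response v(t) = k*I*(1 - exp(-t/tau)), with k = R
t = np.linspace(0.0, 10.0 * tau, 5000)
v = R_true * I * (1.0 - np.exp(-t / tau))

# Estimate k (ohms) from the plateau, then tau from the log-linear
# decay of the residual 1 - v/v_infinity
k_est = v[-1] / I
resid = 1.0 - v / (k_est * I)
mask = t < 3.0 * tau
slope = np.polyfit(t[mask], np.log(resid[mask] + 1e-15), 1)[0]
tau_est = -1.0 / slope

# The inference R = k and C = tau/k recovers the lumped parameters
R_est, C_est = k_est, tau_est / k_est
```

With noisy records the same log-linear fit would be restricted, as here, to the first few time constants, where the residual is well above the noise floor.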
Moreover, each is an average measurement over a large ensemble of membrane
structures, such as the sodium or potassium channels, which on more intensive
analysis outside the linear range are identified and measured by elaborate
equations, e.g., the Hodgkin-Huxley equations (61). Or the linear model is
retained but the lumped circuit approximation is dropped, and the spatial
dimensions are introduced, e.g., in the form of a cylinder (8, 62) or a
branching dendritic tree (8, 9, 63). In all these
cases the response waveform for a severely limited input guides the selection
of an appropriate set of linear basis functions and the construction of a linear
differential equation. The equation is modified and elaborated to extend
prediction and observation into a broader functional range. The essential
difference in the present usage is that linear equations are used to describe
and measure the properties of the neural mass, and the interpretation of the
results is directed toward known components at the next lower level of
complexity, that is, the properties of neurons rather than the properties of
membranes. On the other
hand the same results are used as the basis for interpretation of the next
highest level of complexity. An immediate example of neural masses is the AEP
response of the bulbar granule cells A(s) to excitation of the primary
olfactory nerve (PON), shown in Figure 2 (bottom). The afferent volley is
transmitted orthodromically through the mitral-tufted cell pool having the
transference Am(s). It is found
experimentally (W. J. Freeman, unpublished results) that Am(s) is
almost identical to A(s), so that the overall transfer function for the two
neural masses in series, Ag(s), is given by Equation 30. The inverse transform of
Equation 30, which contains double poles after substitution of Equation 29, is
too cumbersome to reproduce here. The predicted waveform is shown as the curve
in Figure 2 (bottom). The interpretation at the first sublevel is that the
delays introduced by the two neural masses in series are about equal to each
other, and at the second sublevel that the passive membrane decay rates for the
mitral-tufted and granule cells are equal, despite the gross differences
between virtually all other properties of the two cell types. These same rate
constants and interpretations hold for types A and B neurons in the cortex as
well (34, 36, 37, 48).

WHAT ARE THE CHARACTERISTICS OF NEGATIVE FEEDBACK?

The mitral and
tufted cells in the bulb (type M) are excitatory, whereas the granule cells
(type G) are inhibitory. Excitation of type M cells by single-shock stimulation
normally leads to excitation of type G cells and to feedback inhibition of type
M cells (46). The inhibited type M cells disexcite type G cells which
disinhibit or re-excite type M cells, and so on, such that the impulse response
in the closed-loop state is predictably oscillatory. The same prediction holds
for the interaction of types A and B neurons in the cortex (34, 36, 57). In a narrowly
limited range of function (to be described in a later section), the transfer
function for these negative feedback loops for orthodromic input to the
excitatory neural mass and output (Figure 3, upper right) from the same neural
mass (the forward limb) is given by Equation 31. For input to the forward limb
and output (Figure 3, lower right) from the feedback limb (the inhibitory
bulbar or cortical granule cells), the transference is given by Equation 32. For antidromic input to the
feedback limb and output from the forward limb (Figure 3, upper left) or output
from the feedback limb (Figure 3, lower left) the transfer functions are,
respectively, Equations 33 and 34. In each of Equations 31-34
the feedback gain is Kn = (KeKi)^0.5, where Ke^0.5 and Ki^0.5 are defined by Equations 19
and 20.

FIGURE 3. PST histograms (above) of a single mitral cell
and AEPs (below) of granule cells upon LOT (left) or PON (right) stimulation
are fitted with curves for predicted responses from Equation 35 (closed loop,
negative feedback). There is phase lag of the AEP from the PST histogram of
about one quarter cycle (see Figure 1, lower left). Transfer functions: upper
right, Equation 31; lower right, Equation 32; upper left, Equation 33; lower
left, Equation 34.

Following
substitution of Equation 29 into any of Equations 31-34, factoring of the
denominator, and partial fraction expansion, the inverse Laplace transform
yields the equation for a damped sine wave (Equation 35) with a sigmoidal inflection at the
foot of the first upward peak (34, 36, 48). The same equation holds for
predictions of the state variables of both forward and feedback limbs. The
frequencies and decay rates are identical, but the phase of the feedback limb
transient characteristically displays a quarter cycle phase lag from that of
the forward limb transient. These predicted
transients are shown as curves in Figure 3 fitted to the PST histogram
(triangles, above) of a mitral cell (excitatory, forward limb) and to the AEP
(triangles, below) of the granule cells (inhibitory, feedback limb). The common
frequency and decay rate are apparent, as well as the phase lag of each AEP
from the corresponding PST histogram (48). The bulbar
responses to PON stimulation (Figure 3, right), which is orthodromic and to the
forward limb, show approximately one quarter cycle phase lag over the bulbar
responses to LOT stimulation (Figure 3, left), which is antidromic and to the
feedback limb, as predicted by Equations 31-34. The comparison
of AEPs generated by type A neurons in the cortex (37) on orthodromic (LOT)
stimulation with PST histograms shows that the AEP is in phase with the
oscillation in PST histograms from type A neurons (forward limb) and leads the
oscillation in PST histograms for type B neurons (feedback limb) by about one
quarter cycle. These same phase relationships are shown in Figure 1 (lower row)
for background pulse and wave activity. The mean value
for Kn, obtained from the Laplace
transform of Equation 35 after fitting curves to the AEPs and PST histograms
for both bulb and cortex, is between 1.75 and 2.25 (36). The value for Kn, representing effective
connection density within the mass, is a property only of the mass and not of
the single neurons in the mass. The measured values for the rate constants are
the same in the open-loop and closed-loop states. These and related results
(36, 37) imply that the rate constants can be treated as invariants, and that
the amplitude-dependent nonlinearity P(V) can be introduced into the linear
equations C(s) as a variable gain coefficient.

WHAT ARE THE CHARACTERISTICS OF POSITIVE FEEDBACK?

The example
given for negative feedback is based on the assumption that a neural mass
contains large numbers of excitatory and inhibitory neurons having reciprocal
connections. If the mass contains excitatory neurons maintaining significant
feedback connections with each other, a different kind of feedback loop must be
considered. This is a positive feedback loop, in which excitatory neurons
excite and re-excite each other in the mass upon initial excitation. The
pattern is familiar among physiologists as "avalanche conduction" or
the "reverberating circuit." Such loops
usually appear in neural masses having negative feedback as well, and seldom
occur in isolation. An example of the latter is to be found in the
periglomerular neurons of the olfactory bulb (40, 41, 45, 48). An illustration
of the impulse response of one of these neurons is shown in Figure 4, left. The
three sets of triangles illustrate three PST histograms at low, medium, and
high PON stimulus intensity. (The background pulse rate of the neuron, which is
constant, is indicated by the height of the baseline above the abscissa. This
gives the change in scale from which to measure the increase in pulse rate at
the crest of the response with increasing stimulus intensity.)

FIGURE 4 (from Freeman 48).
Left. The PST histograms (triangles) of pulses from a single periglomerular
neuron represent the output of a neural mass having internal positive feedback.
The lowest rate constant increases with increasing response amplitude. The
change in scale for display of the histograms is reflected in the decreased
scale for the constant background activity (Equation 37). Right. AEPs from
the bulbar granule cells (concomitantly recorded field potential) show the
granule cell response to the PON impulse input (the oscillatory component) and
to the periglomerular cell input (the monotonic component). Initial response
amplitudes: 22 µV, 324 µV, 986 µV (Equations 35 and 37).

In qualitative
terms the response of the neural mass to an impulse input by way of the PON is
a rapid increase in mean pulse rate above the baseline of background activity,
which then decays with a slow rate constant. When the input intensity is
increased, the induced pulse rate is augmented, but so also is the decay rate
of the response. There is no terminal overshoot. The response of
this neural mass cannot be directly detected in the form of an extracellular
field potential. The concomitantly recorded AEPs shown in Figure 4 (right)
display a sinusoidal oscillation generated by the bulbar negative feedback
loop, to which both the PON and the periglomerular neurons project. The
oscillation is superimposed on a monotonic shift in baseline, which is the
granule cell response to periglomerular input. It is not owing to a field
potential of periglomerular cells. The dynamics of this neural mass can be described in
terms of a positive-feedback loop, having input to a subset (B1) of the periglomerular cells
constituting the forward limb, interacting with another subset (B2) of the same mass not receiving
the initial impulse input and constituting the feedback limb. The transfer
function for output from the forward limb is given by Equation 36, where A(s) is defined in
Equation 29 and Ke is the square of the excitatory forward gain defined by
Equation 20; Ao is a forward gain constant. After
substituting Equation 29 into Equation 36, reducing fractions, and expanding
the denominator in partial fractions, the inverse Laplace transform yields Equation 37, where the amplitude
coefficients B1 through B5 depend on the open-loop rate
constants ai, the feedback gain Ke, and a forward gain constant
Ao. The rate
constants4 are invariant with stimulus
intensity at the values a1
= 230/sec, a2 = 550/sec, and a3 = 2300/sec. The value for Ao increases in proportion to
stimulus magnitude. The change in the closed-loop rate constants, most notably
that for the decay rate of the response b1, is owing solely to the change in Ke predicted by Equation 26,
where either Ao or B1 or the crest amplitude of the
PST histogram is used to estimate V. 4 [Owing to the inaccessibility
of a dendritic field potential from those neurons, the open-loop rate constants
were determined by trial and error. A sum of exponential curves and a damped
sine wave (Equation 37) was fitted to the data (Figure 4, left, solid curves,
and Figure 5, open triangles). The transferences for Ax(s) and A(s) were evaluated by initial guesses, and
root locus plots were calculated. When optimal values for the open-loop poles
had been found, the results were checked by generating new curves using these
invariants (Figure 4, left, dashed curves). The neural mass was found to have
the same rate constants as the other masses in the bulb and cortex, within the
limits of experimental error (34). This case illustrates a general principle in the
analysis of neural dynamics, that sets of AEPs and PST histograms, taken in
conjunction with systematic variation of an antecedent variable, can be used to
compensate for the limitations imposed by the inaccessibility of some of the
state variables of neural masses to direct measurement.] These and
related results (36, 48) show that positive feedback among neurons in masses
leads to monotonic responses with decay rates that are not the same as those of
the component neurons. The rate constants of the neural mass with internal
positive feedback have much lower values than those usually assigned to passive
membrane. Furthermore, whereas by inference the rate constants of the component
neurons in the mass are invariant with respect to response amplitude over the
designated range of observation, the rate constants of the neural mass are
amplitude-dependent, primarily owing to the presence of saturation in the
feedback path. This type of long-lasting neural response has been
observed to follow electrical stimulation in many parts of the nervous system,
most prominently in the spinal cord (64, 65), where the prolonged impulse
responses have been associated with presynaptic inhibition. However, the dependencies
of the rate constants on response amplitudes have not been measured with
adequate precision, and the associations with concomitantly recorded PST
histograms have not been well enough established, to permit strong inference
that these responses manifest positive excitatory feedback, although it seems
likely that many of them do. Mutual
inhibition can be modeled using Equations 36 and 37. An example is the lateral
eye of Limulus (19), which consists of an array of about 10³ densely interconnected
receptor neurons having a common input (light) and a common sign of output
(inhibition). They form a positive inhibitory-feedback loop in which the gain
characteristic is linear over a certain range but is bounded by saturation
(threshold) on the inhibitory side. Above that level the function is readily
approximated by linear analysis (20). A similarly isolated
example of an inhibitory neural mass has not been identified in the mammalian
nervous system. Inhibitory neurons seem always to be densely connected with
excitatory neurons as well as with each other. Therefore, information about
them is indirect. Their predicted patterns of behavior correspond to those for
excitatory interactions in most aspects. But whereas the outputs for the
forward and feedback limbs of an excitatory neural mass during the impulse response
both increase and decrease together, in the inhibitory neural mass they change
in opposite directions. On initial excitation, for example, the activity of the
forward limb increases, which inhibits the activity of the feedback limb. The latter
disinhibits or further excites the forward limb, which further inhibits the
feedback limb. The time courses of the two parts are parallel and monotonic,
but have opposite polarity.

WHY USE ROOT LOCUS DISPLAY?

The dynamic
relation between the rate constants of the closed-loop response and the
underlying physiological variable, the closed-loop gain, is best displayed by a
root locus representation in the s plane (29, 35). Such a diagram is shown in Figure
5 for Equation 36, which on substitution of Equation 29 and reduction of
fractions becomes Equation 38. On expansion and factoring of
the polynomial in the denominator, this becomes Equation 39, where the
closed-loop rate constants bj
are determined by the open-loop rate constants ai and Ke.
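The dependence of the closed-loop rate constants bj on Ke can be sketched numerically. The characteristic polynomial below assumes a loop transference of the form Ke·A(s)², with A(s) normalized to unit gain at s = 0; since Equation 38 itself is not written out above, that form is an assumption.

```python
import numpy as np

# Open-loop rate constants of the periglomerular mass (per second)
a1, a2, a3 = 230.0, 550.0, 2300.0
g = a1 * a2 * a3

def closed_loop_roots(Ke):
    """Roots of the assumed positive-feedback characteristic
    ((s+a1)(s+a2)(s+a3))**2 - Ke * g**2 = 0."""
    p = np.poly([-a1, -a2, -a3])      # (s+a1)(s+a2)(s+a3)
    den = np.polymul(p, p)            # squared
    den[-1] -= Ke * g**2              # constant term picks up -Ke*g^2
    return np.roots(den)

def b1(Ke):
    """Dominant (slowest) closed-loop decay rate: minus the real
    root nearest the imaginary axis."""
    r = closed_loop_roots(Ke)
    real_roots = r[np.abs(r.imag) < 1.0].real
    return -real_roots.max()

# Gain contours cited for Figure 5: Ke between 0.19 and 0.53
b1_low_gain, b1_high_gain = b1(0.19), b1(0.53)
```

Raising Ke toward 1 drives the dominant pole toward the origin, so the closed-loop decay rate falls well below the open-loop a1 = 230/sec, in keeping with the slow monotonic decay described for this mass.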
The closed-loop zeroes at s= -a1
and s= -a2 appear as open squares on the
negative real axis in Figure 5, at the locations of the open-loop poles. The
poles and zeroes at s = - a3
are far to the left. Experimental closed-loop roots appear as the four sets of
triangles where the four loci intersect representative gain contours between Ke=0.53 and 0.19. For each value
of Ke there are four real roots and
two complex roots. The latter predict the overdamped cosine component in the
rise of the output. The rate constant specified by the pole b1 nearest the jω axis determines the decay rate of the impulse
response. The method
displays in a single graph the open-loop rate constants (which are invariant),
the closed-loop rate constants, the ambient level of gain (which is
amplitude-dependent), and the predicted change in the impulse response with
changes in amplitude or other determinant of gain. Also shown are the locations
of the closed-loop zeroes, which play an important role in precision analysis,
and the stability characteristics of both real and model systems. The technique is
especially well adapted for use in conjunction with impulse stimulation. Owing
to the fact that direct access to most neural masses in the brain is by nerves
and tracts, which are accessible only to electrical pulses, the impulse
response (the AEP and the PST histogram) is the most common source of
information on their active states. The curve fitted
to each impulse response yields coefficients, which form a constellation of
poles and zeroes in the s plane. Many physiological variables, but particularly
response amplitude variation, generate successive values for the coefficients,
which define sets of "physiological" root loci as indicated by the
triangles in Figure 5. These loci are matched by theoretical root loci, which
are generated from differential equations based on the topology of connections
and the open-loop rate constants of the neurons. They serve to evaluate the
closed-loop gain. The ultimate
justification for reliance on the root locus diagram (29) as a basic analytic
tool in neural dynamics is the feasibility of separation of the transference
into a linear frequency-dependent part and a nonlinear amplitude-dependent
part, the former evaluated by fixed rate constants and the latter by variable
gain coefficients. No other method of system representation seems so well
adapted to this feature of neural masses.

FIGURE 5. This is a root locus
diagram in the s plane for neural positive feedback. The triangles show the
rate constants of the solid curves in Figure 4 (left). The open squares show
the locations of the open-loop poles and closed-loop zeroes. The solid curves
are the root loci. The light curves are representative gain contours.

WHAT ARE THE CHARACTERISTICS OF MULTIPLE-LOOP FEEDBACK?

Doubtless the
most common neural mass by far is the mixture of excitatory and inhibitory
neurons, which are densely interconnected with each other without restriction
as to cell type. The minimum topology of such a mass contains multiple loops of
three kinds: negative, positive-excitatory, and positive-inhibitory feedback
(36). Three feedback gain coefficients are required: Ke, Ki, and Kn. A twelfth-order
differential equation is required to represent the connections of such a neural
mass, in which the open-loop transference of the subsets composing it, A(s), is
specified and evaluated by Equation 29 and in the text following. The
techniques for formulating and solving the equation have been given elsewhere (36,
57). The solution for the impulse response predicts a damped sine wave in the
output, which is superimposed on a monotonic transient (similar to that in
Figure 4, right). The oscillation is due to the negative feedback loop, and the
monotonic transient is the output of whichever of the two positive feedback
loops has the higher gain. Changing the
intensity of the stimulus delivered to a mixed neural mass characteristically
alters the frequency ω and decay rate α of its oscillatory response (36-39). Measurement of
the series of AEPs such as that shown in Figure 4, right, yields sets of values
for ω and α that in the s plane define a physiological root locus for the
neural mass with changing input intensity (Figure 6, circles). The frequency is
characteristically reduced and the decay rate is augmented with increased input
magnitude. The changes imply, in accordance with Equations 25 and 26, that the
feedback gains are reduced with increasing amplitude owing to saturation. Calculations to
fit these physiological root loci are complicated by two factors. First, all
three gain coefficients must be expressed as dependents on one variable (input
or response amplitude, which has both positive and negative extremes). Second,
the input to the mass may consist not merely of an impulse, but of a prolonged
monotonic input function, such as that from periglomerular neurons to mitral
and bulbar granule cells (Figure 4). The degrees of saturation on the
excitatory and inhibitory sides and their ratio may vary widely, depending on
the magnitude of this baseline shift. An interim technique to achieve the calculation is based
on the use of an empirical dimensionless factor δ, which serves to define the
dependence of Ke and Ki on Kn (Equations 40 and 41), where Ko is a reference gain (see Equation
27) at which Kn = Ke = Ki. When this condition holds
(even for δ ≠ 0), the mixed neural mass is reduced to a single
negative-feedback loop, so that Ko has a value between 1.75 and 2.25. From the product of the
left-hand terms of Equations 40 and 41 set equal to the product of the right-hand
terms, Kn = (KeKi)^0.5. It is found empirically (W. J.
Freeman, unpublished data) that the locus for a neural mass with an impulse
input is replicated by a value for δ = -0.5. For a mass with an
excitatory monotonic function or bias in the input, δ approaches zero.
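Since Equations 40 and 41 are not written out above, a power-law form is assumed in the sketch below; it is chosen only so that the two constraints stated in the text hold: Kn = (KeKi)^0.5 for any δ, and Ke = Ki = Kn when Kn = Ko.

```python
import math

def gains(Kn, Ko=2.24, delta=-0.5):
    """Assumed forms for Equations 40 and 41:
    Ke = Ko*(Kn/Ko)**(1+delta), Ki = Ko*(Kn/Ko)**(1-delta).
    For any delta this gives Kn = sqrt(Ke*Ki), and Ke = Ki = Kn
    when Kn = Ko."""
    Ke = Ko * (Kn / Ko) ** (1.0 + delta)
    Ki = Ko * (Kn / Ko) ** (1.0 - delta)
    return Ke, Ki

Ke, Ki = gains(Kn=1.5, Ko=2.24, delta=-0.5)
```

For δ = -0.5 (impulse input), a value of Kn below Ko reduces Ki more strongly than Ke; as δ approaches zero the two gains collapse onto Kn, the single negative-feedback case.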
This serves to describe the bulbar physiological root locus for PON input. For
an inhibitory bias, δ approaches -1. The latter value serves to describe
the physiological root locus of the olfactory cortex for LOT input (36, 39). These characteristic curves
for neural masses are shown as a family in Figure 6 for a value of Ko = 2.24. The curvilinear
segments from upper left to lower right designate values for gain expressed as the
log10(Kn/Ko). The stability limits to the
left of the jω axis (at the high-frequency,
low-amplitude, high-gain ends of the curves) are determined by a root locus
(not shown) on the real axis of the s plane, which crosses the jω axis to the right with decreasing amplitude. The
stability characteristics at the low-frequency, high-amplitude ends of the curves
have been discussed elsewhere (36). The characteristic curves are used as
follows. The open-loop rate constants are determined for a neural mass. Then in
some normal physiological state a set of AEPs over a range of stimulus
intensities is obtained and measured. The frequencies and decay rates define
the physiological root locus. The position and orientation of the locus serve
to evaluate Ko and δ. The values for α and ω serve to evaluate Kn; Equations 40 and 41 then evaluate
Ke and Ki. By this means the closed-loop
responses suffice to specify the three feedback gains (functional connection
densities) in the mass. The logarithms of the gains (with reference gain at Ko) are plotted as a function of
the amplitude of the oscillatory component of the impulse response to determine
conformance of the dynamics of the mass to the type of saturation predicted by
Equations 25-27. The value for δ serves to predict the sign and magnitude
of the monotonic component of the impulse response (Figure 4, right).

FIGURE 6. This is a composite diagram of four root loci
in the upper left quadrant of the s plane. Each heavy curve is a locus for a
single complex pole. Open circles are from an experimental set of AEPs, some of
which are reproduced in Figure 4, right. With increased stimulus
intensity the AEP frequency and the gain Kn
both decrease. Corresponding values for Kn
are connected by light arcs. For a single negative-feedback loop, δ = 0.
For both negative and positive feedback with an impulse input, δ = -0.5.
For an impulse input superimposed on an excitatory bias to a mass having both
positive and negative feedback (Figure 4, right), δ > -0.5. For inhibitory bias, δ < -0.5. The normal midrange operating
condition for the olfactory neural masses (Figure 3) appears to be δ near
-0.5 and Kn near 2.24 (log10 Ko = 0.35). The open-loop double pole (Figure 2) is at s
= -220/sec. Root loci on the negative real axis (as in Figure 5) are not shown
here.

An interesting alternative
approach to the description of this multiple-loop neural configuration has been
developed by Wilson & Cowan (66). They derived coupled nonlinear
differential equations to predict the responses of a neural mass having
negative feedback and the two kinds of positive feedback. A single rate
constant was used for the delay [A(s), i=1], and the logistic curve was used to
represent the input-output curve (Figure 1). Using phase-plane methods and
numerical integration, they found multiple stable states as well as limit-cycle
oscillation, corresponding to the real and complex roots of Equations 35 and
37. The frequency of oscillation was also found to be a monotonic increasing
function of stimulus intensity. The intensity referred to in their model
is the sustained input to either or both subpopulations, which is equivalent
here to a background bias maintaining the mean activity level Vo at some level other than
zero. As the bias is increased their limit-cycle frequency goes monotonically
from some minimum to some maximum value, above which the oscillation is
suppressed. Experimentally (38), in bulb or cortex the bias can be increased by
use of tetanizing electrical stimulus pulse trains or decreased by
administration of pentobarbital or other anesthetic. The frequency changes in
the predicted manner (cf. Figure 12a in 66 and Figure 2 in 38). The
characteristic curves shown in Figure 6 (δ ≥ -0.5) hold equally for
increasing bias or for decreasing test pulse intensity (39). That is, frequency
is determined by the ratio of test pulse to bias amplitude, not by either
alone. The Wilson-Cowan
model displays some remarkable hysteresis properties not apparent from linear
analysis. On the other hand the replication of the observed
"physiological" root loci with calculated root loci (36, 39) could
not be achieved unless the approximation for the open-loop response contained
at least three poles [A(s), i=3]. It is unlikely that phase-plane methods can
readily be adapted to twelfth-order systems, to provide the means for
identification and measurement of experimental observations. Each approach has
its advantages and the interaction between linear and nonlinear analysis is
bound to be fruitful.

HOW WIDELY MIGHT THESE TECHNIQUES BE APPLIED?

AEPs often seem
complex in appearance, particularly when comparisons are being made between
those from different parts of the brain. This is deceptive. When the proper
basis functions are used in conjunction with signal detection theory (30, 35),
the typical AEP can be measured, stored, and reconstructed using a rather small
set of numbers (67). The real complexity of AEPs resides in the sensitivity of
their waveforms to a host of experimental factors, including the sites of
stimulation and recording, the parameters of the input, the conditions of the
animal, and the local state of the neural mass (14, 39, 42-46). When any one of
these antecedent variables is changed, several or all of the numerical
coefficients of the basis functions change in a coordinated way (39). The
patterns of variation reveal more information about the neural mass than do the
mean values of the coefficients. This is why root locus techniques combined
with factor analysis (39, 67) and analysis of variance (68) of AEP sets are so
useful (see footnote 4). The
characteristic curves shown in Figures 5 and 6 and related curves (36) hold for
neural masses in the olfactory system over a response amplitude range from
several microvolts to several millivolts, and a response frequency range from 0
to 60 Hz and above. It is suggested that any neural mass having characteristic
frequencies in this range is likely to have dynamics closely related to those
of the olfactory system. These include such structures as the hippocampus (69),
the superior colliculus (70), the thalamus (71), and most if not all areas of
the neocortex (13, 22, 25, 26, 72, 73). The impulse responses of the
hippocampus (69) and superior colliculus (70) have been shown to conform to damped
sine waves, for which the frequency decreases with increasing input intensity.
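The damped-sine form reported for these impulse responses lends itself to a simple numerical sketch of the measurement. The Python fragment below is a hypothetical illustration only: the parameter values, and the zero-crossing and log-envelope estimators, are assumptions of this sketch, not the procedures of refs. 69 or 70. It synthesizes a response v(t) = V0 e^(-at) sin(wt) and recovers the ringing frequency from the intervals between positive-going zero crossings and the decay rate from the logarithm of the peak envelope:

```python
import numpy as np

# Synthetic "impulse response" in the damped-sine form reported for the
# hippocampus and superior colliculus.  All values are illustrative.
V0, a, w = 1.0, 40.0, 2 * np.pi * 40.0   # amplitude, decay (1/s), 40 Hz ringing
t = np.linspace(0.0, 0.25, 5001)          # 250 ms record, 50 us steps
v = V0 * np.exp(-a * t) * np.sin(w * t)

# Frequency: mean interval between positive-going zero crossings.
crossings = t[1:][(v[:-1] < 0) & (v[1:] >= 0)]
freq_hz = 1.0 / np.mean(np.diff(crossings))

# Decay rate: the positive peaks trace the envelope V0' * exp(-a t),
# so a straight-line fit to their logarithms has slope -a.
peaks = [i for i in range(1, len(v) - 1) if v[i - 1] < v[i] > v[i + 1] and v[i] > 0]
decay = -np.polyfit(t[peaks], np.log(v[peaks]), 1)[0]

print(f"frequency = {freq_hz:.1f} Hz, decay rate = {decay:.1f} 1/s")
```

The envelope fit is exact for an ideal damped sine because every positive peak occurs at the same phase (tan wt = w/a), so the peak amplitudes decay as a pure exponential.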
The neocortical areas generate EEG waves in the β range (15 to 30 Hz and
above). The presence of a β frequency-selective system in the visual
analyzer has been thoroughly documented by Lopes da Silva and his colleagues
(13, 22) from its responses to sine wave input. The outstanding difficulty in determining whether
the characteristic curves in Figures 5 and 6 apply also to these other
structures is experimental. The open-loop rate constants must be measured to
determine whether the values conform to those used to evaluate A(s). This in
turn requires that the transference of the input pathway Ax(s) be specified (see footnote 3).
This is not straightforward for neocortical neural masses in terms of
electrical stimulation, because the input and output axons are not physically
separated as they are in the olfactory system, so that an afferent volley is
likely to be mixed orthodromic and antidromic. Even so, it seems feasible and
should be attempted. Otherwise the interaction densities (feedback gains)
cannot be defined and evaluated. WHAT IS THE SIGNIFICANCE OF
LINEAR ANALYSIS? The application
of linear analysis does not of itself yield a theory of neural masses. It is an
empirical tool for observation, description, measurement, and prediction. It
provides the basis functions for fitting the AEPs and PST histograms; it helps
to define appropriate ranges for the input parameters; it forces consideration
of the details of topologies of interconnection; it provides the means for
using the values for frequencies and decay rates of responses in differing
physiological states to estimate the numerical magnitudes of the
interconnections; and it gives an effective framework in which to study the
relations between single neural pulse trains and dendritic currents from large
masses of cells, both evoked AEPs and "spontaneous" or background
EEG. These features
in themselves amply justify further applications of the technique to larger and
more complexly organized neural masses. But to what end? Granted that neurons
in large numbers undergo correlated changes in activity following natural or
artificial stimulation of the nervous system, does the mere fact of correlation
justify the conception of a neural mass? Does the covariance of large-scale
activity in itself have any significance for behavior, or is it epiphenomenal?
Is sensory or perceptual information conveyed in broadly distributed
spatio-temporal patterns of covariance among neurons (74-76), or in the pulse
trains of single neurons (77-79)? Might the apparent properties of neural
masses be significant merely in terms of the large-scale normalizing, scaling,
and smoothing functions required for the operations of single neurons, thus
providing the "ground" against which the "figure" is
inscribed? Are the results of neuro-electrical measurements in the conditions
prescribed by linear analysis relevant to any of these possible functions of
neural masses? Is it feasible to construct valid theories of neural information
processing, storage, and retrieval without prior working knowledge of the
properties of neural masses? These questions
cannot be answered yet, primarily because so little is known about the dynamics
of neural masses. The results briefly summarized here imply that neural masses
do exist in at least an operational sense, that they have properties distinct
from those of single neurons, that these reside in poorly understood
distributions of firing rates, thresholds, interconnections, etc., and that
these properties must be defined and measured in terms of statistical averages
of neural activity. Linear analysis
can best serve now to open a door to the experimental study of neural masses,
as it did 40 years ago to the study of membranes, axons, and dendrites of
single neurons (59). Beyond this there is the expectation that when the
essential nonlinearities of neural masses have been clarified, linear and quasi-linear
equations will be supplanted by the "real" equations, as the
nonlinear Hodgkin-Huxley equations superseded older "two-factor"
theories (61) for axon function. However, the axolemma and the neural mass are
complex in different ways, the neural mass chiefly in the looser coupling
among its component parts. The alternative possibility must be
considered seriously that matrices of linear equations may become the basic
working tools for the articulation of our knowledge of how brains work. This
point serves to emphasize how little we really know about neural masses, and
how rich the opportunities are for studies of them in both theoretical and
experimental neurophysiology. LITERATURE CITED 1. Blinkov, S.
M., Glezer, I. I. 1968. The Human Brain in Figures and Tables. New York: Plenum 2. Bullock, T.
H., Horridge, G. 1965. Structure and Function in the Nervous Systems of
Invertebrates. San Francisco: W. H. Freeman 3. Sholl, D. A.
1956. The Organization of the Cerebral Cortex. London: Methuen 4. Coggeshall,
R. E. 1967. J. Neurophysiol. 30:1288 5. Barlow, R.
B., Jr. 1969. J. Gen. Physiol. 54:383 6. Eccles, J.
C. 1964. The Physiology of Synapses. New York: Academic 7. Gardner, E.
1964. Fundamentals of Neurology. Philadelphia: Saunders 8. Lorente de
Nó, R. 1947. J. Cell. Comp. Physiol. 29:207 9. Rall, W.
1962. Ann. NY Acad. Sci. 96: 1071 10. Horowitz,
J. M., Freeman, W. J. 1968. Bull. Math. Biophys. 28:519. 11. Freeman, W.
J., Patel, H. H. 1968. Electroencephalogr. Clin. Neurophysiol. 24:444 12. Plonsey, R.
1969. Bioelectric Phenomena, Ch. 5. New York: McGraw-Hill 13. MacKay, D.
M., Ed. 1969. Neurosci. Res. Progr. Bull., Vol. 7, No. 3, Ch. 1, 4. Brookline,
Mass.: Neurosci. Res. Progr. 14.
Sherrington, C. S. 1906. The Integrative Action of the Nervous System. New
Haven: Yale Univ. Press 15.
Sherrington, C. S. 1929. Proc. Roy. Soc. London 105B:332 16.
Denny-Brown, D. 1940. Selected Writings of Sir Charles Sherrington. New York:
Hoeber 17. Granit, R.
1963. Progr. Brain Res. 1:23 18. Brookhart,
J. M., Kubota, K. 1963. Progr. Brain Res. 1:38 19. Hartline,
H. K., Ratliff, F. 1958. J. Gen. Physiol. 41:1049 20. Knight, B.
W., Toyoda, J., Dodge, F. A., Jr. 1970. J. Gen. Physiol. 56:421 21. Stark, L.,
Sherman, P. M. 1957. J. Neurophysiol. 20:17 22. Lopes da
Silva, F. H., van Rotterdam, A., Storm van Leeuwen, W., Tielen, A. M. 1970.
Electroencephalogr. Clin. Neurophysiol. 29:260 23. Cleland,
B., Enroth-Cugell, C. 1968. Acta Physiol. Scand. 68:365 24. Maffei, L.
1968. J. Neurophysiol. 31:283 25. Regan, D.
1968. Electroencephalogr. Clin. Neurophysiol. 25:231 26. Tielen, A.
M., Kamp, A., Lopes da Silva, F. H., Reneau, J. P., Storm van Leeuwen, W. 1969.
Electroencephalogr. Clin. Neurophysiol. 26:381 27. Freeman, W.
J. 1962. Exp. Neurol. 5: 477 28. Freeman, W.
J. 1963. Int. Rev. Neurobiol. 5:53 29. Harris, L.
D. 1961. Introduction to Feedback Systems. New York: Wiley 30. Huggins, W.
H. 1960. Johns Hopkins Univ. Report No. AFCRC-TN-60360 31. Freeman, W.
J. 1964. Exp. Neurol. 10: 475 32. Hermann, H.
T., Stark, L. 1963. J. Neurophysiol. 26:215 33. Houk, J.,
Simon, W. 1967. J. Neurophysiol. 30:1466 34. Biedenbach,
M. A., Freeman, W. J. 1965. Exp. Neurol. 11:400 35. Smith, O.
J. M. 1958. Feedback Control Systems. New York: McGraw-Hill 36. Freeman,
W. J. 1967. Logistics Rev. 3:5 37. Freeman, W.
J. 1968. J. Neurophysiol. 31:337 38. Ibid. 349 39. Ibid. 1 40. Ramón
y Cajal, S. 1955. Studies on the Cerebral Cortex (Limbic Structures),
trans. L. M. Kraft. Chicago: Year Book 41. Valverde,
F. 1965. Studies of the Piriform Lobe. Cambridge, Mass.: Harvard Univ. Press 42. Green, J.
D., Mancia, M., von Baumgarten, R. 1962. J. Neurophysiol. 25:367 43. Yamamoto,
C., Yamamoto, T., Iwama, K. 1963. J. Neurophysiol. 26:403 44. Phillips,
C. G., Powell, T. P. S., Shepherd, G. M. 1963. J. Physiol. 168:65 45. Shepherd,
G. M. 1963. J. Physiol. 168: 101 46. Rall, W.,
Shepherd, G. M. 1968. J. Neurophysiol. 31:884 47. Freeman, W.
J. 1969. Physiologist 12: 229 48. Freeman, W.
J. 1970. In Approaches to Neural Modeling, ed. M. A. B. Brazier, D. Walter. Los
Angeles: Brain Information Service, UCLA. In press 49. Eccles, J. C. 1957. The Physiology
of Nerve Cells, Chaps. 2 (Fig. 21), 3 (Figs. 39, 45). Baltimore: Johns Hopkins 50. Granit, R.,
Kernell, D., Shortess, G. K. 1963. J. Physiol. 168:911 51. Rall, W.
1955. J. Cell. Comp. Physiol. 46:373 52. Ten Hoopen,
M., Verveen, A. A. 1963. Progr. Brain Res. 2:8 53. Calvin, W.
H., Stevens, C. F. 1968. J. Neurophysiol. 31:524 54. Freeman, W.
J. 1967. Physiologist 10: 172 55. Parzen, E.
1960. Modern Probability Theory and Its Applications, p. 60. New York: Wiley 56. Freeman, W.
J. 1971. Unpublished data 57. Freeman, W.
J. 1968. Math. Biosci. 2: 181 58. Rall, W. 1960. Exp.
Neurol. 2:503 59. Katz, B.
1939. Electric Excitation of Nerve. London: Oxford Univ. Press 60. Schwan, H.
P. 1957. In Advances in Biological and Medical Physics, ed. J. H. Lawrence, C.
A. Tobias, pp. 148-209. New York: Academic 61. Katz, B.
1966. Nerve, Muscle, and Synapse. New York: McGraw-Hill 62. Hodgkin, A.
L., Rushton, W. A. H. 1946. Proc.
Roy. Soc. London 133B:444 63. Rall, W.
1959. Exp. Neurol. 1:491 64. Eccles, J. C. 1964. The Physiology of Synapses. New York: Academic 65. Wall, P. D.
1962. J. Physiol. 164:508 66. Wilson, H.
R., Cowan, J. D. 1972. Biophys. J. 12:1 67. Freeman, W.
J. 1964. Recent Advan. Biol. Psychiat. 7:235 68. Emery, J.,
Freeman, W. J. 1969. Physiol.
Behav. 4:69 69. Horowitz,
J. M. 1972. Electroencephalogr. Clin. Neurophysiol. 32:227 70. Pickering,
S., Freeman, W. J. 1968. Exp.
Neurol. 19:127 71. Poggio, G.
P., Viernstein, L. J. 1964. J.
Neurophysiol. 27:517 72. Mimura, K.,
Sato, K. 1970. Int. J. Neurosci. 1: 75 73. Brazier, M.
A. B. 1958. The Electrical Activity of the Nervous System. New York: MacMillan 74. John, E. R.
1967. Mechanisms of Memory. New York: Academic 75. Anderson,
J. A. 1968. Kybernetik 5:113 76.
Longuet-Higgins, H. C. 1968. Proc. Roy. Soc. London. 171B:327 77. Hubel, D.
H., Wiesel, T. N. 1959. J. Physiol. 148:574 78.
Mountcastle, V. B. 1961. In Sensory Communication, ed. W. A. Rosenblith, Chap.
22. Cambridge: MIT Press 79. Barlow, H.
B. 1969. In Information Processing in the Nervous System, ed. K. N. Liebovic,
Chap. 11. New York: Springer-Verlag