November 20, 2001
Notes on Programming Neural Systems
Glenn Takanishi


Measuring Information as a Function of Time

Summary

This article discusses the informational content of a sequence of identical pulses occurring as a function of time. The smallest observable time interval capable of measuring a pulse is computed by iteratively determining in which half of the time interval T the pulse occurred. Using this iterative method we can construct an operational definition of how much information is contained in the signal.

This procedure applies Kolmogorov's method [1] of determining the precision, or information density, in a signal to computing the information in a sequence of pulses occurring in a time interval T.

Overview

The cornerstone of Kant's philosophy is the operational distinction he made between the perception of reality and the fact that there exists an object of that perception. Kant integrated David Hume's empiricism into his metaphysics, but he did not completely give up "a priori" knowledge. Kant is very hard to understand, but he moved us in the direction of seriously trying to understand and study how we perceive objects.

As we study how we perceive things, we learn that the underlying reality lies not only in our perception of objects; reality is actually wrapped up in the mechanism of perception itself.

From experimental physics, we know that we can only observe or measure an event in time to an accuracy limited by the uncertainty principle, i.e.,

dE * dt >= h,

where dE is the energy of the signal, dt is the time duration of the signal, and h is Planck's constant. A detector cannot measure an event in time with accuracy greater than dt, where

dt >= h/dE.

Note that the limit of temporal resolution separating two time-dependent Gaussian pulses is the standard deviation (one-half the width at the half-height of the pulse). The temporal resolution, dt, is also related to the highest physical frequency component of the signal. Nyquist's theorem says that in order to resolve a signal of frequency f, the sampling ("carrier") frequency needs to be 2f. The temporal resolution dt is then the reciprocal of this sampling frequency, dt = 1/(2f).
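
As a rough numerical sketch of these two limits (the signal energy and frequency below are assumed values chosen only for illustration):

    # Sketch: two lower bounds on temporal resolution dt.
    # The energy and frequency values are illustrative assumptions.
    H = 6.626e-34          # Planck's constant, J*s
    E_SIGNAL = 1.602e-19   # assumed signal energy: 1 eV in joules

    dt_quantum = H / E_SIGNAL          # uncertainty-principle bound, dt >= h/dE
    print(f"quantum limit: dt >= {dt_quantum:.3e} s")

    f = 1.0e3                          # assumed highest frequency component, Hz
    dt_nyquist = 1.0 / (2.0 * f)       # Nyquist sampling period, dt = 1/(2f)
    print(f"Nyquist sampling period: dt = {dt_nyquist:.3e} s")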


Physical Measurement of Events in Time

Generally speaking, let a signal be defined by the occurrence of some physical phenomenon, like the observation of a sound pulse or an electromagnetic wave packet. The recording of this signal as a function of time starts with the observation of state changes in the signal detector, such as a change of amplitude or frequency. The signal ends in time when the detector no longer recognizes any changes in the observation of the signal.
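
A minimal sketch of this recording rule, assuming the detector delivers a stream of sampled amplitudes and that any change larger than a small threshold counts as a state change (the sample values and threshold are invented for illustration):

    # Sketch: mark the start and end of a signal by detector state changes.
    samples = [0.0, 0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0, 0.0]  # assumed amplitudes
    THRESHOLD = 0.05   # assumed minimum change that counts as a state change

    changes = [i for i in range(1, len(samples))
               if abs(samples[i] - samples[i - 1]) > THRESHOLD]
    start, end = (changes[0], changes[-1]) if changes else (None, None)
    print(f"signal observed from tick {start} to tick {end}")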

Assume that the detector uses a computer's clock, with the lowest resolution of time defined by the CPU's single clock cycle, or one CPU clock tick. Assume too that we are measuring time-ordered sequences of identical pulses. To accurately measure a single pulse, the clock tick, by Nyquist's theorem, needs to be shorter than the standard deviation of the pulse width.

Note, furthermore, that the smallest physical duration of the clock tick is limited by the uncertainty principle.
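
Stated as a sketch in code, with an assumed pulse width and clock tick:

    # Sketch: a clock tick resolves identical pulses only if the tick
    # is shorter than the pulse's standard deviation. Values assumed.
    PULSE_SIGMA = 1.0e-3   # assumed pulse standard deviation, s
    CPU_TICK = 5.0e-4      # assumed duration of one clock tick, s

    if CPU_TICK < PULSE_SIGMA:
        print("tick is fine enough to resolve each pulse")
    else:
        print("tick too coarse: pulses cannot be resolved")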

Information in a Time Interval

Let the duration of observing a pulsed signal be some arbitrary time interval T. Also assume that the occurrence of pulses is random, as in a Poisson process. In the following argument, we assume that the probability of the occurrence of the event, once we have measured it, is 1. The information in the occurrence of an event anywhere inside the time interval T is 1 bit. The key idea is that we gain more information when we know where inside this interval T the event occurred.


         *                        
    |-------------|
    0    |        T  
         t
  

Let the informational content of this observation, that is, the knowledge that a pulse was observed in time T, be I_o. Now split the time interval in half, and select the half in which this pulse resides. The informational content of this observation has been doubled, and the observable time interval has been cut in half. The information gained is 1 bit. Repeating this process gains another bit of information, and cuts the original time interval T to a quarter.

    I = 2 * I_o,    T -> T/2

If we repeat this procedure until we reach the limit of observation in time, then, the informational content is

    I = 2M * I_o,   T -> T/(2M)

where M is the number of divisions or splits in the time interval T. The observable time interval is T/(2M). The number of observable time intervals (ticks) in a time-ordered sequence follows Nyquist's theorem: two sampling intervals are needed per pulse.
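
A sketch of the halving procedure for a single pulse; the pulse time, interval T, and tick resolution are assumed values, and the function counts the splits M needed to shrink the observable interval to one tick:

    # Sketch: locate a pulse inside [0, T] by repeated halving, counting
    # the number of splits M needed to reach the tick resolution.
    def locate_pulse(pulse_time, T, tick):
        lo, hi, splits = 0.0, T, 0
        while (hi - lo) > tick:
            mid = (lo + hi) / 2.0
            if pulse_time < mid:       # pulse sits in the lower half
                hi = mid
            else:                      # pulse sits in the upper half
                lo = mid
            splits += 1                # each split gains one bit
        return lo, hi, splits

    lo, hi, M = locate_pulse(pulse_time=0.3, T=1.0, tick=1.0 / 1024)
    print(f"pulse localized to [{lo:.4f}, {hi:.4f}] after M = {M} splits")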

Assuming the informational content is additive for each pulse, the information content in a sequence of N pulses in the time interval T is

    I(T) ->  N * 2M * I_o.

If you have N pulses, you need 2M * N time intervals in order to be able to assign a value to each pulse.

The maximum information content of the signal I occurs when you make T equal to the difference between the first occurrence and last occurrence of the sequence of pulses.


    t0                tF
    |                 |
          *
    |--------|--------|
    0        |        T
            T/2


    T = tF - t0

    2M * t = T  or  2M = T/t

    f = 1/t

    2M = Tf

    I = (N*T) f
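
Putting these relations together in a sketch (the pulse times and tick duration below are invented):

    # Sketch: information content I = (N * T) f for a pulse sequence,
    # where T = tF - t0 and f = 1/t is the tick frequency.
    pulse_times = [0.10, 0.35, 0.40, 0.72, 0.95]   # assumed pulse times, s
    t = 1.0e-3                                     # assumed tick duration, s

    N = len(pulse_times)
    T = max(pulse_times) - min(pulse_times)        # T = tF - t0
    f = 1.0 / t                                    # tick frequency
    I = N * T * f                                  # I = (N * T) f
    print(f"N = {N}, T = {T:.2f} s, f = {f:.0f} Hz, I = {I:.0f}")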


Results

The informational content of the signal in the interval [0,T] is proportional to the number of pulses occurring in this interval, and inversely proportional to the shortest observable temporal duration t. This dimensionless number I is a measure of the informational content in a time interval, obtained by subdividing the total observation time of the signal into a binary partition. The unit for measuring this information is obtained by reducing the observation time down to the smallest operational temporal unit, called the "binary interval." I stress the word interval for its sense of duration.

If you, somewhat arbitrarily, standardize the observable time unit T to 1 second, then I = N * f, and we could call this unit of measure the bint. The bint would equal the number of pulses occurring in one second, measured with a large enough frequency of observation to detect the pulses. The bint is equivalent to the hertz (which, through E = hf, also serves as a measure of energy). The hertz could also be used as an explicit measure of dynamic information in the right context.
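
In code, the bint reduces to a count of pulses times the observation frequency over a standardized one-second interval (a sketch with assumed values):

    # Sketch: with T standardized to 1 second, I = N * f "bints".
    # Both values below are assumptions for illustration.
    N = 40        # assumed number of pulses observed in 1 second
    f = 1.0e4     # assumed observation frequency, Hz (fine enough
                  # to detect every pulse)

    I_bints = N * f
    print(f"I = {I_bints:.0f} bints")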

Information Transfer Rates

Nyquist's rule gives the minimum sampling rate required for the observation of a single pulse of frequency f as 2f. The information rate is related to the number of pulses in the signal in time T, or to the intrinsic binary interval dt. (~= means "proportional to")

  R_max ~= log (2N);   N = number of pulses in the time
                           interval T.

  R_max ~= log (2/dt);  1/dt describes how many pulses
                        can be placed in a unit time
                        interval (T = 1).
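
Both estimates can be evaluated directly; in this sketch the values of N and dt are assumed, and the logarithm is taken base 2 on the assumption that information is counted in bits (the text leaves the base unspecified):

    # Sketch: the two proportionality estimates for the maximum rate.
    # N, dt, and the base-2 log are assumptions for illustration.
    import math

    N = 100          # assumed number of pulses in interval T
    dt = 1.0e-3      # assumed intrinsic binary interval, s

    r_from_count = math.log2(2 * N)      # R_max ~ log(2N)
    r_from_tick = math.log2(2 / dt)      # R_max ~ log(2/dt)
    print(f"R_max ~ {r_from_count:.2f} (from N), ~ {r_from_tick:.2f} (from dt)")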


Measuring the Intrinsic dt in Neurons

We may be able to find the smallest physical binary interval used by neurons by examining the uncertainty principle, dt = h/dE. An example of nature building a biological organ to the limits of a physical system is the human eye: human eyes have evolved to resolve images up to the physical optical resolution limit. Similarly, the pulses triggered by neurons can only exist for durations no shorter than the dt defined by the uncertainty principle: dt >= h/dE.
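
As a closing sketch, the same bound evaluated with a hypothetical signal energy dE (the value below is a placeholder, not a measured figure for neurons):

    # Sketch: lower bound on a neuron's binary interval, dt >= h/dE.
    # The signal energy here is a hypothetical placeholder.
    H = 6.626e-34        # Planck's constant, J*s
    dE = 1.0e-20         # assumed signal energy, J (placeholder)

    dt_min = H / dE
    print(f"dt >= {dt_min:.3e} s")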


Reference

1. Flemming Topsøe, "Entropy and Codes", http://www.math.ku.dk/~topsoe/
   This article uses Kolmogorov's method of obtaining information content to define information in binary codes.