MBW:Network Models: Firing Rate, Feedforward and Recurrent Models


Background

This chapter is a summary of Chapter 7 of the book by Peter Dayan and Laurence F. Abbott, “Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems”[1]. The book gives a mathematical basis for understanding what nervous systems do, such as learning, memory, vision, and neuron function. While the book is divided into three parts, this page is specifically a summary and explanation of the seventh chapter, in the second part of the book.

Context/Biological Phenomenon Under Consideration

While the book covers many different topics, including plasticity, memory, and neural encoding and decoding, this chapter in particular covers the basics of neural networks and neural modeling, which have applications both in neuroscience research and in computational fields such as artificial intelligence and machine learning. This summary considers some of the models discussed in Chapter 7. The book treats these models at much greater length and includes others; the first three models below build on one another, while the fourth, the stochastic network model, is similar in many regards but offers a quite different way of looking at neural networks.

History Of Neural Network Models

Neural networks have been studied for quite a long time, but only recently have they begun to be looked at from a quantitative perspective. The first attempt at using mathematics as a basis for these models came in the 1940s, with a simple mathematical model based on electrical circuits. The first multilayer model was made in the 1970s, but not much research was done during this period due to computational limitations, among other reasons. Bidirectional connections between neurons and multilayered neuron models in the 1980s sparked much interest in the field, and with the increasing availability of computers, people began to research increasingly complex networks, which helped push forward multiple fields from neuroscience to artificial intelligence.


Types of Models

Firing Rate Models

The firing-rate model consists of two basic parts. The first describes how the total synaptic input to a neuron depends on the firing rates of its presynaptic input neurons (the inputs u). The second describes how the firing rate of the postsynaptic neuron (the output v) is affected by the total synaptic input.

[Figure 1: Feedforward inputs u driving the output v through synaptic weights w. Taken from Theoretical Neuroscience by Dayan & Abbott.]


Parameters:

  • $I_s$ is the total synaptic current
  • $N_u$ is the number of synaptic inputs
  • Synaptic inputs are labeled by $b = 1, 2, \ldots, N_u$
  • The firing rate of input $b$ is $u_b$ (input vector $\mathbf{u}$)
  • The synaptic weight is denoted $w_b$ (note that $w_b > 0$ for excitatory synapses and $w_b < 0$ for inhibitory synapses)
  • $K_s(t)$ is the synaptic kernel
  • The current in the output neuron at time $t$ is $I_s(t)$
  • $\rho_b(\tau)$ is the neural response function of input $b$ (a sum of delta functions, one per spike)
  • $F(I_s)$ is the firing-rate function.


Assumptions:

  • Effects of individual synaptic spikes sum linearly, provided the number of inputs is large enough.
  • The synaptic kernel most often used is an exponential:

$$K_s(t) = \frac{1}{\tau_s} e^{-t/\tau_s} \qquad (1)$$


Derivation of Firing-Rate Model:

The total current at time $t$, with spikes arriving at presynaptic input $b$ at times $t_i$, is given by:

$$I_s(t) = \sum_{b=1}^{N_u} w_b \sum_{t_i < t} K_s(t - t_i) \qquad (2)$$

With no nonlinear interactions between different synaptic currents, the equation becomes:

$$I_s(t) = \sum_{b=1}^{N_u} w_b \int_{-\infty}^{t} d\tau \, K_s(t - \tau)\, \rho_b(\tau) \qquad (3)$$

We now replace the neural response function $\rho_b(\tau)$ in equation (3) with the firing rate of input $b$, $u_b(\tau)$. Therefore (3) becomes:

$$I_s(t) = \sum_{b=1}^{N_u} w_b \int_{-\infty}^{t} d\tau \, K_s(t - \tau)\, u_b(\tau) \qquad (4)$$

Taking the derivative of (4) with respect to $t$ and substituting the exponential synaptic kernel from (1), we get:

$$\tau_s \frac{dI_s}{dt} = -I_s + \mathbf{w} \cdot \mathbf{u} \qquad (5)$$

where the sum $\sum_b w_b u_b$ is expressed as the dot product $\mathbf{w} \cdot \mathbf{u}$. Since (4) determines the current entering a postsynaptic neuron, we must now determine the postsynaptic firing rate from $I_s$.

The firing-rate equation is derived by applying a function $F$ to the input, where $F$ incorporates a threshold and half-wave rectification (i.e. the input current must exceed some activation threshold before the output rises above the x-axis) and gives the steady-state output firing rate. Letting the output rate relax toward this steady state, the model becomes:

$$\tau_r \frac{dv}{dt} = -v + F(\mathbf{w} \cdot \mathbf{u}) \qquad (6)$$
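To make equation (6) concrete, here is a minimal MATLAB sketch (not from the book; all parameter values are illustrative assumptions) that integrates it with the Euler method:

% Euler integration of tau_r * dv/dt = -v + F(w.u) for a single output
% neuron with constant inputs. Parameter values are assumptions.
tau_r = 0.01;                % firing-rate time constant (s)
dt    = 1e-4;                % Euler time step (s)
w     = [1; -0.5; 2];        % synaptic weights (excitatory and inhibitory)
u     = [10; 5; 8];          % constant presynaptic rates (Hz)
F     = @(I) max(I - 5, 0);  % half-wave rectification with threshold 5
v = 0;                       % output rate starts at zero
for t = dt:dt:0.1
    v = v + dt/tau_r * (-v + F(w' * u));
end
v                            % relaxes to the steady state F(w.u) = 18.5

Because the input is constant, the output simply relaxes exponentially to $F(\mathbf{w} \cdot \mathbf{u})$ with time constant $\tau_r$.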

Experimental and computational results show, however, that model (6) does not provide a completely accurate prediction of the firing rate for all levels of input current, and thus a more complicated model is needed. Examples of more complicated models are the feedforward and recurrent networks, shown graphically by:

[Figure 2: (A) a feedforward network; (B) a recurrent network. Taken from Theoretical Neuroscience by Dayan & Abbott.]


Feedforward Networks

A feedforward network has two layers of neurons connected by a matrix W. The feedforward model is only slightly more complicated than the basic firing-rate model (it is built directly upon it) and is arguably one of the simplest neural networks, in that it does not include any loops or “backward” directed synapses. Because of this, it is often not very practical on its own, but it is useful as a basis on which to build more complicated models that do include loops and cycles, such as recurrent networks and excitatory-inhibitory networks. Using the same formulation as before, but with a matrix W rather than a vector w to account for multiple outputs, the feedforward model can be written as:

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{F}(W\mathbf{u}) \qquad (7)$$

Here W is the matrix of synaptic weights, with components $W_{ab}$ representing the strength of the connection from input b to output a.
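For concreteness, a small example (the weights and rates below are made up for illustration) computing the steady-state responses $\mathbf{v} = \mathbf{F}(W\mathbf{u})$ of two output neurons:

% Steady-state feedforward responses v = F(W*u); all values are assumptions
F = @(I) max(I, 0);      % threshold-linear rectification (threshold 0)
W = [ 1.0 -0.5  2.0 ;    % weights onto output neuron 1
      0.3  1.5 -1.0 ];   % weights onto output neuron 2
u = [10; 5; 8];          % presynaptic firing rates (Hz)
v = F(W * u)             % returns [23.5; 2.5]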

Recurrent Networks

In a recurrent network there are likewise two layers of neurons, but the neurons in the output layer are also interconnected through a matrix M. The connection from an output neuron a′ to another output neuron a is denoted by the matrix element $M_{aa'}$.

The model for the recurrent model is:

$$\tau_r \frac{d\mathbf{v}}{dt} = -\mathbf{v} + \mathbf{F}(W\mathbf{u} + M\mathbf{v}) \qquad (8)$$


Recurrent networks are also capable of forming selective activity patterns when they receive a complex input, which is not possible with feedforward networks. These patterns can be interpreted in a number of ways, for example as stored memories, giving rise to notions such as memory capacity. These applications of recurrent neural networks have far-reaching implications.
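As a sketch (again with made-up parameter values, not values from the book), equation (8) can be integrated the same way as before; here mutual inhibition between two output units produces a simple winner-take-all effect:

% Euler integration of tau_r * dv/dt = -v + F(W*u + M*v)
% All weights and rates below are illustrative assumptions.
tau_r = 0.01; dt = 1e-4;
F = @(I) max(I, 0);          % threshold-linear rectification
W = [ 1.0 -0.5  2.0 ;
      0.3  1.5 -1.0 ];       % feedforward weights (2 outputs, 3 inputs)
M = [ 0 -0.4 ; -0.4 0 ];     % mutual inhibition between the two outputs
u = [10; 5; 8];              % constant presynaptic rates (Hz)
v = zeros(2, 1);
for t = dt:dt:0.3
    v = v + dt/tau_r * (-v + F(W*u + M*v));
end
v   % the strongly driven unit stays active and suppresses the other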

Dale’s law states that a neuron cannot excite some of its postsynaptic targets while inhibiting others, which means that, for a given presynaptic neuron, all of the weights $M_{aa'}$ must have the same sign. Because of this, excitatory and inhibitory populations must be described separately, and the book gives a pair of equations for them, respectively:

$$\tau_E \frac{d\mathbf{v}_E}{dt} = -\mathbf{v}_E + \mathbf{F}_E(\mathbf{h}_E + M_{EE}\mathbf{v}_E + M_{EI}\mathbf{v}_I) \qquad (9)$$

$$\tau_I \frac{d\mathbf{v}_I}{dt} = -\mathbf{v}_I + \mathbf{F}_I(\mathbf{h}_I + M_{IE}\mathbf{v}_E + M_{II}\mathbf{v}_I) \qquad (10)$$
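A brief MATLAB sketch of equations (9) and (10) (the weights and time constants here are assumptions chosen so the pair settles through damped oscillations; they are not values from the book):

% Coupled excitatory-inhibitory rate pair, integrated with Euler's method
tau_E = 0.01; tau_I = 0.005; dt = 1e-4;   % assumed time constants (s)
F   = @(I) max(I, 0);                     % threshold-linear rectification
hE  = 10; hI = 2;                         % external inputs
MEE = 1.25; MEI = -1;                     % weights onto the excitatory unit
MIE = 1;    MII = 0;                      % weights onto the inhibitory unit
vE = 0; vI = 0;
for t = dt:dt:0.5
    vE_new = vE + dt/tau_E * (-vE + F(hE + MEE*vE + MEI*vI));
    vI     = vI + dt/tau_I * (-vI + F(hI + MIE*vE + MII*vI));
    vE     = vE_new;
end
[vE vI]   % settles near the fixed point vE ~ 10.7, vI ~ 12.7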

Network Stability

For a network experiencing constant input and sitting in a steady state with dv/dt = 0, the network is said to be exhibiting fixed-point behavior. While this is mostly what we have dealt with so far, there are many other possible behaviors, such as network oscillations and chaotic dynamics. To show that a network will settle into a steady state, a Lyapunov function[2] can be used to prove the stability of the underlying ordinary differential equation. The Lyapunov function is incredibly useful and essential to certain aspects of control and stability theory[3].
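As a one-line worked example (not from the book): for the scalar equation $\dot{x} = -x$, the function $V(x) = \tfrac{1}{2}x^2$ is a Lyapunov function, since it is positive everywhere except at $x = 0$ and, along trajectories,

$$\frac{dV}{dt} = x\,\dot{x} = -x^2 \le 0,$$

so $V$ decreases monotonically and the fixed point $x = 0$ is stable. The chapter applies the same idea to recurrent rate networks via a suitably constructed network Lyapunov function.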

Stochastic Networks

This section will deal with a Boltzmann machine in which the input-output relationship is stochastic.

[Figure 3: A Boltzmann machine[4] with some of its edges labeled. Taken from the Wikipedia page for the Boltzmann machine.]

Neurons are treated as binary, so that $v_a(t) = 1$ when unit a is active at time t and $v_a(t) = 0$ when unit a is inactive at time t. While this binary description does not keep track of the total synaptic current, it still yields remarkably interesting stochastic models that can recover patterns from somewhat chaotic inputs.


At each discrete time step, a unit a is randomly chosen and probabilistically updated to 1 with probability:

$$P[v_a(t + \Delta t) = 1] = F(I_a(t)), \qquad F(I_a) = \frac{1}{1 + \exp(-I_a)} \qquad (11)$$

where $I_a = h_a + \sum_{a'} M_{aa'} v_{a'}$ is the total input to unit a. Likewise,

$$P[v_a(t + \Delta t) = 0] = 1 - F(I_a(t)) \qquad (12)$$


Here F has the property that the larger $I_a(t)$ becomes, the more likely unit a is to take the value 1. Because of this update rule, the state of the network evolves as a Markov chain: the random sequence of states that emerges is such that v(t + Δt) depends only on v(t). In other words, the earlier history of the model is irrelevant once its state at the present moment is known.
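A minimal MATLAB sketch of this update rule (a toy network with arbitrary assumed weights, not code from the book):

% Glauber-style updates for a small binary stochastic network
rng(1);                          % reproducible example
N = 5;
M = randn(N); M = (M + M')/2;    % symmetric recurrent weights
M(1:N+1:end) = 0;                % no self-connections
h = randn(N, 1);                 % constant external input h
F = @(I) 1 ./ (1 + exp(-I));     % sigmoid update probability
v = double(rand(N, 1) > 0.5);    % random initial binary state
for step = 1:10000
    a    = randi(N);             % choose one unit at random
    Ia   = h(a) + M(a,:) * v;    % its total input
    v(a) = double(rand < F(Ia)); % set to 1 with probability F(Ia)
end
v'                               % a sample from the equilibrium distribution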


From equation (11), Glauber dynamics can be used to show that v does not converge to a fixed point, but can instead be described by a probability distribution built from an energy function, the energy function being $E(\mathbf{v}) = -\mathbf{h} \cdot \mathbf{v} - \frac{1}{2}\mathbf{v} \cdot M \mathbf{v}$. Once this stochastic network has reached its equilibrium state, the probability distribution becomes:

$$P[\mathbf{v}] = \frac{\exp(-E(\mathbf{v}))}{Z}, \qquad Z = \sum_{\mathbf{v}} \exp(-E(\mathbf{v})) \qquad (13)$$

This is called the Boltzmann distribution, and a precise derivation can be followed on Wikipedia[5].
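A quick consistency check (a sketch, not the book's full derivation): flipping unit a from 0 to 1 with the other units held fixed changes the energy by $\Delta E = -I_a$, and the sigmoid in (11) satisfies

$$\frac{F(I_a)}{1 - F(I_a)} = e^{I_a} = \frac{\exp(-E(\mathbf{v};\, v_a = 1))}{\exp(-E(\mathbf{v};\, v_a = 0))},$$

so the update probabilities are in detailed balance with the Boltzmann distribution, which is why the network equilibrates to (13).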


Boltzmann Distribution / Boltzmann Machine

The Boltzmann machine is a stochastic recurrent neural network created by G. Hinton and T. Sejnowski and named after the Boltzmann distribution. Its deterministic counterpart is the Hopfield network, another form of recurrent neural network. These stochastic neural network models are frequently used in computational applications of machine learning.

Toy Model of a Feedforward Network

Creating a toy model of a neural network in MATLAB is fairly simple, as there are models already set up in the Neural Network Toolbox. Here is an example of a very simple feedforward network, which already hints at the possibilities of even a simple neural network:

% tutorial followed from:
% http://matlabbyexamples.blogspot.com/2011/03/starting-with-neural-network-in-matlab.html
% Let the model have three inputs (a, b, c) and one output
% y = 2a + bc + 7c (plus noise)
a= rand(1,1000);
b=rand(1,1000);
c=rand(1,1000);
n=rand(1,1000)*0.05;
% n is our noise term, added to make the target more realistic

y=2*a+b.*c+7*c+n;

%input matrix
I = [a; b; c];

% R lists each input's minimum in its 1st column and maximum in its 2nd
% (each input here is uniform on [0, 1])
% each layer output v is a transformed version of the previous layer u:
% v = sum(w.*u) + b

R = [0 1; 0 1; 0 1];


% MATLAB's Neural Network Toolbox has a built-in feedforward network
% constructor called newff; in the older syntax used here,
% net = newff(PR, [S1 ... SN], {TF1 ... TFN}) takes
% PR  - an Rx2 matrix of min and max values for the R input elements
% Si  - the sizes of the N layers (the last one is the output layer)
% TFi - the transfer function of each layer
% and returns an N-layer feed-forward backpropagation network.

net = newff(R, [4 1], {'tansig' 'purelin'});  % 4 hidden units, 1 linear output

% train the network on the input matrix I and target vector y

net=train(net,I,y);

O1=sim(net,I);
subplot 211
plot(1:1000,y,1:1000,O1);
subplot 212
scatter(y,O1)

% test the trained network on a new point a = 2, b = 2, c = 2
% (outside the [0,1] training range, so the network must extrapolate)
y1=sim(net,[2 2 2]')

Output: y1 = 22.9371

References and Further Links

  1. Dayan, P. and Abbott, L. F., Theoretical Neuroscience, Chapter 7: http://www.gatsby.ucl.ac.uk/~dayan/book/ch7.pdf
  2. http://en.wikipedia.org/wiki/Lyapunov_function
  3. http://www.stanford.edu/class/ee363/lectures/lyap.pdf
  4. http://en.wikipedia.org/wiki/Boltzmann_machine
  5. http://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_statistics

Wikipedia pages on recurrent and feedforward neural networks:

Recurrent Neural Networks: http://en.wikipedia.org/wiki/Recurrent_neural_network
Feedforward Neural Networks: http://en.wikipedia.org/wiki/Feedforward_neural_networks

History of Neural Networks: http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/neural-networks/History/history1.html