Nodal computation approximations in asynchronous cognitive models
James K. Peterson^{1}
DOI: 10.1186/s40469-015-0004-y
© Peterson; licensee Springer. 2015
Received: 4 September 2014
Accepted: 6 April 2015
Published: 7 July 2015
Abstract
Background
We are interested in an asynchronous graph based model, \(\boldsymbol {\mathcal {G}(N,E)}\) of cognition or cognitive dysfunction, where the nodes N provide computation at the neuron level and the edges E _{ i→j } between nodes N _{ i } and node N _{ j } specify internode calculation.
Methods
We discuss how to improve update and evaluation needs for fast calculation using approximations of neural processing for first and second messenger systems as well as the axonal pulse of a neuron.
Results
These approximations give rise to a low memory footprint profile for implementation on multicore platforms using functional programming languages such as Erlang, Clojure and Haskell when we have no shared memory and all states are immutable.
Conclusions
The implementation of cognitive models using these tools on such platforms will allow the possibility of fully realizable lesion and longitudinal studies.
Keywords
Cognition models; Graphs of computational nodes; Nodal computation approximation
Background
The nodal output of node N _{ i } at time t has the form \(Y_{i}(t) \: = \: \sigma _{i}\left (t, \: I_{i}(t) \: + \: \sum _{j \in \boldsymbol {\mathcal B}(i)} E_{j \rightarrow i}\left (Y_{j}(t)\right) \right)\) where I _{ i } is a possible external input, \(\boldsymbol {\mathcal B}(i)\) is the list of nodes which connect to the input side of node N _{ i } and σ _{ i }(t) is the function which processes the inputs to the node into outputs. This processing function is mutable over time t because second messenger systems alter how information is processed at each time tick. Hence, our model consists of a graph which captures the connectivity or topology of the brain model, on top of which are laid the instructions for information processing via the time dependent node and edge processing functions. A simple look at edge processing shows that the nodal output, perhaps an action potential, is transferred without change to a synaptic connection where it initiates a spike in Ca ^{2+} ions which results in neurotransmitter release. The efficacy of this release depends on many things, but we can focus on four: r _{ u }(i,j), the rate of reuptake of neurotransmitter in the connection between node N _{ i } and node N _{ j }; r _{ d }(i,j), the rate at which the neurotransmitter is destroyed via an appropriate oxidase; r _{ r }(i,j), the rate of neurotransmitter release; and n _{ d }(i,j), the density of the neurotransmitter receptor. The triple (r _{ u }(i,j),r _{ d }(i,j),r _{ r }(i,j))≡T(i,j) determines a net increase or decrease of neurotransmitter concentration between the two nodes: r _{ r }(i,j)−r _{ u }(i,j)−r _{ d }(i,j)≡r _{ net }(i,j). The efficacy of a connection between nodes is then proportional to the product W _{ i,j }=r _{ net }(i,j)×n _{ d }(i,j). Hence, each triple is a determining signature for a given neurotransmitter, and the effectiveness of the neurotransmitter is proportional to the net neurotransmitter flow times the available receptor density. A very simple version of this is to assign the value of the edge processing function E _{ j→i } to be the weight W _{ i,j }, as is standard in a simple connectionist architecture.
We want to be more sophisticated than this and therefore want to allow our nodal processing functions to approximate the effects of both first and second messenger systems; consult (Peterson 2014a) for more detail.
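As a concrete sketch, the connectionist weight W _{ i,j }=r _{ net }(i,j)×n _{ d }(i,j) can be computed with a few lines of code. The following Python fragment is purely illustrative (the target platforms discussed later are Erlang-like); the function names and sample rates are our own:

```python
def net_rate(r_r, r_u, r_d):
    # Net neurotransmitter change: release minus reuptake minus destruction.
    return r_r - r_u - r_d

def edge_weight(r_r, r_u, r_d, n_d):
    # Connection efficacy W_ij = r_net(i, j) * n_d(i, j).
    return net_rate(r_r, r_u, r_d) * n_d

# Illustrative numbers: strong release, modest reuptake and destruction,
# receptor density 0.8.
w = edge_weight(r_r=1.0, r_u=0.3, r_d=0.2, n_d=0.8)
```

A positive weight then models a net excitatory connection; a negative r _{ net } would flip the sign.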
Cellular triggers
Now consider the transcriptional control of a regulatory molecule such as N F κ B (which plays a role in immune system response) or a neurotransmitter. We can call this a trigger and denote it by T _{0}. This mechanism is discussed in a semiabstract way in (Gerhart and Kirschner 1997); we will discuss it even more abstractly. Consider a trigger T _{0} which activates a cell surface receptor. Inside the cell, there are always protein kinases that can be activated in a variety of ways. Here we denote a protein kinase by the symbol PK. A common mechanism for such an activation is to add to PK another protein subunit to form the complex \(PK^{*}\). This chain of events looks like this: \(T_{0} \: + \: CSR \: \rightarrow \: PK \: \rightarrow \: PK^{*}\), where CSR denotes a cell surface receptor. \(PK^{*}\) then acts to phosphorylate another protein. The cell is filled with large amounts of a transcription factor we will denote by T _{1} and an inhibitory protein for T _{1} we label as \(T_{1}^{\sim }\). This symbol, \(T_{1}^{\sim }\), denotes the complement or anti version of T _{1}. In the cell, T _{1} and \(T_{1}^{\sim }\) are generally joined together in a complex denoted by \(T_{1}/T_{1}^{\sim }\). The addition of \(T_{1}^{\sim }\) to T _{1} prevents T _{1} from being able to access the genome in the nucleus to transcribe its target protein.
The trigger T _{0} activates our protein kinase PK to its active form \(PK^{*}\). The activated \(PK^{*}\) is used to add a phosphate to \(T_{1}^{\sim }\). This is called phosphorylation. Hence, \(PK^{*} \: + \: T_{1}^{\sim } \: \rightarrow \: T_{1}^{\sim } P\), where \(T_{1}^{\sim } P\) denotes the phosphorylated version of \(T_{1}^{\sim }\). Since T _{1} is bound into the complex \(T_{1}/T_{1}^{\sim }\), we actually have \(PK^{*} \: + \: T_{1}/T_{1}^{\sim } \: \rightarrow \: T_{1}/T_{1}^{\sim } P\). In the cell, there is always present a collection of proteins which tend to bond with the phosphorylated form \(T_{1}^{\sim } P\). Such a system is called a tagging system. The protein used by the tagging system (ubiquitin, in the cell) is denoted here by U, and usually a chain of n such proteins is glued together to form a polymer \(U_{n}\). The tagging system creates the new complex \(T_{1}/T_{1}^{\sim } P/U_{n}\). This gives the following event tree at this point: \(T_{0} \: \rightarrow \: CSR \: \rightarrow \: PK^{*} \: \rightarrow \: T_{1}/T_{1}^{\sim } P \: \rightarrow \: T_{1}/T_{1}^{\sim } P/U_{n}\).
Also, inside the cell, the tagging system coexists with a complementary system whose function is to destroy or remove the tagged complexes. Hence, the combined system \(\text {Tag} \: \leftrightarrow \: \text {Remove} \: \rightarrow \: T_{1}/T_{1}^{\sim } P\) is a regulatory mechanism which allows the transcription factor T _{1} to be freed from its bound state \(T_{1}/T_{1}^{\sim }\) so that it can perform its function of protein transcription in the genome. The removal system is specific to the tag molecules; hence although it functions on \(T_{1}/T_{1}^{\sim } P/U_{n}\), it would work just as well on \(Q/U_{n}\) where Q is any other tagged protein. We will denote the removal system which destroys tagged proteins Q from a substrate S by the symbol \(S/Q/U_{n} \: \xrightarrow {f} \: S\). This symbol means the system acts on \(S/Q/U_{n}\) units and outputs S via mechanism f. Note the details of the mechanism f are largely irrelevant here. Thus, we have the reaction \(T_{1}/T_{1}^{\sim } P/U_{n} \: \xrightarrow {f} \: T_{1}\) which releases T _{1} into the cytoplasm. The full event chain is thus: \(T_{0} \: \rightarrow \: CSR \: \rightarrow \: PK^{*} \: \rightarrow \: T_{1}/T_{1}^{\sim } P \: \rightarrow \: T_{1}/T_{1}^{\sim } P/U_{n} \: \xrightarrow {f} \: T_{1}\).

T _{1} does not exist in a free state; instead, it is always bound into the complex \(T_{1}/ T_{1}^{\sim }\) and hence can’t be activated until the \(T_{1}^{\sim }\) is removed.

Any of the steps required to remove \(T_{1}^{\sim }\) can be blocked, effectively killing transcription:

Phosphorylation of \(T_{1}^{\sim }\) into \(T_{1}^{\sim } P\) is needed so that tagging can occur, so anything that blocks the phosphorylation step will also block transcription.

Anything that blocks the tagging of the phosphorylated \(T_{1}^{\sim } P\) will thus block transcription.

Anything that stops the removal mechanism will also block transcription.

The steps above can therefore be used to further regulate the transcription of T _{1} into the protein P(T _{1}). Let \(T_{0}^{\prime }\), \(T_{0}^{\prime \prime }\) and \(T_{0}^{\prime \prime \prime }\) be inhibitors of the steps above. These inhibitory proteins can themselves be regulated via triggers through mechanisms just like the ones we are discussing. In fact, P(T _{1}) could itself serve as an inhibitory trigger, i.e. as any one of the inhibitors \(T_{0}^{\prime }\), \(T_{0}^{\prime \prime }\) and \(T_{0}^{\prime \prime \prime }\). Our theoretical pathway is now \(T_{0} \: \rightarrow \: PK^{*} \: \xrightarrow {i} \: T_{1}/T_{1}^{\sim } P \: \xrightarrow {ii} \: T_{1}/T_{1}^{\sim } P/U_{n} \: \xrightarrow {iii} \: T_{1} \: \rightarrow \: P(T_{1})\), where step i, step ii and step iii can be inhibited by \(T_{0}^{\prime }\), \(T_{0}^{\prime \prime }\) and \(T_{0}^{\prime \prime \prime }\) respectively. Note we have expanded to a system of four triggers which affect the outcome of P(T _{1}). Also, note that step i is a phosphorylation step. Now, let’s refine our analysis a bit more. Usually, reactions are paired: we typically have the competing reactions \(T_{1}/T_{1}^{\sim } \: + \: PK^{*} \: \rightleftharpoons \: T_{1}/T_{1}^{\sim } P\). Hence, we can imagine that step i is a system which is in dynamic equilibrium. The amount of \(T_{1}/T_{1}^{\sim } P\) formed and destroyed forms a stable loop with no net \(T_{1}/T_{1}^{\sim } P\) formed. The trigger T _{0} introduces additional \(PK^{*}\) into this stable loop and thereby affects the net production of \(T_{1}/T_{1}^{\sim } P\). Thus, a new trigger \(T_{0}^{\prime }\) could profoundly affect phosphorylation of \(T_{1}^{\sim }\) and hence production of P(T _{1}). We can see from the above comments that very fine control of P(T _{1}) production can be achieved if we think of each step as a dynamical system in flux equilibrium. Note our discussion above is a first step towards thinking of this mechanism in terms of interacting objects.
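The blocking logic above can be sketched as a toy event chain. This Python fragment is illustrative only; the step labels and the assumed 1-1 conversion of freed T _{1} to P(T _{1}) are our own conventions, not the paper's:

```python
# Hypothetical step names; blocking any one of steps i-iii kills transcription.
STEPS = ("i: phosphorylation", "ii: tagging", "iii: removal")

def p_t1_output(t0_level, blocked=()):
    # Follow the trigger chain T0 -> ... -> P(T1); assume a 1-1 conversion
    # of freed T1 to P(T1) when no step is inhibited.
    if t0_level <= 0.0:
        return 0.0
    if any(step in blocked for step in STEPS):
        return 0.0
    return t0_level
```

Blocking any single step yields zero output, mirroring the three inhibition points listed above.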
Dynamical loop details
Equation 9 defines the equilibrium concentrations \(\left [T_{1}/T_{1}^{\sim } P\right ]_{e}\) and \(\left [T_{1}/T_{1}^{\sim }\right ]_{e}\). Now if \(\left [T_{1}/T_{1}^{\sim } P\right ]\) increased to \(\left [T_{1}/T_{1}^{\sim } P\right ] \: + \: \delta _{T_{1}/T_{1}^{\sim } P}\), the percentage increase would be \(100 \: \biggl (1 \: + \: \frac {\delta _{T_{1}/T_{1}^{\sim } P}}{[T_{1}/T_{1}^{\sim } P]_{e}} \biggr)^{2}\). If the increase in \([T_{1}/T_{1}^{\sim } P]\) is due to step i, we know
For convenience, let’s define the relative change in a variable x as \(r_{x} \: = \: \frac {\delta _{x}}{x}\). Thus, we can write
which allows us to recast the change in \(\left [T_{1}/T_{1}^{\sim }\right ]\) equation as
This dynamical loop can be analyzed just as we did in step ii. We see
and the triggered increase in \(PK^{*}\) due to T _{0} induces the relative change
We can therefore clearly see the multiplier effects of trigger T _{0} on protein production T _{1} which, of course, also determines changes in the production of P(T _{1}).
The mechanism by which the trigger T _{0} creates activated kinase \(PK^{*}\) can be complex; in general, each unit of T _{0} creates λ units of \(PK^{*}\), where λ is quite large – perhaps 10,000 or more times the base level of \(PK^{*}\). Hence, if \(\lambda \: = \: 1 \: + \: \beta \) and the relative change in \(PK^{*}\) is β, we have \((1 \: + \: \beta)^{2} \: - \: 1 \: = \: 2 \beta \: + \: \beta ^{2} \: \approx \: \beta ^{2}\)
for β≫1. From this quick analysis, we can clearly see the potentially explosive effect changes in T _{0} can have on \(PK^{*}\). Let us note two very important points now: there is richness to this pathway, and the target P(T _{1}) can alter hardware or software easily. If P(T _{1}) were a K ^{+} voltage activated gate, then we would see an increase of \(\delta _{T_{1}}\) (assuming 1−1 conversion of T _{1} to P(T _{1})) in the concentration of K ^{+} gates. This corresponds to a change in the characteristics of the axonal pulse. Similarly, P(T _{1}) could create N a ^{+} gates, thereby changing axonal pulse characteristics. P(T _{1}) could also create other proteins whose impact on the axonal pulse is through indirect means such as the inhibitory \(T_{0}^{\prime }\) etc. pathways. There is also the positive feedback pathway via T _{0} receptor creation. Note that all of these pathways are essentially modeled by this primitive.
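A small numerical sketch makes this multiplier effect concrete. The relation λ=1+β and the relative change 2β+β ^{2} are taken from the discussion above; the Python helper and sample values are our own:

```python
def relative_amplification(beta):
    # Scaling PK* by lambda = 1 + beta changes the downstream product by
    # (1 + beta)**2 - 1 = 2*beta + beta**2, which is ~beta**2 for beta >> 1.
    return 2.0 * beta + beta * beta

small = relative_amplification(0.01)     # near-linear regime: ~2*beta
large = relative_amplification(9999.0)   # lambda ~ 10,000: dominated by beta**2
```

For λ near 10,000 the quadratic term dominates, which is exactly the explosive sensitivity to T _{0} noted above.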
Second messengers
We denote the complex formed by the binding of E _{1} and T _{1} by E _{1}/T _{1}. From Figure 2, we see that the proportion of T _{1} that binds to the genome (DNA) and initiates protein creation P(T _{1}) is thus s r T _{1}.
The protein created, P(T _{1}), could be many things. Here, let us assume that P(T _{1}) is a sodium, Na ^{+}, gate. Thus, our high level model is s E _{1}/r T _{1} + DNA→Na ^{+} gate. We therefore increase the concentration of Na ^{+} gates, [ N a ^{+}], thereby creating an increase in the sodium conductance, g _{ Na }. The standard Hodgkin-Huxley conductance model (details are in (Peterson 2014c)) is given by \(g_{\textit {Na}}(t,V) = g_{\textit {Na}}^{max} \mathcal {M}_{\textit {Na}}^{p}(t,V) \mathcal {H}_{\textit {Na}}^{q}(t,V)\) where t is time and V is membrane voltage. The variables \(\mathcal {M}_{\textit {Na}}\) and \(\mathcal {H}_{\textit {Na}}\) are the activation and inactivation functions for the sodium gate with p and q appropriate positive powers. Finally, \(g_{\textit {Na}}^{max}\) is the maximum conductance possible. These models generate \(\mathcal {M}_{\textit {Na}}\) and \(\mathcal {H}_{\textit {Na}}\) values in the range (0,1) and hence, \(0 \: \leq \: g_{\textit {Na}}(t,V) \: \leq \: g_{\textit {Na}}^{max}\).
We can model the choice process, r T _{1} or (1−r)B _{1}/T _{1}, via a simple sigmoid, \(f(x) = 0.5 \left (1 + \tanh \left (\frac {x - x_{0}}{g}\right) \right)\) where the transition rate at x _{0} is \(f^{\prime }(x_{0}) = \frac {1}{2g}\). Hence, the “gain” of the transition can be adjusted by changing the value of g. We assume g is positive. This function can be interpreted as switching from a “low” state 0 to a “high” state 1 at speed \(\frac {1}{2g}\). Now the function h=r f provides an output in (0,r). If x is larger than the threshold x _{0}, h rapidly transitions to the high state r. On the other hand, if x is below threshold, the output remains near the low state 0.
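A minimal implementation of this sigmoid switch, assuming only the formulas above, is:

```python
import math

def f(x, x0, g):
    # Sigmoid switching from 0 to 1 around x0 with transition rate 1/(2g).
    return 0.5 * (1.0 + math.tanh((x - x0) / g))

def h(x, x0, g, r):
    # Scaled switch with outputs in (0, r): near r above threshold, near 0 below.
    return r * f(x, x0, g)
```

Small g gives a sharp, nearly all-or-none transition; large g gives a gradual one.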
We assume the trigger T _{0} does not activate the port P unless its concentration is past some threshold [ T _{0}]_{ b }, where [ T _{0}]_{ b } denotes the base concentration. Hence, we can model the port activity by \(h_{p}([\!T_{0}]) = \frac {r}{2} \left (~ 1 + \tanh \left (\frac {[T_{0}] - [T_{0}]_{b}}{g_{p}}\right) ~\right)\) where the two shaping parameters g _{ p } (transition rate) and [ T _{0}]_{ b } (threshold) must be chosen. We can thus model the schematic of Figure 1 as h _{ p }([ T _{0}]) [ T _{1}]_{ n } where [ T _{1}]_{ n } is the nominal concentration of the induced trigger T _{1}. In a similar way, we let \(h_{e}(x) = \frac {s}{2}\left (~ 1 + \tanh \left (\frac {x - x_{0}}{g_{e}}\right) ~\right)\) Thus, for x = h _{ p }([ T _{0}]) [ T _{1}]_{ n }, h _{ e } is a switch from 0 to s. Note that 0 ≤ x ≤ r[ T _{1}]_{ n } and so if h _{ p }([ T _{0}]) [ T _{1}]_{ n } is close to r[ T _{1}]_{ n }, h _{ e } is approximately s. Further, if h _{ p }([ T _{0}]) [ T _{1}]_{ n } is small, we will have h _{ e } close to 0. This suggests a threshold value for h _{ e } of \(\frac {r[T_{1}]_{n}}{2}\). We conclude
for appropriate values of p and q within a standard Hodgkin  Huxley model.
Next, if we assume a modulatory agent acts as a trigger T _{0} as described above, we can generate action potential pulses using the standard Hodgkin-Huxley model for a large variety of critical sodium trigger shaping parameters. We label these with a Na to indicate their dependence on the sodium second messenger trigger: \(\left [ r^{Na}, {[\!T_{0}]_{b}}^{Na}, g^{Na}_{p}, s^{Na}, g^{Na}_{e}, e^{Na}, g_{\textit {Na}}, \delta _{\textit {Na}} \right ]^{\prime }\). We can follow the procedure outlined in this section for a variety of triggers. We therefore can add a potassium gate trigger with shaping parameters \(\left [r^{K},{[\!T_{0}]^{K}}_{b},{g^{K}_{p}}, s^{K},{g^{K}_{e}}, e^{K}, g_{K}, \delta _{K}\right ]^{\prime }\).
Concatenated sigmoid transitions:
A graphic computation model

B _{1}: thereby increasing T _{1} binding

T _{1}: thereby increasing r T _{1} and P(T _{1})

E _{1} thereby increasing P(T _{1}).
and so forth. We can also specialize this to the case of Ca ^{2+} triggers, but we will not do so here.
Let’s specialize our discussion to the case of a neurotransmitter trigger. When two cells interact via a synaptic interface, the electrical signal in the presynaptic cell triggers a release of a neurotransmitter (NT) from the presynapse which crosses the synaptic cleft and then, by docking to a port on the postcell, initiates a postsynaptic cellular response. The general presynaptic mechanism consists of several key elements: one, NT synthesis machinery so the NT can be made locally; two, receptors for NT uptake and regulation; three, enzymes that package the NT into vesicles in the presynapse membrane for delivery to the cleft. There are two general presynaptic types: monoamine and peptide. In the monoamine case, all three elements for the precell response are first manufactured in the precell using instructions contained in the precell’s genome and shipped to the presynapse. Hence, the monoamine presynapse does not require further instructions from the precell genome and its response is therefore fast. The peptide presynapse cannot manufacture its neurotransmitter locally; a peptide NT can only be manufactured using the precell genome, so if a peptide neurotransmitter is needed, there is a lag in response time. Also, in the peptide case, there is no reuptake pump, so peptide NT can’t be reused.

It passes through the gate, entering the interior of the dendrite. It then forms a complex, \(\hat {\zeta }\).

Inside the postdendrite, \(\hat {\zeta }\) influences the passage of ions through the cable wall. For example, it may increase the passage of N a ^{+} through the membrane of the cable, thereby initiating an EPS. It could also influence the formation of a calcium current, an increase in K ^{+} and so forth.

The influence via \(\hat {\zeta }\) can be that of a second messenger trigger.
Each neuron creates a brew of neurotransmitters specific to its type. A trigger of type T _{0} can thus influence the production of neurotransmitters with concomitant changes in postneuron activity.
Methods
The abstract neuron model
There would be a similar set of equations for potassium. Finally, neurotransmitters and other second messenger triggers have delayed effects in general. So if the trigger T _{0} binds with a port P at time t _{0}, the changes in protein levels P(T _{1}) might also need to be delayed by a time τ ^{ ζ }.
Abstract neuron design

The interval [t _{0},t _{1}] is the duration of the rise phase. This interval can be altered or modulated by neurotransmitter activity on the nerve cell’s membrane as well as second messenger signaling from within the cell.

The height of the pulse, V _{1}, is an important indicator of excitation.

The time interval between the highest activation level, V _{1} and the lowest, V _{3}, is closely related to spiking interval. This time interval, [t _{1},t _{3}], is also amenable to alteration via neurotransmitter input.

The height of the depolarizing pulse, V _{4}, helps determine how long it takes for the neuron to reestablish its reference voltage, V _{0}.

The neuron voltage takes time to reach reference voltage after a spike. This is the time interval [t _{3},∞].

The exponential rate of increase in the time interval [t _{3},∞] is also very important to the regaining of nominal neuron electrophysiological characteristics.
We have shown the BFV captures the characteristics of the output pulse well enough to classify neurotransmitter inputs on the basis of how they change the BFV (Peterson and Khan 2006), and we will now use modulations of the BFV induced by second messengers in our nodal computations. The feature vector output of a neural object is due to the cumulative effect of second messenger signaling to the genome of this object, which influences the action potential and thus the feature vector of the object by altering its complicated mixture of ligand and voltage activated ion gates, enzymes and so forth. We can then define an algebra of output interactions that we can use in building the models. We motivate our approach using the basic Hodgkin-Huxley model which depends on a large number of parameters. Of course, more sophisticated action potential models can be used, but the standard two ion gate Hodgkin-Huxley model is sufficient for our needs here.
Using the vector ξ from Equation 12, we can construct the BFV. Note for the sigmoid tail model, we have \(V_{m}^{\prime } (t_{3}) = (V_{4} - V_{3}) \: g\) and we can approximate \(V_{m}^{\prime } (t_{3})\) by a standard finite difference. We pick a data point (t _{5},V _{5}) that occurs after the minimum – typically we use the voltage value at the time t _{5} that is 5 time steps downstream from the minimum – and approximate the derivative at t _{3} by \(V_{m}^{\prime } (t_{3}) \approx \frac {V_{5} - V_{3}}{t_{5} \: - \: t_{3}}\) The value of g is then determined to be \(g = \frac {V_{5} - V_{3}}{(V_{4} - V_{3})(t_{5} \: - \: t_{3})}\) which reflects the asymptotic nature of the hyperpolarization phase of the potential. Clearly, we can model an inhibitory pulse, mutatis mutandis.
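This finite difference estimate of g is a one-line computation; the voltage and time values below are illustrative only:

```python
def tail_gain(V3, V4, t3, t5, V5):
    # Finite-difference estimate of the sigmoid-tail gain:
    # V'(t3) ~ (V5 - V3)/(t5 - t3) = (V4 - V3) * g.
    return (V5 - V3) / ((V4 - V3) * (t5 - t3))

# Illustrative values (mV, ms): minimum V3 = -80, asymptote V4 = -70,
# and a sample point 0.5 ms after the minimum.
g = tail_gain(V3=-80.0, V4=-70.0, t3=5.0, t5=5.5, V5=-77.0)
```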
The BFV functional form
Modulation of the BFV parameters
Similar equations can be derived for the other two width parameters for caps f _{12} and f _{3}. These sorts of equations give us design principles for complex neurotransmitter modulations of a BFV.
Modulation via the BFV ball and stick model
The BFV model we build consists of a dendritic system and a computational core which processes a BFV input sequence to generate a BFV output. The standard Hodgkin-Huxley equations tell us
Since the BFV is structured so that the action potential has a maximum at t _{1} of value V _{1} and a minimum at t _{3} of value V _{3}, we have \(V_{m}^{\prime }(t_{1}) \: = \: 0\) and \(V_{m}^{\prime }(t_{3}) \: = \: 0\). This gives
We also know that as t goes to infinity, the action potential flattens and \(V_{m}^{\prime }\) approaches 0. Also, the applied current, I _{ E } is zero and so we must have
We have V _{ ∞ } = V _{4}. Thus,
This gives, abbreviating the conductance expressions for simplicity of exposition,
Thus, letting u=g(t−t _{3}), \(\frac {1}{2} = \frac {e^{2u} - 1}{e^{2u} + 1}\) and we find \( u \: = \: \frac {\ln (3)}{2}\). Solving for t, we then have \(t^{\ast } = t_{3} \: + \: \frac {\ln (3)}{2g}\). From t _{3} on, the Hodgkin-Huxley dynamics are
We want the values of the derivatives to match at t ^{∗}. This gives
where \(V^{\ast } \: = \: \frac {1}{2}(V_{3} + V_{4})\). Now \(g(t^{\ast } - t_{3}) \: = \: \frac {\ln (3)}{2}\) and thus we find
This gives \(\frac {\partial g}{\partial g_{K}^{Max}} \approx 710.1\). Equation 25 shows what our intuition tells us: if \(g_{K}^{Max}\) increases, the potassium current is stronger and the hyperpolarization phase is shortened. On the other hand, if \(g_{K}^{Max}\) decreases, the potassium current is weaker and the hyperpolarization phase is lengthened.
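The half-recovery algebra used in this derivation (u=ln(3)/2 and t ^{∗}=t _{3}+ln(3)/(2g)) can be checked numerically:

```python
import math

# tanh(ln(3)/2) = (3 - 1)/(3 + 1) = 1/2, so the sigmoid tail reaches its
# halfway voltage (V3 + V4)/2 at u = ln(3)/2.
u = 0.5 * math.log(3.0)
halfway = math.tanh(u)   # equals 1/2 up to rounding

def t_star(t3, g):
    # Matching time t* = t3 + ln(3)/(2g) where the tail is halfway recovered.
    return t3 + 0.5 * math.log(3.0) / g
```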
Multiple inputs
Given an input sequence of BFVs into a port on the dendrite of an accepting neuron, {V _{ n },V _{ n−1},…,V _{1}}, the procedure discussed above computes the combined response that enters that port at a particular time. The inputs into the dendritic system are combined pairwise; V _{2} and V _{1} combine into a V _{ new } which then combines with V _{3} and so on. We can do this at each electrotonic location.
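This pairwise combination is a left fold over the input sequence. The Python sketch below uses a hypothetical pairwise rule (componentwise sum) purely for illustration; the actual rule is the dendritic model's:

```python
from functools import reduce

def combine_inputs(bfvs, combine):
    # Fold the BFV input sequence pairwise: V1 and V2 combine into V_new,
    # which then combines with V3, and so on.
    return reduce(combine, bfvs)

# Hypothetical pairwise rule: componentwise sum of parameter vectors.
merged = combine_inputs([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
                        lambda a, b: [x + y for x, y in zip(a, b)])
```

Because the fold is sequential and stateless, it maps naturally onto the immutable-state message passing discussed in the Conclusions.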
The size of this area then allows us to determine the first and second messenger contributions this input makes to the postneuron.
Results and discussion

When the process that handles this node looks at its input queue, it sees messages M organized into the families M _{ F }, M _{ D } and M _{ S }. We use the combining inputs algorithm to merge multiple inputs in each message class into one input which we will denote by I _{ F }, I _{ D } and I _{ S }.

The inputs I _{ F }, I _{ D } and I _{ S } then generate a trigger update, given by a _{ F }, a _{ D } and a _{ S }, using the EPS/IPS triangle approximation. The latter two are second messengers. Recall, a second messenger trigger T creates activated kinase \(PK^{*}\) and each unit of T creates λ units of \(PK^{*}\), where λ is quite large – perhaps 10,000 or more times the base level of \(PK^{*}\). Thus, letting \(\lambda \: = \: 1 + \beta \), we know the induced relative change is proportional to \(2\beta + \beta ^{2}\). Denote the actual trigger updates by δ _{ F }, δ _{ D }, and δ _{ S }. Hence, we can model each of the two second messenger trigger changes as$$\begin{array}{@{}rcl@{}} \boldsymbol{\delta_{D}} &\propto& (2 \beta_{D} + {\beta_{D}^{2}}) \boldsymbol{a_{D}} ~\text{and}~ \boldsymbol{\delta_{S}} \propto (2 \beta_{S} + {\beta_{S}^{2}}) \boldsymbol{a_{S}}. \end{array} $$From our discussion of how the BFV is altered by triggers given in Section “Modulation of the BFV parameters”, it is clear a second messenger trigger update initiates changes in the BFV. The literature on the effects a neurotransmitter has on the action potential of an excitable neuron gives us specific information about what parts of the action potential are changed. We do not discuss these details here for brevity. Suffice it to say, we can assign each neurotransmitter to a BFV alteration. Thus, letting the proportionality constants above be K _{ D } and K _{ S },$$\begin{array}{@{}rcl@{}} \boldsymbol{\delta_{D}} &=& \left(2 \beta_{D} + {\beta_{D}^{2}}\right) \boldsymbol{a_{D}} \boldsymbol{K_{D}} \: \Longrightarrow \: \nabla_{D}(BFV)\\ \boldsymbol{\delta_{S}} &=& \left(2 \beta_{S} + {\beta_{S}^{2}}\right) \boldsymbol{a_{S}} \boldsymbol{K_{S}} \: \Longrightarrow \: \nabla_{S}(BFV) \end{array} $$where the gradients here are the specific changes in the 11 parameters of the BFV that each neurotransmitter causes.
Also, recall the efficacy of the second messenger release depends on r _{ u }, the rate of reuptake in the connection between two nodes; the second messenger destruction rate r _{ d }; the rate of second messenger release, r _{ r }; and the density of the second messenger receptor, n _{ d }. The triple (r _{ u },r _{ d },r _{ r }) thus determines a net increase or decrease of second messenger concentration between two nodes: r _{ r }−r _{ u }−r _{ d }≡r _{ net }. The efficacy of a connection between nodes is then proportional to the product r _{ net }×n _{ d }. The density of the second messenger receptor is amenable to second messenger alterations via triggers as well. If we change our update equations to$$\begin{array}{@{}rcl@{}} \boldsymbol{\delta_{D}} &=& (r_{r} - r_{u} - r_{d}) \: \left(2 \beta_{D} + {\beta_{D}^{2}}\right) \boldsymbol{a_{D}} \boldsymbol{K_{D}}\\ \boldsymbol{\delta_{S}} &=& (r_{r} - r_{u} - r_{d}) \: \left(2 \beta_{S} + {\beta_{S}^{2}}\right) \boldsymbol{a_{S}} \boldsymbol{K_{S}}. \end{array} $$
by absorbing the n _{ d } term into the proportionality constants, we have a mechanism that allows us to model neurotransmitter interaction in the synaptic cleft.
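The resulting update rule is simple arithmetic; a sketch of the δ _{ D } form (δ _{ S } is analogous), with n _{ d } absorbed into the proportionality constant K as described:

```python
def trigger_update(a, beta, K, r_r, r_u, r_d):
    # delta = (r_r - r_u - r_d) * (2*beta + beta**2) * a * K, with the
    # receptor density n_d absorbed into the proportionality constant K.
    return (r_r - r_u - r_d) * (2.0 * beta + beta * beta) * a * K
```

Note a strong reuptake or destruction rate can make r _{ net } negative, flipping the sign of the update.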

For the input I _{ F } which is a first messenger type, we know this will cause a change in maximum sodium or potassium conductance and perhaps more complicated combinations. These were explored in Section “Modulation of the BFV parameters”. For example, we can estimate how the parameter V _{1} of the BFV changes via$$\begin{array}{@{}rcl@{}} \frac{\partial V_{1}}{\partial g_{Na}^{Max}} &=& \frac{0.35}{0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} } \: \left (E_{Na} - V_{1} \right) \end{array} $$This tells us$$\begin{array}{@{}rcl@{}} \delta V_{1} &=& \left (\frac{0.35}{0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} } \: \left (E_{Na} - V_{1} \right) \right)\\&&\delta g_{Na}^{Max}. \end{array} $$Then, from Section “Second messengers”, we know$$\begin{array}{@{}rcl@{}} \delta g_{Na}(t,V) &=& g_{Na}^{max}(e \delta_{Na} \: \boldsymbol{\sigma}([P(T_{1})], 0, g_{Na})~)\\&&\mathcal{M}_{Na}^{p}(t,V) \: \mathcal{H}_{Na}^{q}(t,V) \end{array} $$which we can approximate using the input a _{ F }. Thus, δ g _{ Na }(t,V)∝a _{ F } with proportionality constant K _{ F }. Thus,$$\begin{array}{@{}rcl@{}} \delta V_{1} &=& \left (\frac{0.35}{0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} } \: \left (E_{Na} - V_{1} \right) \right)\\&&\boldsymbol{K_{F}} \boldsymbol{a_{F}} \end{array} $$
is a reasonable approximation to this alteration of the BFV due to this first messenger input.
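This first messenger estimate reduces to one arithmetic expression. The conductance magnitudes below are classic Hodgkin-Huxley style values used purely for illustration, with K _{ F } and a _{ F } set to 1:

```python
def delta_V1(gK_max, gNa_max, gL, E_Na, V1, K_F, a_F):
    # First messenger change in the spike height V1:
    # dV1 = 0.35/(0.20*gK_max + 0.35*gNa_max + gL) * (E_Na - V1) * K_F * a_F.
    return 0.35 / (0.20 * gK_max + 0.35 * gNa_max + gL) * (E_Na - V1) * K_F * a_F

# Classic Hodgkin-Huxley style magnitudes, purely illustrative.
dV1 = delta_V1(gK_max=36.0, gNa_max=120.0, gL=0.3, E_Na=50.0, V1=30.0,
               K_F=1.0, a_F=1.0)
```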
Hence, our processing node handles the messages in its input queue using simple arithmetic and a few stored parameters. Each edge process computes the r _{ net } term required, but it is even simpler as it only has to connect the BFV from one neuron to another.
Conclusions
We have shown how to approximate neuronal computation for both first and second messenger systems so that a graph model \(\boldsymbol {\mathcal {G}(N,E)}\) can be implemented efficiently in a modern functional programming language such as Erlang. Other languages are possible, but our focus here was on Erlang alone. The simulation of a brain model can then take advantage of as many cores as are available on our hardware. Erlang is not designed for heavy computation, which is why we have spent so much time discussing ways to approximate the neural computations at each node and the synaptic processing. It is important to note that new and interesting simulations require us to pay much closer attention to the actual hardware we will be using. Hence, while the details of multicore use might change with new hardware, the basic principles of how we map computational algorithms to hardware via software will be retained.
References
 Armstrong, J (2013). Programming Erlang, Second Edition: Software for a Concurrent World. Dallas, TX: The Pragmatic Bookshelf.
 Bray, D (1998). Signalling complexes: Biophysical constraints on intracellular communication. Annual Review of Biophysics and Biomolecular Structure, 27, 59–75.
 Friston, K (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society B, 360, 815–836.
 Friston, K (2010). The free-energy principle: a unified brain theory? Nature Reviews: Neuroscience, 11, 127–138.
 Gerhart, J, & Kirschner, M (1997). Cells, Embryos and Evolution: Towards a Cellular and Developmental Understanding of Phenotypic Variation and Evolutionary Adaptability. USA: Blackwell Science.
 Hille, B (1992). Ionic Channels of Excitable Membranes. New York: Sinauer Associates Inc.
 Li, X, Xia, S, Bertish, H, Branch, C, DeLisi, L (2012). Unique topology of language processing brain network: A systems-level biomarker of schizophrenia. Schizophrenia Research, 141, 128–136.
 Maia, T, & Frank, M (2011). From reinforcement learning models to psychiatric and neurological disorders. Nature Neuroscience, 14(2), 154–162.
 Peterson, J (2014a). Computation in networks. Computational Cognitive Science. This issue.
 Peterson, J (2014b). Bio-Information Processing: A Primer on Computational Cognitive Science. In preparation, Cognitive Science and Technology Series. Singapore: Springer.
 Peterson, J (2014c). Calculus for Cognitive Scientists: Partial Differential Equation Models. In preparation, Cognitive Science and Technology Series. Singapore: Springer.
 Peterson, J, & Khan, T (2006). Abstract action potential models for toxin recognition. Journal of Theoretical Medicine, 6(4), 199–234.
 Russo, S, & Nestler, E (2013). The brain reward circuitry in mood disorders. Nature Reviews: Neuroscience, 14, 609–625.
 Sherman, S (2004). Interneurons and triadic circuitry of the thalamus. Trends in Neuroscience, 27(11), 670–675.
Copyright
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.