The abstract neuron model
It is clear that neuron classes can have different trigger characteristics. First, consider the case of neurons which create the monoamine neurotransmitters. Neurons of this type in the Reticular Formation of the midbrain produce a monoamine neurotransmitter packet at the synaptic junction between the axon of the preneuron and the dendrite of the postneuron. The monoamine neurotransmitter is then released into the synaptic cleft, where it induces a second messenger response. The strength of this response depends on the BFV input from the preneurons that form synaptic connections with the postneuron. The strength of this input determines the strength of the monoamine trigger into the postneuron dendrite. Let the strength for neurotransmitter ζ be given by the weighting term \(c^{\zeta }_{pre,post}\). The trigger at time t and dendrite location w on the dendritic cable is therefore
$$\begin{array}{@{}rcl@{}} T_{0}(t,w) &=& \frac{c^{\zeta}_{pre,post}}{\sqrt{t-t_{0}}}\exp \left (-\frac{(w-w_{0})^{2}}{4D^{\zeta}_{0}(t-t_{0})} \right). \end{array} $$
where \(D^{\zeta }_{0}\) is the diffusion constant associated with the trigger. The trigger \(T_{0}\) has associated with it the protein \(T_{1}\). We let
$$\begin{array}{@{}rcl@{}} T_{1}(t,w) &=& \frac{d^{\zeta}_{pre,post}}{\sqrt{t-t_{0}}}\exp \left (-\frac{(w-w_{0})^{2}}{4D^{\zeta}_{1}(t-t_{0})} \right). \end{array} $$
where \(d^{\zeta }_{pre,post}\) denotes the strength of the induced \(T_{1}\) response and \(D^{\zeta }_{1}\) is the diffusion constant of the \(T_{1}\) protein. This trigger will act through the usual pathway. Also, we let \(T_{2}\) denote the protein \(P(T_{1})\). \(T_{2}\) transcribes a protein target from the genome with efficiency e.
$$ {\fontsize{9.6}{6} \begin{aligned} h_{p}(\left[T_{0}(t,w)\right]) &= \frac{r}{2} \left(~ 1 + \tanh\left(\frac{\left[T_{0}(t,w)\right] - \left[T_{0}\right]_{b}}{g_{p}} \right) ~ \right)\\ I(t,w) &= \frac{h_{p}(\left[T_{0}(t,w)\right])\left[T_{1}\right]_{n}}{\sqrt{t-t_{0}}} \exp \left(-\frac{(w-w_{0})^{2}}{4D^{\zeta}_{1}(t-t_{0})} \right)\\ h_{e}(I(t,w)) &= \frac{s}{2} \left(~ 1 + \tanh \left(\frac{I(t,w) - \frac{r[T_{1}]_{n}}{2}}{g_{e}} \right) ~ \right)\\ {\left[P(T_{1})\right]}(t,w) &= h_{e}(I(t,w))\\ h_{T_{2}}(t,w) &= \frac{e}{2} \: \left(~1 + \tanh\left(\frac{[P(T_{1})](t,w) - [T_{2}]_{n}}{g_{T_{2}}}\right) ~ \right)\\ {[\!T_{2}]}(t,w) &= h_{T_{2}}(t,w) [\!T_{2}]_{n} \end{aligned}} $$
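In code, this cascade of sigmoidal gates is easy to prototype. The sketch below uses illustrative values for the gains, thresholds and diffusion constants (none are fitted to data) and evaluates the chain at a single space-time point.

```python
import math

# Sketch of the trigger-to-transcription cascade above. All parameter values
# (gains, thresholds, diffusion constants) are illustrative, not fitted.
def trigger_T0(t, w, c=1.0, t0=0.0, w0=0.0, D0=1.0):
    # diffusion-kernel trigger T0(t, w); only valid for t > t0
    return (c / math.sqrt(t - t0)) * math.exp(-(w - w0) ** 2 / (4.0 * D0 * (t - t0)))

def h_p(T0, T0_b=1.0, r=0.8, g_p=0.2):
    # sigmoidal gate on [T0] with threshold [T0]_b and gain g_p
    return (r / 2.0) * (1.0 + math.tanh((T0 - T0_b) / g_p))

def h_e(I, T1_n=1.0, r=0.8, s=0.6, g_e=0.2):
    # fraction of active T1 reaching the genome, thresholded at r [T1]_n / 2
    return (s / 2.0) * (1.0 + math.tanh((I - r * T1_n / 2.0) / g_e))

# chain the gates at the sample space-time point (t, w) = (1.0, 0.5)
t, w, T1_n, D1 = 1.0, 0.5, 1.0, 1.0
T0 = trigger_T0(t, w)
I = h_p(T0) * T1_n / math.sqrt(t) * math.exp(-w ** 2 / (4.0 * D1 * t))
P_T1 = h_e(I)  # [P(T1)](t, w)
```

Each stage is bounded (\(h_{p}\) saturates at r and \(h_{e}\) at s), so the cascade cannot amplify without limit.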
Note \([T_{2}](t,w)\) gives the value of the protein \(T_{2}\) concentration at some discrete time t and spatial location w. This response can also be modulated by feedback. In this case, let ξ denote the feedback level. Then the final response is altered to \(h_{T_{2}}^{f}\), where the superscript f denotes the feedback response and the constant ω is the strength of the feedback; we have \(h_{T_{2}}^{f}(t,w) = \omega \frac {1}{\xi } h_{T_{2}}(t,w)\) and \({[\!T_{2}]}(t,w) = h_{T_{2}}^{f}(t,w) [\!T_{2}]_{n}\). There are a large number of shaping parameters here. For example, for each neurotransmitter, we could alter the parameters due to calcium trigger diffusion. These include \(D^{\zeta }_{0}\), the diffusion constant for the trigger, and \(D^{\zeta }_{1}\), the diffusion constant for the gate-induced protein \(T_{1}\). In addition, transcribed proteins could alter (we know their first order quantitative effects from our earlier analysis) \(d^{\zeta }_{pre,post}\), the strength of the \(T_{1}\) response; r, the fraction of \(T_{1}\) free; \(g_{p}\), the trigger gain; \([T_{0}]_{b}\), the trigger threshold concentration; s, the fraction of active \(T_{1}\) reaching the genome; \(g_{e}\), the trigger gain for the active \(T_{1}\) transition; \([T_{1}]_{n}\), the threshold for \(T_{1}\); \([T_{2}]_{n}\), the threshold for \(P(T_{1}) = T_{2}\); \(g_{T_{2}}\), the gain for \(T_{2}\); ω, the feedback strength; and ξ, where the feedback amount for \(T_{1}\) is \(1 - \xi\). Note \(d^{\zeta }_{pre,post}\) could simply be \(c^{\zeta }_{pre,post}\). The neurotransmitter triggers can alter many parameters important to the creation of the BFV. For example, the maximum sodium and potassium conductances can be altered via the equation for \(T_{2}\). For sodium, \(T_{2}(t,w) = h_{T_{2}}(t,w) [T_{2}]_{n}\) becomes
$$\begin{array}{@{}rcl@{}} {[T_{2}]}_{n} &=& \delta_{Na} \: g_{Na}^{max}\\ h_{T_{2}}^{f}(t,w) &=& \omega \frac{1}{\xi} h_{T_{2}}(t,w)\\ {[\!T_{2}]}(t,w) &=& h_{T_{2}}^{f}(t,w) [\!T_{2}]_{n}\\ g_{Na}(t,w,V) &=& \left(~g_{Na}^{max} + [\!T_{2}](t,w)~\right) \mathcal{M}_{Na}^{p}(t,V) \mathcal{H}_{Na}^{q}(t,V) \end{array} $$
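A minimal sketch of this conductance update follows. The gating values m and h and the response level \(h_{T_{2}}\) are supplied as inputs here, and the values of \(\delta_{Na}\), ω and ξ are illustrative constants rather than fitted ones.

```python
# Sketch of the sodium conductance update above. The gating values m and h and
# the response level h_T2 are supplied as inputs; delta_Na, omega and xi are
# illustrative constants.
def modulated_g_na(h_T2, m, h, g_na_max=120.0, delta_na=0.1,
                   omega=1.0, xi=0.5, p=3, q=1):
    T2_n = delta_na * g_na_max           # [T2]_n = delta_Na * g_Na^max
    h_f = omega * (1.0 / xi) * h_T2      # feedback-scaled response h^f_T2
    T2 = h_f * T2_n                      # [T2](t, w)
    return (g_na_max + T2) * (m ** p) * (h ** q)

base = modulated_g_na(h_T2=0.0, m=0.5, h=0.4)  # no second messenger modulation
up = modulated_g_na(h_T2=0.5, m=0.5, h=0.4)    # modulation raises the ceiling
```

With \(h_{T_{2}} = 0\) the expression reduces to the unmodulated Hodgkin-Huxley conductance; a positive response raises the effective maximum conductance.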
There would be a similar set of equations for potassium. Finally, neurotransmitters and other second messenger triggers have delayed effects in general. So if the trigger \(T_{0}\) binds with a port P at time \(t_{0}\), the changes in protein levels \(P(T_{1})\) might also need to be delayed by a factor \(\tau^{\zeta}\).
Abstract neuron design
The general structure of a typical action potential is illustrated in Figure 8.
We can use the following points on this generic action potential to construct a low dimensional feature vector of Equation 12.
$$\begin{array}{@{}rcl@{}} \xi &=& \left \{ \begin{array}{ll} (t_{0}, V_{0}) & \text{start point}\\ (t_{1}, V_{1}) & \text{maximum point}\\ (t_{2}, V_{2}) & \text{return to reference voltage}\\ (t_{3}, V_{3}) & \text{minimum point}\\ (g, t_{4}, V_{4}) & \text{sigmoid tail model} \end{array} \right. \end{array} $$
(12)
where the model of the tail of the action potential is of the form \(V_{m}(t) = V_{3} + (V_{4} - V_{3}) \tanh(g(t - t_{3}))\). Note that \(V_{m}^{\prime } (t_{3}) = (V_{4} - V_{3}) \: g\), and so if we were using real voltage data, we would approximate \(V_{m}^{\prime } (t_{3})\) by a standard finite difference. The biological feature vector therefore stores many of the important features of the action potential in a low dimensional form. We note these include:

- The interval \([t_{0},t_{1}]\) is the duration of the rise phase. This interval can be altered or modulated by neurotransmitter activity on the nerve cell's membrane as well as by second messenger signaling from within the cell.
- The height of the pulse, \(V_{1}\), is an important indicator of excitation.
- The time interval between the highest activation level, \(V_{1}\), and the lowest, \(V_{3}\), is closely related to the spiking interval. This time interval, \([t_{1},t_{3}]\), is also amenable to alteration via neurotransmitter input.
- The height of the depolarizing pulse, \(V_{4}\), helps determine how long it takes for the neuron to reestablish its reference voltage, \(V_{0}\).
- The neuron voltage takes time to reach the reference voltage after a spike. This is given by the interval \([t_{3},\infty)\).
- The exponential rate of increase in the time interval \([t_{3},\infty)\) is also very important to the regaining of nominal neuron electrophysiological characteristics.
We have shown the BFV captures the characteristics of the output pulse well enough to classify neurotransmitter inputs on the basis of how they change the BFV (Peterson and Khan 2006) and we will now use modulations of the BFV induced by second messengers in our nodal computations. The feature vector output of a neural object is due to the cumulative effect of second messenger signaling to the genome of this object, which influences the action potential, and thus the feature vector, of the object by altering its complicated mixture of ligand- and voltage-activated ion gates, enzymes and so forth. We can then define an algebra of output interactions that we can use in building the models. We motivate our approach using the basic Hodgkin-Huxley model, which depends on a large number of parameters. Of course, more sophisticated action potential models can be used, but the standard two ion gate Hodgkin-Huxley model is sufficient for our needs here.
Using the vector ξ from Equation 12, we can construct the BFV. Note for the sigmoid tail model, we have \(V_{m}^{\prime } (t_{3}) = (V_{4} - V_{3}) \: g\) and we can approximate \(V_{m}^{\prime } (t_{3})\) by a standard finite difference. We pick a data point \((t_{5},V_{5})\) that occurs after the minimum (typically we use the voltage value at the time \(t_{5}\) that is 5 time steps downstream from the minimum) and approximate the derivative at \(t_{3}\) by \(V_{m}^{\prime } (t_{3}) \approx \frac {V_{5} - V_{3}}{t_{5} - t_{3}}\). The value of g is then determined to be \(g = \frac {V_{5} - V_{3}}{(V_{4} - V_{3})(t_{5} - t_{3})}\), which reflects the asymptotic nature of the hyperpolarization phase of the potential. Clearly, we can model an inhibitory pulse, mutatis mutandis.
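One way this extraction might be coded is sketched below, assuming a uniformly sampled voltage trace whose final sample approximates the tail voltage \(V_{4}\); the helper name and the reference-crossing search are our own illustrative choices.

```python
# One way the extraction might be coded, assuming a uniformly sampled trace
# (ts, vs) whose final sample approximates the tail voltage V4. The names and
# the reference-crossing search are illustrative choices, not the paper's code.
def extract_bfv(ts, vs, v_ref):
    i1 = max(range(len(vs)), key=lambda i: vs[i])               # maximum point
    i3 = min(range(i1, len(vs)), key=lambda i: vs[i])           # minimum after the peak
    i2 = next(i for i in range(i1, len(vs)) if vs[i] <= v_ref)  # return to reference
    i5 = min(i3 + 5, len(vs) - 1)           # data point 5 time steps past the minimum
    V4 = vs[-1]
    g = (vs[i5] - vs[i3]) / ((V4 - vs[i3]) * (ts[i5] - ts[i3]))
    return {"t0": ts[0], "V0": vs[0], "t1": ts[i1], "V1": vs[i1],
            "t2": ts[i2], "V2": vs[i2], "t3": ts[i3], "V3": vs[i3],
            "V4": V4, "g": g}
```

The g estimate is exactly the finite-difference formula above: the measured slope past the minimum divided by the tail swing \(V_{4} - V_{3}\).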
The BFV functional form
In Figure 9, we indicated the three major portions of the biological feature vector and the particular data points chosen from the action potential which are used for the model. These are the two parabolas \(f_{1}\) and \(f_{2}\) and the sigmoid \(f_{3}\). The parabola \(f_{1}\) is treated as the two distinct pieces \(f_{11}\) and \(f_{12}\) given by
$$\begin{array}{@{}rcl@{}} f_{11}(t) &=& a^{11} + b^{11}(t-t_{1})^{2} \end{array} $$
(13)
$$\begin{array}{@{}rcl@{}} f_{12}(t) &=& a^{12} + b^{12}(t-t_{1})^{2} \end{array} $$
(14)
Thus, \(f_{1}\) consists of two joined parabolas which both have a vertex at \(t_{1}\). The functional form for \(f_{2}\) is a parabola with vertex at \(t_{3}\): \(f_{2}(t) = a^{2} + b^{2}(t - t_{3})^{2}\). Finally, the sigmoid portion of the model is given by \(f_{3}(t) = V_{3} + (V_{4} - V_{3}) \tanh(g(t - t_{3}))\). We have also simplified the BFV even further by dropping the explicit time point \(t_{4}\) and modeling the portion of the action potential after the minimum voltage by the sigmoid \(f_{3}\). From the data, it follows that
$$\begin{aligned} f_{11}(t_{0}) &= V_{0} \: = \: a^{11} + b^{11}(t_{0}-t_{1})^{2}, \: \: \text{ and }\\ f_{11}(t_{1}) &= V_{1} \: = \: a^{11},\;\; f_{12}(t_{1}) = V_{1} \: = \: a^{12}, \: \: \text{ and }\\ f_{12}(t_{2}) &= V_{2} \: = \: a^{12} + b^{12}(t_{2}-t_{1})^{2} \end{aligned} $$
This implies
$$\begin{aligned} a^{11} &= V_{1}, \: \: b^{11} = \frac{V_{0}-V_{1}}{(t_{0}-t_{1})^{2}}, \: \: a^{12} = V_{1}, \: \: \text{ and }\\ b^{12} &= \frac{V_{2}-V_{1}}{(t_{2}-t_{1})^{2}} \end{aligned} $$
In a similar fashion, the \(f_{2}\) model is constrained by
$$\begin{array}{@{}rcl@{}} f_{2}(t_{2}) &=& V_{2} \: = \: a^{2} + b^{2}(t_{2}-t_{3})^{2} \: \: \text{and } f_{2}(t_{3}) = V_{3} \: = \: a^{2} \end{array} $$
We conclude that \(a^{2} = V_{3}\) and \(b^{2} = \frac {V_{2}-V_{3}}{(t_{2}-t_{3})^{2}}\). Hence, the functional form of the BFV model can be given by the mapping f of Equation 15.
$$ {\fontsize{9.5}{6}\begin{aligned} f(t) = \left\{ \begin{array}{ll} V_{1} + \frac{V_{0}-V_{1}}{(t_{0}-t_{1})^{2}}(t-t_{1})^{2}, & t_{0} \: \leq \: t \: \leq \: t_{1}\\ V_{1} + \frac{V_{2}-V_{1}}{(t_{2}-t_{1})^{2}}(t-t_{1})^{2}, & t_{1} \: \leq \: t \: \leq \: t_{2}\\ V_{3} + \frac{V_{2}-V_{3}}{(t_{2}-t_{3})^{2}}(t-t_{3})^{2}, & t_{2} \: \leq \: t \: \leq \: t_{3}\\ V_{3} + (V_{4} - V_{3}) \tanh(g (t-t_{3})), & t_{3} \: \leq \: t \: < \infty\\ \end{array}\right. \end{aligned}} $$
(15)
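The piecewise map f is straightforward to evaluate directly. In the sketch below the knot values are illustrative, not taken from a fitted action potential.

```python
import math

# Direct transcription of the piecewise map f of Equation 15. The knot values
# below are illustrative, not taken from a fitted action potential.
def bfv_f(t, t0, t1, t2, t3, V0, V1, V2, V3, V4, g):
    if t <= t1:
        return V1 + (V0 - V1) / (t0 - t1) ** 2 * (t - t1) ** 2
    if t <= t2:
        return V1 + (V2 - V1) / (t2 - t1) ** 2 * (t - t1) ** 2
    if t <= t3:
        return V3 + (V2 - V3) / (t2 - t3) ** 2 * (t - t3) ** 2
    return V3 + (V4 - V3) * math.tanh(g * (t - t3))

knots = dict(t0=0.0, t1=1.0, t2=2.0, t3=4.0,
             V0=-65.0, V1=40.0, V2=-65.0, V3=-80.0, V4=-66.0, g=0.5)
```

By construction, adjacent pieces agree at the knots \(t_{1}\), \(t_{2}\) and \(t_{3}\), so f is continuous.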
All of our parabolic models can also be written in the form \(p(t) = \pm \frac {1}{4 \beta } (t - \alpha)^{2}\) plus a constant, where 4β is the width of the line segment through the focus of the parabola (the latus rectum). The models \(f_{11}\) and \(f_{12}\) point down and so use the "minus" sign while \(f_{2}\) uses the "plus". By comparing our model equations with this generic parabolic equation, we find the width of the parabolas of \(f_{11}\), \(f_{12}\) and \(f_{2}\) is given by
$${\begin{aligned} 4\beta_{11}&= \frac{(t_{0}-t_{1})^{2}}{V_{1}-V_{0}} \: = \: -\frac{1}{b^{11}}, \: \: 4\beta_{12} = \frac{(t_{2}-t_{1})^{2}}{V_{1}-V_{2}} \: = \: -\frac{1}{b^{12}} \: \text{and } \\ 4\beta_{2} &= \frac{(t_{2}-t_{3})^{2}}{V_{2}-V_{3}} \: = \: \frac{1}{b^{2}}. \end{aligned}} $$
Modulation of the BFV parameters
We want to modulate the output of our abstract neuron model by altering the BFV. The BFV itself consists of 11 parameters, but better insight into how alterations of the BFV introduce changes in the action potential comes from studying changes in the mapping f given in Section "The BFV functional form". In addition to changes in the timing values \(t_{0}\), \(t_{1}\), \(t_{2}\) and \(t_{3}\), we can also consider the variations of Equation 16.
$$ \begin{aligned} \left[\begin{array}{l} \Delta a^{11}\\ \Delta b^{11}\\ \Delta a^{12}\\ \Delta b^{12}\\ \Delta a^{2} \\ \Delta b^{2} \end{array}\right ] &=\left [ \begin{array}{l} \Delta V_{1}\\ \Delta \left(\frac{V_{0}-V_{1}}{(t_{0}-t_{1})^{2}}\right)\\ \Delta V_{1}\\ \Delta \left(\frac{V_{2}-V_{1}}{(t_{2}-t_{1})^{2}}\right)\\ \Delta V_{3}\\ \Delta \left(\frac{V_{2}-V_{3}}{(t_{2}-t_{3})^{2}}\right) \end{array} \right] \\ &= \left [ \begin{array}{l} \Delta \text{Maximum Voltage }\\ -\Delta \left(\frac{1}{4\beta_{11}}\right)\\ \Delta \text{Maximum Voltage }\\ -\Delta \left(\frac{1}{4\beta_{12}}\right)\\ \Delta \text{Minimum Voltage }\\ \Delta \left(\frac{1}{4\beta_{2}}\right) \end{array} \right] \end{aligned} $$
(16)
It is clear that modulatory inputs that alter the cap shape and hyperpolarization curve of the BFV functional form can have a profound effect on the information contained in the "action potential". For example, a hypothetical neurotransmitter that alters \(V_{1}\) will also alter the latus rectum distance across the cap \(f_{1}\). Further, direct modifications to the latus rectum distance in either of the two caps \(f_{11}\) and \(f_{12}\) can induce corresponding changes in the times \(t_{0}\), \(t_{1}\) and \(t_{2}\) and the voltages \(V_{0}\), \(V_{1}\) and \(V_{2}\). A similar statement can be made for changes in the latus rectum of the cap \(f_{2}\). For example, if a neurotransmitter induced a change of, say, 1% in \(4\beta_{11}\), this would imply that \(\Delta \left(\frac {(t_{0}-t_{1})^{2}}{V_{1}-V_{0}}\right) \: = \:.04 \beta _{11}^{0}\), where \(\beta _{11}^{0}\) denotes the original value of \(\beta _{11}\). Thus, to first order
$$ \begin{aligned}.04 \beta_{11}^{0} &=\left(\frac{\partial \beta_{11}}{\partial V_{0}}\right)^{\ast} \Delta V_{0}+ \left(\frac{\partial \beta_{11}}{\partial V_{1}}\right)^{\ast} \Delta V_{1}+ \left(\frac{\partial \beta_{11}}{\partial t_{0}}\right)^{\ast} \Delta t_{0}\\ &\quad+\left(\frac{\partial \beta_{11}}{\partial t_{1}}\right)^{\ast} \Delta t_{1} \end{aligned} $$
(17)
where the superscript ∗ on the partials indicates they are evaluated at the base point \((V_{0},V_{1},t_{0},t_{1})\). Taking partials, we find
$$\begin{aligned} \frac{\partial \beta_{11}}{\partial V_{0}} &= 2\frac{(t_{0}-t_{1})^{2}}{(V_{1}-V_{0})^{2}} \: = \: \frac{2}{V_{1}-V_{0}} \beta_{11}^{0},\\ \frac{\partial \beta_{11}}{\partial V_{1}} &= -2\frac{(t_{0}-t_{1})^{2}}{(V_{1}-V_{0})^{2}} \: = \: -\frac{2}{V_{1}-V_{0}} \beta_{11}^{0}\\ \frac{\partial \beta_{11}}{\partial t_{0}} &= 2\frac{t_{0}-t_{1}}{V_{1}-V_{0}} \: = \: \frac{2}{t_{0}-t_{1}} \beta_{11}^{0},\\ \frac{\partial \beta_{11}}{\partial t_{1}} &= -2\frac{t_{0}-t_{1}}{V_{1}-V_{0}} \: = \: -\frac{2}{t_{0}-t_{1}} \beta_{11}^{0}\\ \end{aligned} $$
Thus, Equation 17 becomes
$$ \begin{aligned}.04 \beta_{11}^{0} &= \frac{2 \Delta V_{0}}{V_{1}-V_{0}} \beta_{11}^{0} - \frac{2 \Delta V_{1}}{V_{1}-V_{0}} \beta_{11}^{0} + 2 \Delta t_{0} \: \frac{1}{t_{0}-t_{1}} \beta_{11}^{0}\\ &\quad- 2 \Delta t_{1} \: \frac{1}{t_{0}-t_{1}} \beta_{11}^{0} \end{aligned} $$
This simplifies to
$$\begin{array}{@{}rcl@{}}.02(V_{1}-V_{0})(t_{0}-t_{1}) &=&(\Delta V_{0} - \Delta V_{1})(t_{0}-t_{1})\\ && + \: (\Delta t_{0} - \Delta t_{1})(V_{1}-V_{0}) \end{array} $$
Since we can do this analysis for any percentage r of \(\beta _{11}^{0}\), we can infer that a neurotransmitter that modulates the action potential by perturbing the "width" or latus rectum of the cap of \(f_{11}\) can do so satisfying the equation
$$\begin{array}{@{}rcl@{}} 2r (V_{1}-V_{0})(t_{0}-t_{1}) &=&(\Delta V_{0} - \Delta V_{1})(t_{0}-t_{1}) \\ && + \: (\Delta t_{0} - \Delta t_{1})(V_{1}-V_{0}) \end{array} $$
Similar equations can be derived for the other two width parameters of the caps \(f_{12}\) and \(f_{2}\). These sorts of equations give us design principles for complex neurotransmitter modulations of a BFV.
Modulation via the BFV ball and stick model
The BFV model we build consists of a dendritic system and a computational core which processes a BFV input sequence to generate a BFV output. The standard Hodgkin-Huxley equations tell us
$$\begin{array}{@{}rcl@{}} C_{m} \frac{dV_{m}}{dt} &=& I_{E} \: - \: g_{K}^{Max} n^{4} (V_{m} - E_{K}) \: - \: g_{Na}^{Max} m^{3} h (V_{m} - E_{Na})\\ && - \: g_{L} (V_{m} - E_{L}) \end{array} $$
Since the BFV is structured so that the action potential has a maximum at \(t_{1}\) of value \(V_{1}\) and a minimum at \(t_{3}\) of value \(V_{3}\), we have \(V_{m}^{\prime }(t_{1}) \: = \: 0\) and \(V_{m}^{\prime }(t_{3}) \: = \: 0\). This gives
$$\begin{array}{@{}rcl@{}} I_{E}(t_{i}) &=& g_{K}^{Max} n^{4}(V_{i},t_{i}) (V_{i} - E_{K}) \: + \: g_{Na}^{Max} m^{3}(V_{i},t_{i}) h(V_{i},t_{i}) (V_{i} - E_{Na})\\ && + \: g_{L} (V_{i} - E_{L}), \quad i = 1, 3 \end{array} $$
From Figure 10 and Figure 11, we see that for typical action potential simulation responses, we have \(m^{3}(V_{1},t_{1})h(V_{1},t_{1}) \approx 0.35\) and \(n^{4}(V_{1},t_{1}) \approx 0.2\). Further, \(m^{3}(V_{3},t_{3})h(V_{3},t_{3}) \approx 0.01\) and \(n^{4}(V_{3},t_{3}) \approx 0.4\).
Thus,
$$\begin{array}{@{}rcl@{}} I_{E}(t_{1}) &=& 0.20 g_{K}^{Max} \left(V_{1} - E_{K}\right) \: + \: 0.35 g_{Na}^{Max} (V_{1} - E_{Na})\\ &&+ \: g_{L} (V_{1} - E_{L})\\ [4pt] I_{E}(t_{3}) &=& 0.40 g_{K}^{Max} (V_{3} - E_{K}) \: + \: 0.01g_{Na}^{Max} (V_{3} - E_{Na})\\ &&+ \: g_{L} (V_{3} - E_{L}) \end{array} $$
Reorganizing,
$$\begin{array}{@{}rcl@{}} I_{E}(t_{1}) &=& \left (0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} \right) V_{1}\\ && - \: \left (0.20 g_{K}^{Max} E_{K} \: + \: 0.35 g_{Na}^{Max} E_{Na} \: + \: g_{L} E_{L} \right)\\ [4pt] I_{E}(t_{3}) &=& \left (0.40 g_{K}^{Max} \: + \: 0.01 g_{Na}^{Max} \: + \: g_{L} \right) V_{3}\\ && - \: \left (0.40 g_{K}^{Max} E_{K} \: + \: 0.01 g_{Na}^{Max} E_{Na} \: + \: g_{L} E_{L} \right) \end{array} $$
Solving for the voltages, we find
$$\begin{array}{@{}rcl@{}} V_{1} &=& \frac{I_{E}(t_{1}) \: + \: 0.20 g_{K}^{Max} E_{K} \: + \: 0.35 g_{Na}^{Max} E_{Na} \: + \: g_{L} E_{L}} {0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} }\\ V_{3} &=& \frac{I_{E}(t_{3}) \: + \: 0.40 g_{K}^{Max} E_{K} \: + \: 0.01 g_{Na}^{Max} E_{Na} \: + \: g_{L} E_{L}} {0.40 g_{K}^{Max} \: + \: 0.01 g_{Na}^{Max} \: + \: g_{L}}\\ \end{array} $$
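These closed forms are easy to evaluate numerically. In the sketch below the conductance and battery values are illustrative Hodgkin-Huxley-like magnitudes, not fitted constants.

```python
# Evaluating the closed forms for V1 and V3 above. The conductance and battery
# values are illustrative Hodgkin-Huxley-like magnitudes, not fitted constants.
def v1_v3(IE1, IE3, gK=36.0, gNa=120.0, gL=0.3,
          EK=-72.7, ENa=55.0, EL=-49.4):
    D1 = 0.20 * gK + 0.35 * gNa + gL
    V1 = (IE1 + 0.20 * gK * EK + 0.35 * gNa * ENa + gL * EL) / D1
    D3 = 0.40 * gK + 0.01 * gNa + gL
    V3 = (IE3 + 0.40 * gK * EK + 0.01 * gNa * ENa + gL * EL) / D3
    return V1, V3

V1, V3 = v1_v3(IE1=0.0, IE3=0.0)
```

Because each voltage is a conductance-weighted average of the battery voltages plus an applied-current term, \(V_{1}\) sits near \(E_{Na}\)-dominated values and \(V_{3}\) near \(E_{K}\)-dominated values.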
Thus,
$${ \fontsize{9.1}{6}\begin{aligned} \frac{\partial V_{1}}{\partial g_{K}^{Max}} &= 0.20 E_{K} \frac{1}{0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L}}\\ &\quad- \: \frac{I_{E}(t_{1}) \: + \: 0.20 g_{K}^{Max} E_{K} \: + \: 0.35 g_{Na}^{Max} E_{Na} \: + \: g_{L} E_{L}} {0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} }\\ &\qquad \frac{1.0} {0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} } \: 0.20 \end{aligned}} $$
This simplifies to
$$ \frac{\partial V_{1}}{\partial g_{K}^{Max}} = \frac{0.20}{0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} } \: \left(E_{K} - V_{1} \right) $$
(18)
Similarly, we find
$$ \frac{\partial V_{1}}{\partial g_{Na}^{Max}} = \frac{0.35}{0.20 g_{K}^{Max} \: + \: 0.35 g_{Na}^{Max} \: + \: g_{L} } \: \left(E_{Na} - V_{1} \right) $$
(19)
$$ \frac{\partial V_{3}}{\partial g_{K}^{Max}} = \frac{0.40}{0.40 g_{K}^{Max} \: + \: 0.01 g_{Na}^{Max} \: + \: g_{L} } \: \left(E_{K} - V_{3} \right) $$
(20)
$$ \frac{\partial V_{3}}{\partial g_{Na}^{Max}} = \frac{0.01}{0.40 g_{K}^{Max} \: + \: 0.01 g_{Na}^{Max} \: + \: g_{L} } \: \left(E_{Na} - V_{3} \right) $$
(21)
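These sensitivity formulas can be checked against a finite difference: perturb \(g_{K}^{Max}\) in the closed form for \(V_{1}\) and compare the measured slope with Equation 18. Parameter magnitudes below are again illustrative.

```python
# Finite-difference check of Equation 18: perturb gK^Max in the closed form for
# V1 and compare with 0.20 (EK - V1) / (0.20 gK + 0.35 gNa + gL). Parameter
# magnitudes are illustrative.
def V1_of(gK, gNa=120.0, gL=0.3, EK=-72.7, ENa=55.0, EL=-49.4, IE1=0.0):
    D = 0.20 * gK + 0.35 * gNa + gL
    return (IE1 + 0.20 * gK * EK + 0.35 * gNa * ENa + gL * EL) / D

gK, eps = 36.0, 1e-6
numeric = (V1_of(gK + eps) - V1_of(gK - eps)) / (2.0 * eps)  # central difference
D = 0.20 * gK + 0.35 * 120.0 + 0.3
analytic = 0.20 / D * (-72.7 - V1_of(gK))                    # Equation 18
```

The slope is negative here: raising \(g_{K}^{Max}\) pulls the peak voltage \(V_{1}\) toward \(E_{K}\).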
We also know that as t goes to infinity, the action potential flattens and \(V_{m}^{\prime }\) approaches 0. Also, the applied current \(I_{E}\) is zero and so we must have
$$\begin{array}{@{}rcl@{}} 0 &=& g_{K}^{Max} n^{4}(V_{\infty},\infty) (V_{\infty} - E_{K}) \: + \: g_{Na}^{Max} m^{3}(V_{\infty},\infty) h(V_{\infty},\infty) (V_{\infty} - E_{Na})\\ && + \: g_{L} (V_{\infty} - E_{L}) \end{array} $$
Our hyperpolarization model is
$$\begin{array}{@{}rcl@{}} Y(t) &=& V_{3} + (V_{4}  V_{3}) \: \tanh \left(g (t  t_{3}) \right) \end{array} $$
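As a quick numerical sanity check of this tail model: Y starts at \(V_{3}\), approaches \(V_{4}\), and crosses the midpoint \(\frac{1}{2}(V_{3}+V_{4})\) at \(t_{3} + \frac{\ln(3)}{2g}\), since \(\tanh(\ln(3)/2) = 1/2\). The values chosen below for \(V_{3}\), \(V_{4}\), g and \(t_{3}\) are illustrative.

```python
import math

# Numerical sanity check of the hyperpolarization tail Y(t): it starts at V3,
# approaches V4, and crosses the midpoint (V3 + V4)/2 at t3 + ln(3)/(2 g),
# since tanh(ln(3)/2) = 1/2. The parameter values are illustrative.
def Y(t, V3=-80.0, V4=-66.0, g=0.5, t3=4.0):
    return V3 + (V4 - V3) * math.tanh(g * (t - t3))

t_mid = 4.0 + math.log(3.0) / (2.0 * 0.5)  # midpoint crossing time for g = 0.5
```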
We have \(V_{\infty }\) is \(V_{4}\). Thus, setting \(V_{m}^{\prime } = 0\) and \(I_{E} = 0\) in the membrane equation, evaluating the gating variables at their steady state values \(n^{4}(V_{4},\infty)\) and \(m^{3}(V_{4},\infty) h(V_{4},\infty)\), and solving for \(V_{4}\), we have
$${\fontsize{8.5}{6}\begin{aligned} V_{4} = \frac{g_{K}^{Max} n^{4}(V_{4},\infty) E_{K} \: + \: g_{Na}^{Max} m^{3}(V_{4},\infty) h(V_{4},\infty) E_{Na} \: + \: g_{L} E_{L}} {g_{K}^{Max} n^{4}(V_{4},\infty) \: + \: g_{Na}^{Max} m^{3}(V_{4},\infty) h(V_{4},\infty) \: + \: g_{L}} \end{aligned}} $$
We see
$$ {\fontsize{9.5}{6}\begin{aligned} \frac{\partial V_{4}}{\partial g_{K}^{Max}} &= \frac{n^{4}(V_{4},\infty)} {g_{K}^{Max} n^{4}(V_{4},\infty) + g_{Na}^{Max} m^{3}(V_{4},\infty) h(V_{4},\infty) + g_{L}}\\ &\quad\times\left(E_{K} - V_{4} \right) \end{aligned}} $$
(22)
$$ {\fontsize{9.5}{6} \begin{aligned} \frac{\partial V_{4}}{\partial g_{Na}^{Max}} &= \frac{m^{3}(V_{4},\infty) h(V_{4},\infty)} {g_{K}^{Max} n^{4}(V_{4},\infty) + g_{Na}^{Max} m^{3}(V_{4},\infty) h(V_{4},\infty) + g_{L}}\\& \quad\times\left(E_{Na} - V_{4} \right) \end{aligned}} $$
(23)
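The steady-state relation for \(V_{4}\) can be solved by damped fixed-point iteration, since \(V_{4}\) is a conductance-weighted average of the battery voltages. In the sketch below the gating curves are simple illustrative sigmoids, not the actual Hodgkin-Huxley rate functions.

```python
import math

# Solving the steady-state relation for V4 by damped fixed-point iteration:
# V4 is a conductance-weighted average of the battery voltages. The gating
# curves n_inf, m_inf, h_inf are simple illustrative sigmoids, NOT the actual
# Hodgkin-Huxley rate functions.
def n_inf(V): return 1.0 / (1.0 + math.exp(-(V + 50.0) / 10.0))
def m_inf(V): return 1.0 / (1.0 + math.exp(-(V + 40.0) / 9.0))
def h_inf(V): return 1.0 / (1.0 + math.exp((V + 62.0) / 7.0))

def solve_V4(gK=36.0, gNa=120.0, gL=0.3, EK=-72.7, ENa=55.0, EL=-49.4):
    V = EL                                  # start from the leak battery
    for _ in range(200):
        wK = gK * n_inf(V) ** 4
        wNa = gNa * m_inf(V) ** 3 * h_inf(V)
        V_new = (wK * EK + wNa * ENa + gL * EL) / (wK + wNa + gL)
        V = 0.7 * V + 0.3 * V_new           # damping stabilizes the iteration
    return V

V4 = solve_V4()
```

At convergence the current balance holds: the weighted sum of the three driving forces at \(V_{4}\) is zero.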
We can also assume that the area under the action potential curve from the point \((t_{0},V_{0})\) to \((t_{1},V_{1})\) is proportional to the incoming current applied. If \(V_{\textit {In}}\) is the axon hillock voltage, the impulse current applied to the axon hillock is \(g_{\textit {In}} V_{\textit {In}}\), where \(g_{\textit {In}}\) is the ball and stick model conductance for the soma. Thus, the approximate area under the action potential curve must match this applied current. We have \(\frac {1}{2} \: (t_{1} - t_{0}) \: (V_{1} - V_{0}) \approx g_{\textit {In}} V_{\textit {In}}\). We conclude \((t_{1} - t_{0}) = \frac {2 g_{\textit {In}} V_{\textit {In}}}{V_{1} - V_{0}}\). Thus
$$\begin{array}{@{}rcl@{}} \frac{\partial (t_{1} - t_{0})}{\partial g_{K}^{Max}} &=& - \frac{t_{1} - t_{0}}{V_{1} - V_{0}} \: \frac{\partial V_{1}}{\partial g_{K}^{Max}} \: ~\text{and}\\ \frac{\partial (t_{1} - t_{0})}{\partial g_{Na}^{Max}} &=& - \frac{t_{1} - t_{0}}{V_{1} - V_{0}} \: \frac{\partial V_{1}}{\partial g_{Na}^{Max}} \end{array} $$
(24)
Also, we know that during the hyperpolarization phase, the sodium current is off and the potassium current is slowly bringing the membrane potential back to the reference voltage. Now, our BFV model does not assume that the membrane potential returns to the reference level. Instead, by using
$$\begin{array}{@{}rcl@{}} Y(t) &=& V_{3} + (V_{4}  V_{3}) \: \tanh \left(g (t  t_{3}) \right) \end{array} $$
we assume the return is to the voltage level \(V_{4}\). At the midpoint, \(Y = \frac {1}{2}(V_{3} + V_{4})\), we find
$$\begin{array}{@{}rcl@{}} \frac{1}{2} (V_{4} - V_{3}) &=& (V_{4} - V_{3}) \: \tanh \left(g (t - t_{3}) \right) \end{array} $$
Thus, letting \(u = g(t - t_{3})\), \(\frac {1}{2} = \frac {e^{2u} - 1}{e^{2u} + 1}\) and we find \( u \: = \: \frac {\ln (3)}{2}\). Solving for t, we then have \(t^{\ast } = t_{3} \: + \: \frac {\ln (3)}{2g}\). From \(t_{3}\) on, the sodium current is off and the Hodgkin-Huxley dynamics are
$$\begin{array}{@{}rcl@{}} C_{m} \frac{dV_{m}}{dt} &=& - g_{K}^{Max} n^{4} (V_{m} - E_{K}) \: - \: g_{L} (V_{m} - E_{L}) \end{array} $$
We want the values of the derivatives to match at \(t^{\ast }\), where \(V^{\ast } \: = \: \frac {1}{2}(V_{3} + V_{4})\) is the corresponding voltage. Now \(g(t^{\ast }-t_{3}) \: = \: \frac {\ln (3)}{2}\), and thus we can evaluate the slope of the sigmoid tail at \(t^{\ast }\). Next, consider the magnitude of \(n^{4}\). We know that at \(t^{\ast }\), \(n^{4}\) is small from Figure 11. Thus, we will replace it by the value 0.01. This gives
$$\begin{array}{@{}rcl@{}} \frac{g}{2} \: (V_{4} - V_{3}) \: \frac{9}{64} &=& - 0.01 \frac{g_{K}^{Max}}{C_{m}} \: \left(\frac{1}{2}(V_{4} + V_{3}) - E_{K} \right) \\ && - \: \frac{g_{L}}{C_{m}} \left(\frac{1}{2}(V_{4} + V_{3}) - E_{L}\right) \end{array} $$
Simplifying, we have
$$\begin{array}{@{}rcl@{}} \frac{9 g}{128} \: (V_{4} - V_{3}) &=& \left(0.01 \frac{g_{K}^{Max}}{C_{m}} E_{K} \: + \: \frac{g_{L}}{C_{m}} E_{L} \right)\\&& - \: \frac{1}{2} \left(0.01 \frac{g_{K}^{Max}}{C_{m}} \: + \: \frac{g_{L}}{C_{m}} \right) (V_{4} + V_{3})\\ \frac{9 g}{64} &=& \left(0.01 \frac{g_{K}^{Max}}{C_{m}} E_{K} \: + \: \frac{g_{L}}{C_{m}} E_{L} \right) \frac{1}{V_{4} - V_{3}} \\ && - \: \left(0.01 \frac{g_{K}^{Max}}{C_{m}} \: + \: \frac{g_{L}}{C_{m}} \right) \frac{V_{4} + V_{3}}{V_{4} - V_{3}} \end{array} $$
We can see clearly from the above equation that the dependence of g on \(g_{K}^{Max}\) and \(g_{\textit {Na}}^{max}\) is quite complicated. However, we can estimate this dependence as follows. We know that \(V_{4}\) is about the reference voltage, −65.9 mV. If we approximate \(V_{3}\) by the potassium battery voltage, \(E_{k} = -72.7\) mV, and \(V_{4}\) by the reference voltage, we find \(\frac {V_{3} + V_{4}}{V_{4} - V_{3}} \: \approx \: \frac {-138.6}{6.8} \: = \: -20.38\) and \(\frac {1}{V_{4} - V_{3}} \: \approx \: \frac {1}{6.8} \: = \: 0.147\). Hence,
$$\begin{array}{@{}rcl@{}} \frac{9 C_{m} g}{64} &=& \: 0.147 \left(0.01 g_{K}^{Max} E_{K} + g_{L} E_{L} \right)\\ &&+ \: 20.38 \left(0.01 g_{K}^{Max} + g_{L} \right)\\ &=& \left(0.0147 E_{K} + 2.038 E_{L} \right) g_{K}^{Max}\\ &&+ g_{L} \left(0.0147 E_{L} + 20.38 \right) \end{array} $$
Thus, we find
$$\begin{array}{@{}rcl@{}} \frac{\partial g}{\partial g_{K}^{Max}} &=& \frac{64}{9 C_{m}} \left(0.0147 E_{K} + 2.038 E_{L} \right) \end{array} $$
(25)
This gives \(\frac {\partial g}{\partial g_{K}^{Max}} \approx 710.1\). Equation 25 shows what our intuition tells us: if \(g_{K}^{Max}\) increases, the potassium current is stronger and the hyperpolarization phase is shortened. On the other hand, if \(g_{K}^{Max}\) decreases, the potassium current is weaker and the hyperpolarization phase is lengthened.
Multiple inputs
Consider a typical input V(t) which is determined by a BFV vector. Without loss of generality, we will focus on excitatory inputs in our discussions. The input consists of three distinct portions. First, there is a parabolic cap above the equilibrium potential determined by the values \((t_{0},V_{0})\), \((t_{1},V_{1})\) and \((t_{2},V_{2})\). Next, the input contains half of another parabolic cap dropping below the equilibrium potential determined by the values \((t_{2},V_{2})\) and \((t_{3},V_{3})\). Finally, there is the hyperpolarization phase having the functional form \(H(t) = V_{3} + (V_{4} - V_{3}) \tanh(g(t - t_{3}))\). Now assume two inputs arrive at the same electrotonic distance L. We label these inputs as A and B as shown in Figure 12.
For convenience of exposition, we also assume \({t_{3}^{A}} \: < \: {t_{3}^{B}}\); otherwise, we just reverse the roles of the variables in our arguments. In this figure, we note only the minimum points on the A and B curves. We merge these inputs into a new input \(V^{N}\) prior to the hyperpolarization phase as follows:
$$\begin{array}{@{}rcl@{}} {t_{0}^{N}} &=& \frac{{t_{0}^{A}} + {t_{0}^{B}}}{2}, \: {V_{0}^{N}} = \frac{{V_{0}^{A}} + {V_{0}^{B}}}{2}, \: ~\text{and}~ {t_{1}^{N}} = \frac{{t_{1}^{A}} + {t_{1}^{B}}}{2} \\ {V_{1}^{N}} &=& \frac{{V_{1}^{A}} + {V_{1}^{B}}}{2}, \: {t_{2}^{N}} = \frac{{t_{2}^{A}} + {t_{2}^{B}}}{2}, \: ~\text{and}~ {V_{2}^{N}} = \frac{{V_{2}^{A}} + {V_{2}^{B}}}{2} \end{array} $$
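These averaging rules are immediate in code. A cap is represented here as the tuple \((t_{0}, V_{0}, t_{1}, V_{1}, t_{2}, V_{2})\), with illustrative values; the tail parameters are merged separately below.

```python
# The averaging rules above, in code. A cap is the tuple
# (t0, V0, t1, V1, t2, V2); the tail parameters are merged separately.
def merge_caps(capA, capB):
    return tuple(0.5 * (a + b) for a, b in zip(capA, capB))

capA = (0.0, -65.0, 1.0, 40.0, 2.0, -65.0)   # illustrative input A
capB = (0.2, -66.0, 1.2, 30.0, 2.2, -66.0)   # illustrative input B
capN = merge_caps(capA, capB)
```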
This constructs the two parabolic caps of the new resultant input by averaging the caps of \(V^{A}\) and \(V^{B}\). The construction of the new hyperpolarization phase is more complicated. The shape of this portion of an action potential has a profound effect on neural modulation, so it is very important to merge the two inputs in a reasonable way. The hyperpolarization phases of \(V^{A}\) and \(V^{B}\) are given by
$$\begin{array}{@{}rcl@{}} H^{A}(t) &=& {V_{3}^{A}} + \left({V_{4}^{A}} - {V_{3}^{A}}\right) \: \tanh \left(g^{A} (t - {t_{3}^{A}}) \right)\\ H^{B}(t) &=& {V_{3}^{B}} + \left({V_{4}^{B}} - {V_{3}^{B}}\right) \: \tanh \left(g^{B} (t - {t_{3}^{B}}) \right) \end{array} $$
We will choose the 4 parameters \(V_{3}\), \(V_{4}\), g and \(t_{3}\) so as to minimize
$$\begin{array}{@{}rcl@{}} E &=& \int_{{t_{3}^{A}}}^{\infty} \: \left(H(t) - H^{A}(t) \right)^{2} \: + \: \left(H(t) - H^{B}(t) \right)^{2} \: dt \end{array} $$
For optimality, we find the parameters where \(\frac {\partial E}{\partial V_{3}}\), \(\frac {\partial E}{\partial V_{4}}\), \(\frac {\partial E}{\partial g}\) and \(\frac {\partial E}{\partial t_{3}}\) are 0. Now,
$$\begin{array}{@{}rcl@{}} \frac{\partial E}{\partial V_{3}} &=&\int_{{t_{3}^{A}}}^{\infty} \: 2 \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\ &&\frac{\partial H}{\partial V_{3}} \: dt \end{array} $$
Further,
$$\begin{array}{@{}rcl@{}} \frac{\partial H}{\partial V_{3}} &=& 1 \: - \: \tanh \left(g (t - t_{3}) \right) \end{array} $$
so we obtain
$$\begin{array}{@{}rcl@{}} 0 &=& \int_{{t_{3}^{A}}}^{\infty} 2 \left\{\left(H(t) - H^{A}(t) \right)+ \left(H(t) - H^{B}(t) \right) \right\}\\ &&\times\left(1 - \tanh \left(g (t - t_{3}) \right) \right) dt \end{array} $$
(26)
We also find
$$\begin{array}{@{}rcl@{}} \frac{\partial E}{\partial V_{4}} &=&\int_{{t_{3}^{A}}}^{\infty} \: 2 \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\&&\times\left(\tanh \left(g (t - t_{3}) \right) \right) \: dt \end{array} $$
as
$$\begin{array}{@{}rcl@{}} \frac{\partial H}{\partial V_{4}} &=& \tanh \left(g (t - t_{3}) \right) \end{array} $$
The optimality condition then gives
$$\begin{array}{@{}rcl@{}} 0 &=&\int_{{t_{3}^{A}}}^{\infty} 2 \left\{\left(H(t) - H^{A}(t) \right) + \left(H(t) - H^{B}(t) \right) \right\}\\&&\tanh \left(g (t - t_{3}) \right) dt \end{array} $$
(27)
Combining equation 26 and equation 27, we find
$$\begin{array}{@{}rcl@{}} 0 &=& \int_{{t_{3}^{A}}}^{\infty} \: \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\ &&\tanh \left(g (t - t_{3}) \right) \: dt. \end{array} $$
It follows after simplification, that
$$ 0 =\int_{{t_{3}^{A}}}^{\infty} \: \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\} \: dt $$
(28)
The remaining optimality conditions give
$$\begin{array}{@{}rcl@{}} \frac{\partial E}{\partial g} &=&\int_{{t_{3}^{A}}}^{\infty} \: 2 \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\&&\frac{\partial H}{\partial g} \: dt \: = \: 0\\ \frac{\partial E}{\partial t_{3}} &=&\int_{{t_{3}^{A}}}^{\infty} \: 2 \left\{ \left(H(t) - H^{A}(t) \right)\: + \: \left(H(t) - H^{B}(t) \right) \right\}\\&&\frac{\partial H}{\partial t_{3}} \: dt \: = \:0\\ \end{array} $$
We calculate
$$\begin{array}{@{}rcl@{}} \frac{\partial H}{\partial g} &=& (V_{4} - V_{3}) (t - t_{3}) \: sech^{2} \biggl (g (t - t_{3}) \biggr)\\ \frac{\partial H}{\partial t_{3}} &=& - (V_{4} - V_{3}) \: g \: sech^{2} \biggl (g (t - t_{3}) \biggr) \end{array} $$
Thus, we find
$$\begin{array}{@{}rcl@{}} 0 &=& \int_{{t_{3}^{A}}}^{\infty} \: \left\{\left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\&&\quad\times(V_{4} - V_{3}) (t - t_{3}) \: sech^{2} \left(g (t - t_{3}) \right) \: dt\\ 0 &=& \int_{{t_{3}^{A}}}^{\infty} \: \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\ &&\quad\times(V_{4} - V_{3}) \: g \: sech^{2} \left(g (t - t_{3}) \right) \: dt. \end{array} $$
This implies
$$\begin{array}{@{}rcl@{}} 0 &=& \int_{{t_{3}^{A}}}^{\infty} \: \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\ &&\quad t \: sech^{2} \left(g (t - t_{3}) \right) \: dt\\ & & - \: t_{3} \: \int_{{t_{3}^{A}}}^{\infty} \: \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\&&\quad sech^{2} \left(g (t - t_{3}) \right) \: dt\\ 0 &=& \int_{{t_{3}^{A}}}^{\infty} \: \left\{ \left(H(t) - H^{A}(t) \right) \: + \: \left(H(t) - H^{B}(t) \right) \right\}\\&&\quad sech^{2} \left(g (t - t_{3}) \right) \: dt. \end{array} $$
This clearly can be simplified to
$$\begin{array}{@{}rcl@{}} 0 &=& \int_{{t_{3}^{A}}}^{\infty} \left\{ \left(H(t) - H^{A}(t) \right) + \left (H(t) - H^{B}(t) \right) \right\} t\\&&sech^{2} \left (g (t - t_{3}) \right) dt \end{array} $$
(29)
We can then satisfy equation 28 and equation 29 by making
$$\begin{array}{@{}rcl@{}} (H(t) - H^{A}(t)) \: + \: (H(t) - H^{B}(t)) &=& 0. \end{array} $$
(30)
Equation 30 can be rewritten as
$$\begin{array}{@{}rcl@{}} 0 &=& \left (V_{3} - \frac{{V_{3}^{A}} + {V_{3}^{B}}}{2} \right) \: + \: (V_{4} - V_{3}) \tanh \left (g (t - t_{3}) \right) \\ & & - \: \frac{{V_{4}^{B}}-{V_{3}^{B}}}{2} \: \tanh \left (g^{B} \left(t - {t_{3}^{B}}\right) \right) \: - \: \frac{{V_{4}^{A}}-{V_{3}^{A}}}{2}\\ &&\tanh \left (g^{A} \left(t - {t_{3}^{A}}\right) \right) \end{array} $$
(31)
This equation is true as t→∞. Thus, we obtain the identity
$$\begin{array}{@{}rcl@{}} 0 &=& \left (V_{3} - \frac{{V_{3}^{A}} + {V_{3}^{B}}}{2} \right) \: + \: \left (V_{4} - V_{3} \right) \: - \: \frac{{V_{4}^{B}}-{V_{3}^{B}}}{2}\\ && - \: \frac{{V_{4}^{A}}-{V_{3}^{A}}}{2} \end{array} $$
Upon simplification, we find \(0 = V_{3} - \frac {{V_{3}^{A}} + {V_{3}^{B}}}{2}\) and \(0 = V_{4} - \frac {{V_{4}^{A}} + {V_{4}^{B}}}{2}\). This leads to our choices for \(V_{3}\) and \(V_{4}\): \(V_{3} = \frac {{V_{3}^{A}} + {V_{3}^{B}}}{2}\) and \(V_{4} = \frac {{V_{4}^{A}} + {V_{4}^{B}}}{2}\). Equation 31 is also true at \(t = {t_{3}^{A}}\) and \(t = {t_{3}^{B}}\). This gives
$$\begin{array}{@{}rcl@{}} 0 &=& \left(V_{3} - \frac{{V_{3}^{A}} + {V_{3}^{B}}}{2}\right) + (V_{4} - V_{3}) \: \tanh \left(g \left({t_{3}^{A}} - t_{3}\right) \right) \\ & & - \: \frac{{V_{4}^{B}} - {V_{3}^{B}}}{2} \: \tanh \left(g^{B} \left({t_{3}^{A}} - {t_{3}^{B}}\right) \right) \end{array} $$
(32)
$$\begin{array}{@{}rcl@{}} 0 &=& \left(V_{3} - \frac{{V_{3}^{A}} + {V_{3}^{B}}}{2}\right) + (V_{4} - V_{3}) \: \tanh \left(g \left({t_{3}^{B}} - t_{3}\right) \right) \\ && - \: \frac{{V_{4}^{A}} - {V_{3}^{A}}}{2} \: \tanh \left(g^{A} \left({t_{3}^{B}} - {t_{3}^{A}}\right) \right) \end{array} $$
(33)
For convenience, define \(w_{34}^{A} = \frac {{V_{4}^{A}} - {V_{3}^{A}}}{2}\) and \(w_{34}^{B} = \frac {{V_{4}^{B}} - {V_{3}^{B}}}{2}\). Then, using equation 32 and equation 33, we find
$$\begin{array}{@{}rcl@{}} 0 &=& (V_{4} - V_{3}) \tanh \left(g \left({t_{3}^{A}} - t_{3}\right) \right)\\ && - \: w_{34}^{B} \tanh \left(g^{B} \left({t_{3}^{A}} - {t_{3}^{B}}\right) \right)\\ 0 &=& (V_{4} - V_{3}) \tanh \left(g \left({t_{3}^{B}} - t_{3}\right) \right)\\&& - \: w_{34}^{A} \tanh \left(g^{A} \left({t_{3}^{B}} - {t_{3}^{A}}\right) \right) \end{array} $$
This is then rewritten as
$$\begin{array}{@{}rcl@{}} \tanh\left(g \left({t_{3}^{A}} - t_{3}\right)\right) &=& \frac{w_{34}^{B} \: \tanh \left(g^{B} \left({t_{3}^{A}} - {t_{3}^{B}}\right) \right)}{V_{4} - V_{3}}\\ \tanh\left(g \left({t_{3}^{B}} - t_{3}\right)\right) &=& \frac{w_{34}^{A} \: \tanh \left(g^{A} \left({t_{3}^{B}} - {t_{3}^{A}}\right) \right)}{V_{4} - V_{3}} \end{array} $$
Defining
$$\begin{array}{@{}rcl@{}} z_{A} &=& \frac{w_{34}^{B} \tanh \left(g^{B} \left({t_{3}^{A}} - {t_{3}^{B}}\right) \right)}{V_{4} - V_{3}} \: \text{and }\\ z_{B} &=& \frac{w_{34}^{A} \tanh \left(g^{A} \left({t_{3}^{B}} - {t_{3}^{A}}\right) \right)}{V_{4} - V_{3}} \end{array} $$
we find that the optimality conditions have led to the two nonlinear equations for g and \(t_{3}\) given by
$$\begin{array}{@{}rcl@{}} \tanh \left(g \left({t_{3}^{A}} - t_{3}\right) \right) &=& z_{A} \: \text{and}\\ \tanh \left(g \left({t_{3}^{B}} - t_{3}\right) \right) &=& z_{B} \end{array} $$
(34)
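As a brief numerical aside, system 34 can also be checked directly with inverse hyperbolic tangents: subtracting the atanh forms of the two equations eliminates \(t_{3}\). The sketch below uses illustrative values for \(z_{A}\), \(z_{B}\), \({t_{3}^{A}}\), and \({t_{3}^{B}}\) (the numbers are assumptions, not values from the text); the text instead proceeds with a first-order Taylor approximation below.

```python
import math

# Illustrative values with zA < 0 < zB and t3A < t3B (assumptions).
zA, zB = -0.3, 0.5
t3A, t3B = 1.0, 2.0

# Solve tanh(g*(t3A - t3)) = zA and tanh(g*(t3B - t3)) = zB exactly:
# subtracting the atanh forms eliminates t3.
g = (math.atanh(zB) - math.atanh(zA)) / (t3B - t3A)
t3 = t3A - math.atanh(zA) / g

# Both equations of system 34 are satisfied by this (g, t3) pair.
assert abs(math.tanh(g * (t3A - t3)) - zA) < 1e-12
assert abs(math.tanh(g * (t3B - t3)) - zB) < 1e-12
```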
Note that \(V_{4} - V_{3} = \frac{{V_{4}^{A}} + {V_{4}^{B}}}{2} - \frac{{V_{3}^{A}} + {V_{3}^{B}}}{2} = w_{34}^{A} + w_{34}^{B}\), so we have
$$\begin{array}{@{}rcl@{}} z_{A} &=& \frac{w_{34}^{B} \: \tanh \left(g^{B} \left({t_{3}^{A}} - {t_{3}^{B}}\right) \right)}{V_{4} - V_{3}} = \\&& -\frac{w_{34}^{B} \: \tanh \left(g^{B} \left({t_{3}^{B}} - {t_{3}^{A}}\right) \right)}{w_{34}^{A} + w_{34}^{B}}\\ z_{B} &=& \frac{w_{34}^{A} \: \tanh \left(g^{A} \left({t_{3}^{B}} - {t_{3}^{A}}\right) \right)}{V_{4} - V_{3}}\\&=& \frac{w_{34}^{A} \: \tanh \left(g^{A} \left({t_{3}^{B}} - {t_{3}^{A}}\right) \right)}{w_{34}^{A} + w_{34}^{B}} \end{array} $$
Hence,
$$\begin{array}{@{}rcl@{}} z_{A} &>& -\frac{w_{34}^{B}}{w_{34}^{A} + w_{34}^{B}} \: > \: -1 \: ~\text{and}~ \: z_{B} < \frac{w_{34}^{A}}{w_{34}^{A} + w_{34}^{B}} \: < \: 1 \end{array} $$
so that \(z_{A} < 0 < z_{B}\). It seems reasonable that the optimal value of \(t_{3}\) should lie between \({t_{3}^{A}}\) and \({t_{3}^{B}}\). Note that equations 34 preclude the solutions \(t_{3} = {t_{3}^{A}}\) or \(t_{3} = {t_{3}^{B}}\). To solve the nonlinear system for g and \(t_{3}\), we will approximate tanh by its first order Taylor series expansion. This seems reasonable, as we do not expect \(g ({t_{3}^{A}} - t_{3})\) and \(g ({t_{3}^{B}} - t_{3})\) to be far from 0. This gives the approximate system
$$\begin{array}{@{}rcl@{}} g \left({t_{3}^{A}} - t_{3} \right) &\approx& z_{A} \: \text{and } g \left({t_{3}^{B}} - t_{3} \right) \approx z_{B} \end{array} $$
(35)
Using these, we find \(g = \frac {z_{B}}{{t_{3}^{B}} - t_{3}}\) and we obtain \(\frac {z_{B}}{{t_{3}^{B}} - t_{3}} \: \left ({t_{3}^{A}} - t_{3}\right) = z_{A}\). This can be simplified as follows:
$$\begin{array}{@{}rcl@{}} \frac{{t_{3}^{A}} - t_{3}}{{t_{3}^{B}} - t_{3}} &=& \frac{z_{A}}{z_{B}}, \: \left({t_{3}^{A}} - t_{3}\right) \: z_{B} = \left({t_{3}^{B}} - t_{3}\right) \: z_{A}, \: \text{and }\\ && {t_{3}^{A}} z_{B} - {t_{3}^{B}} z_{A} = t_{3} \: (z_{B} - z_{A}) \end{array} $$
Thus, we find the optimal value of \(t_{3}\) is approximately
$$\begin{array}{@{}rcl@{}} t_{3} &=& \frac{{t_{3}^{A}} z_{B} - {t_{3}^{B}} z_{A}}{z_{B} - z_{A}} \end{array} $$
(36)
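A short numerical check of equation 36 (the parameter values below are illustrative assumptions): the formula writes \(t_{3}\) as a convex combination of \({t_{3}^{A}}\) and \({t_{3}^{B}}\) with positive weights \(\frac{z_{B}}{z_{B} - z_{A}}\) and \(\frac{-z_{A}}{z_{B} - z_{A}}\), so it lies strictly between the two transition times.

```python
# Illustrative values with zA < 0 < zB, as derived above (assumptions).
zA, zB = -0.3, 0.5
t3A, t3B = 1.0, 2.0

# Equation 36: the approximate optimal transition time.
t3 = (t3A * zB - t3B * zA) / (zB - zA)

# t3 is a convex combination of t3A and t3B: the weights
# zB/(zB - zA) and -zA/(zB - zA) are positive and sum to 1.
lam = zB / (zB - zA)
assert abs(t3 - (lam * t3A + (1.0 - lam) * t3B)) < 1e-12
assert t3A < t3 < t3B
```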
Using the approximate value of \(t_{3}\), we find the optimal value of g can be approximated as follows:
$$\begin{array}{@{}rcl@{}} g &=& \frac{z_{B}}{{t_{3}^{B}} - \frac{{t_{3}^{A}} z_{B} - {t_{3}^{B}} z_{A}}{z_{B} - z_{A}}} = \frac{z_{B} (z_{B} - z_{A})}{{t_{3}^{B}} (z_{B} - z_{A}) - \left({t_{3}^{A}} z_{B} - {t_{3}^{B}} z_{A}\right)} \\ &=& \frac{z_{B} (z_{B} - z_{A})}{{t_{3}^{B}} z_{B} - {t_{3}^{A}} z_{B}} = \frac{z_{B} - z_{A}}{{t_{3}^{B}} - {t_{3}^{A}}} \end{array} $$
Hence, we find the approximate optimal value of g is
$$\begin{array}{@{}rcl@{}} g &=& \frac{z_{B} - z_{A}}{{t_{3}^{B}} - {t_{3}^{A}}} \end{array} $$
(37)
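A corresponding check for equation 37 (again with illustrative values that are assumptions): the simplified expression agrees with the unsimplified form \(\frac{z_{B}}{{t_{3}^{B}} - t_{3}}\) and is positive, since \(z_{B} > 0 > z_{A}\) and \({t_{3}^{B}} > {t_{3}^{A}}\).

```python
# Same illustrative values as before (zA < 0 < zB, t3A < t3B).
zA, zB = -0.3, 0.5
t3A, t3B = 1.0, 2.0

t3 = (t3A * zB - t3B * zA) / (zB - zA)   # equation 36
g = (zB - zA) / (t3B - t3A)              # equation 37

# g agrees with the unsimplified form zB/(t3B - t3) and is positive.
assert abs(g - zB / (t3B - t3)) < 1e-12
assert g > 0
```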
It is easy to check that this value of \(t_{3}\) lies in \(\left ({t_{3}^{A}}, {t_{3}^{B}}\right)\), as we suspected it should, and that g is positive. We summarize our results. Given two input BFVs, the sigmoid portions of the incoming BFVs combine into the new sigmoid given by
$$\begin{array}{@{}rcl@{}} H(t) &=& V_{3} + (V_{4} - V_{3}) \: \tanh \left(g (t - t_{3}) \right)\\ &=& \frac{{V_{3}^{A}} + {V_{3}^{B}}}{2} + \left(\frac{{V_{4}^{A}} - {V_{3}^{A}}}{2} + \frac{{V_{4}^{B}} - {V_{3}^{B}}}{2} \right)\\ && \tanh \left(\frac{z_{B} - z_{A}}{{t_{3}^{B}} - {t_{3}^{A}}} \left(t - \frac{{t_{3}^{A}} z_{B} - {t_{3}^{B}} z_{A}}{z_{B} - z_{A}} \right) \right) \end{array} $$
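The summary above can be sketched as a small routine. It implements exactly the \((V_{3}, V_{4}, g, t_{3})\) construction derived in this section; the sample parameter values are illustrative assumptions, not values from the text.

```python
import math

def combine_sigmoids(V3A, V4A, gA, t3A, V3B, V4B, gB, t3B):
    """Combine the sigmoid portions of two BFVs into a single sigmoid
    H(t) = V3 + (V4 - V3) * tanh(g * (t - t3)), per the derivation above."""
    V3 = 0.5 * (V3A + V3B)                  # V3 = (V3^A + V3^B)/2
    V4 = 0.5 * (V4A + V4B)                  # V4 = (V4^A + V4^B)/2
    w34A = 0.5 * (V4A - V3A)
    w34B = 0.5 * (V4B - V3B)
    zA = w34B * math.tanh(gB * (t3A - t3B)) / (V4 - V3)
    zB = w34A * math.tanh(gA * (t3B - t3A)) / (V4 - V3)
    t3 = (t3A * zB - t3B * zA) / (zB - zA)  # equation 36
    g = (zB - zA) / (t3B - t3A)             # equation 37
    return V3, V4, g, t3

# Illustrative membrane-voltage-style inputs (hypothetical values).
V3, V4, g, t3 = combine_sigmoids(-70.0, 30.0, 1.2, 1.0, -65.0, 25.0, 0.9, 2.0)
H = lambda t: V3 + (V4 - V3) * math.tanh(g * (t - t3))
```

With these sample inputs, the combined transition time lands between the two input transition times and g is positive, as the analysis predicts.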
Given an input sequence of BFVs into a port on the dendrite of an accepting neuron, \(\{V_{n}, V_{n-1}, \ldots, V_{1}\}\), the procedure discussed above computes the combined response that enters that port at a particular time. The inputs into the dendritic system are combined pairwise; \(V_{2}\) and \(V_{1}\) combine into a \(V_{new}\), which then combines with \(V_{3}\), and so on. We can do this at each electrotonic location.
Preneurons can supply input to the dendrite cable at electrotonic positions w=0 to w=4. These inputs generate an ESP or ISP via many possible mechanisms, or they alter the structure of the dendrite cable itself by the transcription of proteins. The output of a preneuron is a BFV, which must then be associated with an abstract trigger as we have discussed in earlier chapters. The strength of a BFV output will be estimated as follows: the area under the first parabolic cap of the BFV can be approximated by the area of the triangle, A, formed by the vertices \((t_{0},V_{0})\), \((t_{1},V_{1})\), \((t_{2},V_{2})\). This area is shown in Figure 13 and is given by \(A = \frac {1}{2}(V_{2} - V_{0})(t_{2} - t_{0})\).
The size of this area then allows us to determine the first and second messenger contributions this input makes to the postneuron.
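The strength estimate is a one-line computation; a minimal sketch (the function name is ours, not the text's):

```python
# Triangle approximation of the area under the first parabolic cap of a
# BFV, using the vertices (t0, V0), (t1, V1), (t2, V2) from Figure 13.
def bfv_strength(t0, V0, t1, V1, t2, V2):
    """A = (1/2) * (V2 - V0) * (t2 - t0), per the text's estimate.
    The middle vertex (t1, V1) does not enter the formula as stated."""
    return 0.5 * (V2 - V0) * (t2 - t0)
```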