Integrative synchronization mechanisms in connectionist cognitive neuroarchitectures

Abstract

Based on the mathematics of nonlinear Dynamical System Theory, neurocognition can be analyzed by convergent fluid and transient neurodynamics in abstract n-dimensional system phase spaces in the form of nonlinear vector fields, vector streams or vector flows (the so-called “vectorial form”). This processual or dynamical perspective on cognition, including the dynamical binding mechanisms in cognitive neuroarchitectures, has the advantage of more accurately modeling the transient cognitive processes. Thus, neurocognition can be considered as being organized by integrative synchronization mechanisms which best explain the liquid flow of neurocognitive information orchestrated in a network of positive and/or negative feedback loops in the subcortical and cortical areas. The human neurocognitive system can be regarded as a nonlinear, dynamical and open nonequilibrium system. This new fluid or liquid perspective in cognitive science and cognitive neuroscience can be regarded as a contribution towards bridging the gap between the discrete, abstract symbolic description of propositions in the mind and their continuous, numerical implementation in self-organizing neural networks modeling the neural information processing in the human brain.

Introduction

One of the core themes in cognitive science consists in the endeavour to achieve an integrated theory of cognition, which requires integrative mechanisms explaining how the information processing occurring simultaneously in spatially segregated (sub-)cortical areas is coordinated and bound together to give rise to coherent perceptual and symbolic representations (Engel and Singer 2001; Singer 2013a). This so-called “(general) binding problem” (Hardcastle 1998; Hummel 1999; Singer 1999a; Sougné 2003; von der Malsburg 2001), that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations (“feature binding”) to the most complex cognitive representations like symbol structures (“variable binding”), appears to be solved by temporal integrative mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal synchronization of neural phase activity based on dynamical self-organizing processes in neural networks. In what follows several theoretical models in neuroinformatics, in cognitive science, in cognitive and computational neuroscience are presented which use this mechanism of temporal synchrony against the background of connectionism, the theory of non-linear dynamical systems, and the self-organization paradigm.

First, the term “(integrative) synchronization mechanism” may be defined briefly as follows, referring to the mathematical concept of an algebraic structure: A (causal) structure, realized by a function, wherein an operand or a component, for example, a semantic concept in language processing or an object feature in perception, is set in relation to another operand or component, for example to a syntactic position or to another object feature, via a synchronous operation. This operation may consist of a vector multiplication, as it has been introduced in the activation, propagation and learning functions of connectionism (see Section “Basic computational principles in connectionism”), for example in the “Tensor Product Representation” in Section “Integrated Connectionist/Symbolic (ICS) Cognitive Architecture” with a synchronous tensor product operation, where a semantic constituent, realized by a filler vector, is bound to a syntactic position, realized by a role vector, or in the “Holographic Reduced Representations” in Section “Holographic Reduced Representations (HRRs)” with a synchronous circular convolution operation.

In Section “Basic computational principles in connectionism”, the standard computational principles in connectionism are reviewed from the scientific literature in computer science and in theoretical neurophilosophy. In the following Section “Review: The binding problem in the cognitive neurosciences: binding-by-synchrony mechanism”, the binding problem in the cognitive neurosciences is reviewed with focus on the binding-by-synchrony hypothesis in neurophysiology. After that, theoretical models and cognitive neuroarchitectures are considered that use integrative synchronization mechanisms solving the binding problem in low-level cognition (Section “Review: Modeling integrative synchronization mechanisms in low-level cognition”) and in high-level cognition (Section “Review: Modeling integrative synchronization mechanisms in high-level cognition”). Finally, in Section “Conclusions”, these computational cognitive neuroarchitectures in modern connectionism are discussed, as well as their implications for the future of cognitive science and neurophilosophy (Section “Outlook on future research”).

Basic computational principles in connectionism

Since the 1980s, when the theory of artificial neural networks (Haykin 2009) was emerging, in cognitive science two alternative paradigms were pursued to model cognition. On the one hand, the classical symbolic theory, so-called “symbolism” (Fodor and Pylyshyn 1988a), regards symbol processing as the suitable model of cognition, that is, the serial, syntactical and universal transformation of discrete elementary symbols in complex symbol structures by means of computational algorithms. On the other hand, so-called “connectionism” (Bechtel and Abrahamsen 2002; Clark 2001; Garson 2015) regards parallel and distributed information processing in the form of vector and tensor constructions as the suitable model of cognition, that is, the application of artificial neural networks with architectures possessing a high grade of neurobiological plausibility (“brain style modeling”).

An “Artificial Neural Network (ANN)” may be considered as a directed and weighted mathematical graph: It consists of relatively simple (processing) units, the so-called “nodes,” which are technical neurons and are wired with one another through weighted connections, the so-called “edges.”

The neurons compute their actual state of activation, consisting of a numerical activation value, by means of an activation function, conditioned on the previous state of activation $a_j(t)$, the net input $net_j(t)$ and the threshold $\theta_j$:

$$ a_{j} (t + 1) = f_{act} [a_{j} (t), {net}_{j}(t), \theta_{j}]. $$
(1)

Thus, the net input is computed by means of the propagation function, that is, as the sum of the outputs of the presynaptic neurons multiplied by their respective connection weights: if the previous state of activation of the postsynaptic neuron together with its net input now exceeds its threshold, the postsynaptic neuron becomes active:

$$ {net}_{j} (t) = \sum\limits_{i} o_{i}(t) w_{ij}. $$
(2)

The core of neural network theory is the introduction of (synaptic) learning rules, that is, rules for the change of synaptic weights as a function of the network’s activation state. Thus, a learning rule is an algorithm according to which an artificial neural network learns to yield the desired output for a given input. The “Hebb rule,” named after the Canadian psychologist Donald O. Hebb (Hebb 1949), which is the foundation of the principle of neural plasticity, states: The synaptic weight is increased if the pre- and postsynaptic neurons are active in subsequent time steps, i.e. if the presynaptic neuron fires and the synapse is “successful” in the sense that the postsynaptic neuron fires as well:

$$ \Delta w_{ij} = \eta o_{i} a_{j}, $$
(3)

where $\Delta w_{ij}$ is the change of the connection weight $w_{ij}$, $\eta$ is a constant learning rate, $o_i$ is the output of the presynaptic neuron $i$, and $a_j$ is the activation of the postsynaptic neuron $j$.
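A minimal computational sketch of Eqs. (1)–(3) may be helpful here; the reduction of the activation function $f_{act}$ to a plain threshold and all variable values are illustrative assumptions, not part of any architecture discussed below:

```python
import numpy as np

# One update of a single postsynaptic neuron j followed by a Hebbian
# weight change, per Eqs. (1)-(3).

def net_input(o, w):
    # Eq. (2): net_j(t) = sum_i o_i(t) * w_ij
    return np.dot(o, w)

def activation(net, theta):
    # Eq. (1), simplified to a threshold: active if net input exceeds theta
    return 1.0 if net > theta else 0.0

def hebb_update(w, o, a_j, eta=0.1):
    # Eq. (3): delta w_ij = eta * o_i * a_j
    return w + eta * o * a_j

# Usage: three presynaptic neurons drive one postsynaptic neuron.
o = np.array([1.0, 0.0, 1.0])   # presynaptic outputs o_i(t)
w = np.array([0.4, 0.2, 0.3])   # connection weights w_ij
a_j = activation(net_input(o, w), theta=0.5)
w = hebb_update(w, o, a_j)      # weights of co-active pairs are increased
```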

The information of a connectionist system is coded by “distributed representations,” described through the connection matrix of the network, that is, the presence of a given (sensory) information element may be determined by the activation pattern distributed over a set of neurons, in which the activity of a single neuron is part of the representation of many alternative information elements (“Parallel Distributed Processing (PDP)”) (McClelland et al. 1986).

Self-organization in neuroinformatics

In the following, the Self-Organizing (Feature) Map (SO(F)M) (Kohonen 1982; 2001a) is described, also called the “Kohonen map,” named after the Finnish engineer Teuvo Kohonen, which simulates the information processing of the pyramidal cells in the cortex. A Kohonen map consists of an input layer and a “Kohonen layer”: all neurons of the input layer are connected in parallel via variable weight vectors, also called “reference vectors” or “synapse vectors,” to all neurons of the competitive layer, also denoted the “Kohonen layer.” That means that all competitive neurons receive the same input signals.

At the beginning of the training, the weights of the synapse vectors are initialized randomly. If an arbitrary input pattern is presented, each neuron of the Kohonen layer receives a (vector) copy of this pattern, modified by the different synaptic weights, such that the neurons of the Kohonen layer differ in the degree to which they are excited: the neuron whose synapse vector best matches the input vector will “win,” that is, will be excited most strongly. The “Best-Matching Unit (BMU),” denoted as c, is defined by the minimal Euclidean distance between the input vector $x(t)$ and the corresponding reference vectors $m_i(t)$:

$$ c = \arg \min\limits_{i} \{ \left\| x(t) - m_{i}(t) \right\| \} $$
(4)

Because the match is never exact, however, learning occurs: The synapse vector of the BMU is now shifted toward the input vector by a small amount. To a smaller extent, the synapse vectors of the neurons within the BMU’s “neighborhood” are also shifted toward the input vector. As a result, input patterns similar to the pattern represented by the BMU are represented with a higher degree of probability in the neighborhood of the BMU.

The reference vectors of the BMU and of its topological “neighbors” are thus adapted in the direction of the actual input vector, weighted by the “neighborhood function” $h_{ci}$:

$$ m_{i}(t + 1) = m_{i}(t) + h_{ci}(t) [x(t) - m_{i}(t)], $$
(5)

for example with

$$ h_{ci,\mathrm{gauss}}(t) = \frac{1}{\sigma(t)\sqrt{2\pi}} \cdot \exp\left(-\frac{\left\| r_{c} - r_{i} \right\|^{2}}{2\sigma^{2}(t)}\right) $$
(6)

In other words, the BMU “excites” other neurons within a specific environment and “inhibits” more distant neurons according to the “principle of lateral inhibition.” On the basis of this network architecture, the neurons of the Kohonen layer can adjust their synapse weights by self-organization in such a way that a topographical feature map is formed. That means that certain features of the input pattern are mapped in a regular manner onto a certain network location, such that similar input patterns are represented close together and input patterns that occur frequently are represented in a larger area.
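A minimal sketch of one training step of Eqs. (4)–(6); the one-dimensional Kohonen layer is an illustrative assumption, and the normalization factor of Eq. (6) is folded into the learning rate for simplicity:

```python
import numpy as np

# One SOM training step (Eqs. 4-6) for a one-dimensional Kohonen layer
# of 20 units; variable names are illustrative.

rng = np.random.default_rng(0)
n_units, dim = 20, 3
m = rng.random((n_units, dim))        # reference (synapse) vectors m_i
r = np.arange(n_units, dtype=float)   # grid positions r_i of the units

def som_step(m, x, sigma=3.0, eta=0.5):
    # Eq. (4): best-matching unit c = argmin_i ||x(t) - m_i(t)||
    c = np.argmin(np.linalg.norm(x - m, axis=1))
    # Eq. (6): Gaussian neighborhood around the BMU's grid position
    # (the 1/(sigma*sqrt(2*pi)) factor is absorbed into eta here)
    h = np.exp(-np.abs(r - r[c]) ** 2 / (2 * sigma ** 2))
    # Eq. (5): shift each m_i toward x, weighted by the neighborhood
    return m + eta * h[:, None] * (x - m)

for _ in range(100):                  # present random input patterns
    m = som_step(m, rng.random(dim))
```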

Review: The binding problem in the cognitive neurosciences: binding-by-synchrony mechanism

The general “binding problem” (Treisman 1996; von der Malsburg 1999; 2001) in the cognitive neurosciences consists in identifying mechanisms that integrate neural processes in order to generate coherent perceptual impressions, by which, for instance, sensory information in visual perception is structured in such a manner that it can be “bound” together into coherent perceptual impressions. In other words, one has to detect which neurophysiological mechanisms of feature binding and of Gestalt laws are active in perceiving the environment, and to determine which elementary object properties and object relations must be combined such that a visual situation can be adequately analyzed and represented (“scene analysis”).

Since the 1980s, the connectionist model of “population coding” (Singer 2002) has developed into the neurophysiological theory of perception, also called the “assembly model” (Singer 1999a). It holds that the elementary object properties and the complex objects in the visual cortex are represented by means of populations of temporally synchronously active neurons, the so-called “(cell) assemblies”: According to this “binding-by-synchrony hypothesis” (Singer 1999b; 2009; 2013a,b), developed by Wolf Singer and his former collaborators Andreas K. Engel and Peter König, one has to regard these cell assemblies of coherently active neurons as the fundamental units of information processing in the cortex. Thus, the assembly model holds that those sensory neurons activated by the same object are bound together through a temporal phase synchronization of their oscillatory impulses, precise down to a few milliseconds, and thereby constitute populations of neurons, namely the “(cell) assemblies,” in such a manner that a coherent percept can be constructed. Thus, an adequate mapping of contours to a specific object, for example, can be performed in scene analysis, as proposed at the beginning of the 1980s by Christoph von der Malsburg with his so-called “correlation theory of brain function” (von der Malsburg 1981/1994).

In a huge number of animal experiments, especially on cats, but also in experiments on humans (an overview can be found in Singer 1999b; 2009; 2013a,b), it has been demonstrated through “cross-correlation analysis” (Engel et al. 1990) that the neurons in the visual cortex synchronize their action potentials with a precision of a few milliseconds. Thus, they can be combined into assemblies, not only within particular cortical columns and cortical areas, e.g. the primary visual area V1, but also between the different visual areas within a hemisphere. These synchronization processes are predominantly observed in a specific frequency band, the so-called “gamma band” (Fries et al. 2007), that is, in a frequency range of about 30–90 Hz. It remains an open question whether neuronal synchrony is a relevant causal mechanism for the phenomenon of (conscious) perceptual binding or a statistical (cor-)relation, and so only a distal “marker” of binding (Hardcastle 1999).
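As a simplified illustration of this kind of analysis (a sketch, not the original experimental pipeline), a cross-correlogram of two binned spike trains shows a peak at lag zero when the trains are synchronized by a shared drive:

```python
import numpy as np

# Cross-correlogram of two synthetic spike trains in 1 ms bins; the
# shared "drive" is an illustrative stand-in for common synchronizing input.

rng = np.random.default_rng(1)
n_bins = 1000
common = rng.random(n_bins) < 0.03          # shared drive
s1 = common | (rng.random(n_bins) < 0.01)   # spike train of neuron 1
s2 = common | (rng.random(n_bins) < 0.01)   # spike train of neuron 2

def cross_correlogram(a, b, max_lag=50):
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.sum(a * np.roll(b, k)) for k in lags])

lags, cc = cross_correlogram(s1, s2)
print(lags[np.argmax(cc)])                  # a peak at lag 0 indicates synchrony
```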

Review: Modeling integrative synchronization mechanisms in low-level cognition

In what follows, a cognitive neuroarchitecture is presented that solves the binding problem in perception (“low-level cognition”), that is, the problem of how elementary object features and object relations, like the object color or the object form, can be dynamically bound together or integrated into a representation of this perceptual object by means of a synchronization mechanism (“feature binding,” “feature linking”).

Oscillatory networks

According to Fodor and Pylyshyn (1988b), in order to build an adequate theory of human cognition, one has to explain four empirical phenomena: the productivity, systematicity and compositionality of human language and the systematicity of inference. Werning (2001) argues that the problem of semantic compositionality (see Glossary) - i.e., the fact that the meaning of a complex term is a syntax-dependent function of the meanings of the particular syntactic constituents of this complex term - requires that a neural architecture preserve the causal relations among the constituent terms within a language. Establishing these constituent relations enables a cognitive system to compose complex representations (cf. the so-called “binding problem”), involving a synchronic relation, namely the relation of synchrony between phases of neural activity, which can be defined by so-called “Oscillatory Networks (Werning 2001).”

Oscillatory Networks can now be given an abstract algebraic description, denoted as algebra N, which is based on only one fundamental operation: being synchronous with, which relates the phases of neural activity and is referred to by the operation symbol “ ≈N”. The primitive entities of the algebra are (1) just the phases of neural activity \({\varphi _{1}^{N}}, \ldots, {\varphi _{m}^{N}}\), and (2) the sets of phases \({F_{1}^{N}}, \ldots, {F_{n}^{N}}\) related to each collection of neurons which indicate a certain feature in their receptive field. This notation of the algebra N is isomorphic to a compositional and systematic language, defined as an algebra L, which is also based on only one fundamental operation: being the same as, denoted by the symbol “ ≈L.” This operation relates indexical expressions like “this” and “that,” whereby the primitive entities of the algebra L are (1) just these specific indexicals \({\varphi _{1}^{L}}, \ldots, {\varphi _{m}^{L}}\), and (2) specific predicates \({F_{1}^{L}}, \ldots, {F_{n}^{L}}\).

The neural representation of an elementary predication F(a) can now be described as follows: If a collection of sensory neurons, which indicate the same property of an object in their receptive fields and to which a set of phases is assigned, thereby shows a certain phase of activity, one can say that the synchronous phase $\varphi_i$ of one of these neurons is an element of the set of phases \({F_{j}^{N}}\). To refer to this neural state, the relation of pertaining ε is defined:

$$ [\varphi_{i} \, \varepsilon \, F_{j}]^{N} \;\text{is the neuronal state}\; [(\exists x)(x \approx \varphi_{i} \; \& \; x \in F_{j})]^{N}. $$
(7)

This is isomorphic to linking an indexical expression via the element relation to a predicate:

$$ [\varphi_{i} \, \varepsilon \, F_{j}]^{L} \;\text{is the clause}\; [(\exists x)(x \approx \varphi_{i} \; \& \; x \in F_{j})]^{L}. $$
(8)

The process of predication can only take place if both the phase of activity and the collection of neurons which indicate a property (and to which a set of phases is assigned) are tokened. This is the case because the phase cannot pertain to the collection unless both the phase and the collection occur in the cortex. Thus, the required causal constituent relation between the primitive terms and the complex term is guaranteed, such that oscillatory networks are not only syntactically, but also semantically compositional.
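A toy sketch may make the pertaining relation ε of Eq. (7) concrete; it assumes, purely for illustration, that phases are represented as real numbers and sets of phases as finite Python sets:

```python
# Toy sketch of Eq. (7); phases as floats and phase sets as finite sets
# are illustrative assumptions, not part of the original formalism.

def synchronous(phi, psi, tol=1e-3):
    # the single fundamental operation "being synchronous with" (≈)
    return abs(phi - psi) < tol

def pertains(phi_i, F_j, tol=1e-3):
    # Eq. (7): phi_i ε F_j iff some phase x in F_j is synchronous with phi_i
    return any(synchronous(x, phi_i, tol) for x in F_j)

F_red = {0.25, 0.75}           # phases of neurons indicating "red"
phi_i = 0.25                   # phase of activity referring to an object
print(pertains(phi_i, F_red))  # True: the predication F(a) is tokened
```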

The Oscillatory Networks model with its neural structures in the form of synchronous oscillations has been refined by Werning and Maye (2007; 2005a; 2012a) into a so-called “(neuro-)emulative semantics,” which is a neurobiologically plausible, compositional semantics for a monadic first-order predicate language. It is also a non-symbolic semantics because it violates the principle of semantic constituency (see Glossary). Thus, the feature binding in low-level cognition is modeled by means of an integrative, dynamical synchronization mechanism in the form of dominating oscillation functions, so-called “eigenmodes” (see Glossary), in the scope of Hilbert space analysis with a high degree of neurobiological plausibility: The computer-simulation model of oscillatory networks represents a dynamical snapshot of a particular visual scene in short-term memory constituting two different, simultaneously given perceptual objects, for example a vertical red bar and a horizontal green bar. The meaning of a sentence which has reference to a situation in the world is a set of eigenmodes. Each eigenmode, in the form of an eigenvector which does not interfere with other eigenmodes, describes the phase-synchronous oscillation of a subset of oscillators between different feature layers, for example the layers for “red” and “vertical,” which guarantees the internal representation of a perceptual object. In other words, the dynamics of this neuroarchitecture is governed by a few dominating oscillation functions which reliably co-vary with perceptual objects, because the oscillators of different feature layers that synchronize in phase represent properties of the same object. Thus, this synchronization process “binds” the features “together.”

Review: Modeling integrative synchronization mechanisms in high-level cognition

In what follows, several cognitive neuroarchitectures are presented which solve the binding problem in language processing (“high-level cognition”), that is, the problem of how semantic concepts and syntactic roles can be dynamically bound together or integrated into complex cognitive representations like symbol structures and propositions by means of a synchronization mechanism (“variable binding”).

Integrated Connectionist/Symbolic (ICS) Cognitive Architecture

In the so-called “Integrated Connectionist/Symbolic (ICS) Cognitive Architecture,” Smolensky (2006a; 2006b) aims at the integration of symbolic and connectionist forms of mental representations. As for complex representations, the formal structure is attained as follows: If one combines several symbols into an unstructured collection of elements, this constituent combination takes place through pattern superposition using the vector sum operation of activation vectors (“superposition principle”). However, if one combines several symbols into genuinely complex symbol structures, one has to consider the different syntactic position or structural role that a symbol token can occupy in the overall structure. Thus, such a symbolic structure is realized by a connectionist activation vector, the so-called “tensor product representation” (Petitot 1994; Smolensky 1990; Smolensky and Legendre 2006a), which codes the syntactic position of a connectionist constituent in its overall structure in such a manner that this position corresponds to the syntactic position of a symbolic constituent in a binary parse tree, whereby recursive connectionist structures can be built and, as Smolensky argues, systematicity can be guaranteed.

Such a symbolic structure s is defined by a set of structural roles $\{r_i\}$ as variables, which, for each single instance of the structure, may be individually occupied by single fillers $\{f_i\}$ as values, which therefore individuate the structure. Thus, the symbolic structure s consists in a set of symbolic constituents, each of which corresponds to a filler/role binding $f_i/r_i$. Such a filler/role binding $f/r$ is realized by a binding vector $\mathbf{b}_{f/r}$, consisting of the tensor product of the filler vector $\mathbf{f}$, which realizes a filler f, and the role vector $\mathbf{r}$, which realizes a role r, so that $\mathbf{b}_{f/r} = \mathbf{f} \otimes \mathbf{r}$. Thus, the connectionist realization of a symbolic structure s corresponds to an activation vector

$$ \mathbf{s} = \sum_{i} \mathbf{f}_{\mathbf{i}} \otimes \mathbf{r}_{\mathbf{i}} $$
(9)

consisting of the vector sum of the binding vectors, insofar as one identifies the structure s with the conjunction of the filler/role bindings $f_i/r_i$. Smolensky offers an example: The proposition p “Sandy (S) loves (L) Kim (K),” following the LISP convention, can be described by the symbolic representation p=[L,[S,K]], which is mirrored in the following connectionist composite vector

$$ \mathbf{p} = \mathbf{r}_{\mathbf{0}} \otimes \mathbf{L} + \mathbf{r}_{\mathbf{1}} \otimes [\mathbf{r}_{\mathbf{0}} \otimes \mathbf{S} + \mathbf{r}_{\mathbf{1}} \otimes \mathbf{K}] $$
(10)

where the two linearly independent role vectors $\mathbf{r}_0$ and $\mathbf{r}_1$ denote the left and right branches of a binary-branching tree.
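A minimal sketch of Eqs. (9)–(10) with 2-dimensional vectors chosen only for illustration; since bindings of different embedding depth live in tensor spaces of different rank, each depth is kept here as its own component of a direct sum, and np.kron computes the flattened tensor product:

```python
import numpy as np

# Tensor product binding (Eqs. 9-10): p = r0 ⊗ L + r1 ⊗ (r0 ⊗ S + r1 ⊗ K).
# All vectors are illustrative 2-dimensional examples.

r0, r1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # role vectors (left/right branch)
L, S, K = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])  # fillers

inner = np.kron(r0, S) + np.kron(r1, K)   # r0 ⊗ S + r1 ⊗ K, i.e. [S, K]
p = (np.kron(r0, L),                      # depth-1 part: r0 ⊗ L
     np.kron(r1, inner))                  # depth-2 part: r1 ⊗ [r0 ⊗ S + r1 ⊗ K]

# Because r0 and r1 are orthonormal, unbinding with a role vector
# recovers the corresponding filler exactly, e.g. the predicate L:
L_rec = p[0].reshape(2, 2).T @ r0         # equals L
```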

Smolensky (2006c) proposes temporal synchrony as a method to perform the dynamical binding mechanism used in the tensor product representation by analogy to the binding theory developed by von der Malsburg and Singer in cognitive neuroscience, which is based on synchronized oscillatory neural assemblies.

Holographic Reduced Representations (HRRs)

Following Smolensky’s “tensor product representation,” the so-called “Holographic Reduced Representations (HRRs)” proposed by Tony Plate (2003a) are also built from a filler/role decomposition of a set of recursive, compositional symbol structures. However, unlike the former, the latter are based on the bilinear, associative operation of circular convolution rather than the tensor product operator. This operation avoids a shortcoming of the tensor product representation (there, the binding vector has $n^2$ elements because each role and filler vector being bound has n elements): the binding vector resulting from two vectors of dimensionality or “rank” n has the same dimensionality n, so that the length of the resulting vector representation remains constant under application to recursive structures.

Circular convolution, \(z = x \circledast y\) (in German also called “Faltung”), can be considered a compression or contraction of the tensor or outer product of two n-dimensional vectors x and y, and is defined as follows:

$$ z_{i} = \sum_{k=0}^{n-1} x_{k} \, y_{(i-k) \text{mod}\ n}. $$
(11)

In analogy to Smolensky’s tensor product operation, circular convolution is also used as a binding operation, that is, for building filler/role bindings for complex recursive symbol structures (“variable binding”) (Plate 2003b). In contrast to the tensor product representation, any (semantic) filler vector can be reconstructed (“convolution decoding”) because an (approximate) inverse exists (Plate 2003c).
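The following sketch illustrates binding by circular convolution (Eq. 11) and approximate unbinding via Plate’s involution \(x^{*}_{i} = x_{(-i) \bmod n}\); computing the convolution via the FFT and the chosen dimensionality are implementation conveniences for this illustration:

```python
import numpy as np

# HRR binding and approximate unbinding; HRR vectors are drawn i.i.d.
# from N(0, 1/n) so that their Euclidean length is approximately 1.

n = 512
rng = np.random.default_rng(2)
filler = rng.normal(0.0, 1.0 / np.sqrt(n), n)
role = rng.normal(0.0, 1.0 / np.sqrt(n), n)

def cconv(x, y):
    # Eq. (11): z_i = sum_k x_k * y_((i-k) mod n), computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def involution(x):
    # approximate inverse x*: x*_i = x_((-i) mod n)
    return np.concatenate(([x[0]], x[:0:-1]))

bound = cconv(role, filler)               # binding keeps dimensionality n
decoded = cconv(involution(role), bound)  # "convolution decoding"
print(np.dot(decoded, filler))            # close to 1: filler recovered
```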

Neural Engineering Framework (NEF)

This circular convolution operation is implemented in the so-called “Neural Engineering Framework (NEF)” (Eliasmith 2013; Eliasmith and Anderson 2003a; Stewart and Eliasmith 2012a), developed by Charles H. Anderson, Chris Eliasmith and Terrence C. Stewart, a neurobiologically plausible neuroarchitecture which offers a non-symbolic (because “the representations of the constituents of a structure are not present in the representation of the structure itself” (Stewart and Eliasmith 2012b)), neurally inspired theory of semantic compositionality (“variable binding”) (Stewart and Eliasmith 2012c). The NEF architecture can be described by means of three principles according to the Neural Engineering approach, namely (Eliasmith 2003; Eliasmith and Anderson 2003b): (1) a neural representation, referring to the behaviour of a neural population over a specific period of time, is defined by “the combination of nonlinear encoding” and optimally “weighted linear decoding,” (2) a transformation of a neural representation consists in the “function of a variable represented by neural populations” and is determined “using an alternately weighted linear decoding,” and (3) the neural dynamics of a neurobiological system “can be analyzed using control theory.”

To attain a neurobiologically plausible model consisting of representations with a compositional semantics, one has to calculate the optimal synaptic connection weights between neural groups in such a manner that a desired transformation function f(x) is realized (Stewart and Eliasmith 2012d). In this case, the function of a circular convolution with two variables is defined, so that the activation values $a_i(x)$ and $a_j(x)$ from these two neural groups can be bound together in the form of a filler/role binding in a compositional HRRs representation according to a so-called “optimal linear function decoder” (Eliasmith and Anderson 2003c; Stewart and Eliasmith 2012e):

$$ \hat{f}(x(t)) = \sum\limits_{i} a_{i}(x(t)) {\phi_{i}^{f}} \; \text{with} \; {\phi_{i}^{f}} = \Gamma^{-1} \Upsilon $$
(12)
$$ \text{whereby} \; \Gamma_{ij} = \int a_{i}(x) a_{j}(x) dx \; \text{and} \; \Upsilon_{j} = \int a_{j}(x) f(x) dx $$
(13)
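A compact numerical sketch of Eqs. (12)–(13) may illustrate the decoding step; the rectified-line tuning curves and the sampled approximation of the integrals are simplifying assumptions made here for illustration, not the NEF’s leaky integrate-and-fire neurons:

```python
import numpy as np

# Optimal linear decoding of a function f(x) from population tuning
# curves a_i(x), per Eqs. (12)-(13); the integrals are approximated by
# sums over sampled values of x.

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)                 # sampled represented values
gains = rng.uniform(0.5, 2.0, 50)
biases = rng.uniform(-1.0, 1.0, 50)
enc = rng.choice([-1.0, 1.0], 50)           # encoders (preferred directions)
A = np.maximum(0.0, gains[:, None] * enc[:, None] * x + biases[:, None])  # a_i(x)

f = x ** 2                                  # desired transformation f(x)
Gamma = A @ A.T                             # Gamma_ij = ∫ a_i(x) a_j(x) dx
Upsilon = A @ f                             # Upsilon_j = ∫ a_j(x) f(x) dx
phi = np.linalg.lstsq(Gamma, Upsilon, rcond=None)[0]  # decoders = Gamma^-1 Upsilon
f_hat = A.T @ phi                           # estimate: sum_i a_i(x) phi_i
```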

According to Werning’s (2012b) “definition of formal compositionality,” a semantic compositum in the neuroarchitectures of the so-called “Vector Symbolic Architectures (VSA)” (Levy and Gayler 2008), for example the ICS, HRRs and the NEF neuroarchitecture, is “a homomorphic image of the syntactic structure of the language,” so that the modern principle of semantic compositionality (Hodges 2001; Werning 2012c) is fulfilled. The HRRs model and, implementing the circular convolution operation, the NEF model provide not only a compositional but also a symbolic semantics which satisfies Werning’s “principle of semantic constituency” (Werning 2012d), because of an “algorithm of unbinding” (Werning 2012e) which approximately identifies the part-whole relation of a complex representation. These neuroarchitectures use the concept of circular convolution as an integrative binding mechanism such that, by means of the operation of unbinding, the bound semantic filler vector is recoverable - at least approximately. Thus, the semantic part-whole relation is preserved in the scope of a vector-based constituent structure and remains in this sense “present.” Further, the NEF architecture is, to a high degree, neurobiologically plausible compared to available evidence in cognitive neuroscience, because it shows robustness towards an increased loss of neural resources (“graceful degradation”) and because its accuracy increases with growing neural resources but decreases if structure complexity rises (Stewart and Eliasmith 2012f). Finally, this architecture can be viewed - from the perspective of philosophy of science - as an empirically well-tested theory using a wide range of measurable neuroscientific variables.

Conclusions

Based on the mathematics of nonlinear Dynamical System Theory (DST) (see Glossary) including the Paradigm of Self-Organization, the above-discussed computational cognitive neuroarchitectures in modern connectionism are characterized by a high degree of neurobiological plausibility and describe neurocognition1 as an inherently dynamical process in the sense of van Gelder and Port (van Gelder and Port 1995). Thus, at its heart, cognition can be analyzed by convergent “fluid”2 and “transient”3 neurodynamics in abstract “n-dimensional system phase spaces” in the form of nonlinear vector fields, vector streams or vector flows.

The models presented here, based on the integrative mechanism of temporal synchrony, contribute new insights in the service of an integrated theory of cognition, and point toward a modern neurophilosophy and cognitive science as a “Unified Science of the Mind/Brain” (Churchland 2002; 2007): The Mind/Brain may be considered as one and the same nonlinear, complex dynamical system, in which information processing can be described with vector and tensor transformations and with attractors in multidimensional state spaces. This processual or dynamical perspective on cognition, including the dynamical binding mechanisms described above, has the advantage of more accurately modeling the fluid cognitive processes and the plasticity of the neural architecture (Smolensky 1988). As a result, neurocognition can be considered as an organization of integrative system mechanisms which best explain the liquid flow of neurocognitive information orchestrated in a recurrent network of positive and/or negative feedback loops in the subcortical and cortical areas, based on the so-called “vectorial form”4.

The nature of this vectorial form of neurocognitive information can best be modeled by self-excited, self-amplifying and self-sustained waveforms superimposing each other in fluid multiple-coupled feedback cycles (Abeles 1991; Bienenstock 1995; Freeman 1987, 2000a, b; Kilpatrick 2015; Kohonen 2001b; Sandstede 2007; Troy 2008a,b; Werning 2012a). Thus, the neural information storage and retrieval in long-term memory, for example, can be understood by means of computational adaptive resonance mechanisms in the dominant waveforms, or “modes” (Grossberg and Somers 1991), and by warming up and annealing of oscillation modes by streams of informational processes in the context of computational “energy functions,” as in “Harmony Theory” (Smolensky and Legendre 2006a).

Thus, the “Binding Problem” in cognitive neuroscience, that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations (“feature binding”) to the most complex cognitive representations like symbol structures (“variable binding”), appears to be solved by the temporal synchronization of neuronal phase activity based on dynamical self-organizing processes in the neuronal networks.

The human neurocognitive system can be regarded as a nonlinear, dynamical and open nonequilibrium system (Glansdorff and Prigogine 1971; Nicolis and Prigogine 1977; Schrödinger 1944/2012; von Bertalanffy 1950; 1953), which can be described in the scope of a nonequilibrium neurodynamics: in a continuous flow of information processing (“online and realtime computation” (Maass et al. 2002)), the system filters system-relative and system-relevant information in its environment, which contains a high degree of order, and does so in a manner that integrates new information optimally into the informational structures constructed up to that time (“Free-Energy Principle” (Friston 2010; Friston and Stephan 2007; Sporns 2011)). Thus, an internal neurocognitive concept consists of a dynamical process which filters out statistical prototypes from the sensory information in terms of coherent and adaptive n-dimensional vector fields. These prototypes serve as a basis for dynamic, probabilistic predictions or probabilistic hypotheses on prospective, new data (see the recently introduced approach of “predictive coding” in neurophilosophy (Clark 2013; Hohwy 2013)).

This new fluid perspective in cognitive science and cognitive neuroscience implies that researchers in the philosophy of science make use of a mechanistic-systemic method (Bechtel 2008; Chemero and Silberstein 2008; Craver 2007; Kaplan and Craver 2011; Piccinini and Craver 2011): the temporal process mechanisms, structured in the sense of nonlinear Dynamical System Theory, would be a general dynamical scheme or model which describes and explains a particular global system phenomenon on multiple system levels, both under an analytical-mechanistic perspective in the form of a nonlinear nonequilibrium neurodynamics based on informational system components, and under a synthetic-holonomic perspective in the form of nonlinear differential equation systems with global order parameters (Haken 2004).

The mathematics of nonlinear Dynamical System Theory (DST), including attractor dynamics (see Glossary) and the paradigm of self-organization, can be regarded as a contribution towards building a deeper understanding of what neural or cognitive information really is. Furthermore, these new tools shed light on how the flow of informational elements is integrated into complex systematic structures, from the most basic perceptual representations, generated by cell assemblies, to the most complex cognitive representations, like symbol structures, generated by tensor products or oscillatory representations.

Thus, these integrative mechanisms can be regarded as a contribution towards bridging the gap between the discrete, abstract symbolic description of propositions in the mind, and their continuous, numerical implementation in neural networks in the brain. This may be regarded as a step toward an embodied, fully integrated theory of cognition.

Outlook on future research

Symbolic versus sub-symbolic components in (hybrid) integrative cognitive neuroarchitectures

Bridging the gap between different levels of description, explanation, representation, and computation in symbolic and sub-symbolic paradigms of neurocognitive systems modelling is one of the unsolved core issues in the cognitive sciences: the mode of (hybrid) integration between connectionist-systemtheoretical subsymbolic and logic-linguistic symbolic approaches. To what extent can new developments in the modelling and analysis of recurrent, self-organized ANN architectures, e.g. wave field theories of neural information processing based on travelling waves and described by nonlinear oscillation functions (for first approaches, see Coombes 2005; Kilpatrick 2015; Rougier and Detorakis 2013; Troy 2008a,b; Sandstede 2007), bring together models from discrete and continuous mathematics operating both on the basis of low-level processing of perceptual information and by performing high-level reasoning and symbol processing?

Neurocognitive integration based on self-organized, cyclical (phase) synchronization mechanisms

A key factor for the analysis of integrating information in neurocognition is the self-organized phase synchronization mechanism, as in the “binding-by-synchrony hypothesis” (Singer 1999b; 2009; 2013a,b). This raises the question to what extent the dynamic mode of combinations of synchronous process mechanisms, via cascading spreading activations in upwards, downwards and sidewards feedback loops in terms of multiple cyclic graph structures, can be used as a decisive criterion for a new concept of self-organization and emergence in neurocognition (see e.g. the “micro-macro link (MML) problem” (Auyang 1998; Fromm 2005); “small-world networks,” “connectome” (Sporns 2011)). In the theory of dissipative self-organization, one fundamental principle of pattern formation in neurobiological dynamical systems is the synergy of short-range, autocatalytic activation (excitation) through positive feedback cycles and long-range, cross-catalytic (lateral) inhibition through negative feedback cycles. According to this principle, which organizational structure of advanced cognitive neuroarchitectures would be preferable for an improved (abstract) pattern recognition, for example, the mixture of an oscillatory network with a Kohonen map?

Fluid or liquid perspective in modeling cognitive neuroarchitectures

A special feature of the cognitive binding mechanisms in human-level intelligence is their fluid and transient character. What contribution can newer dynamic algorithmic methods (“liquid computing”: (Maass 2007; Maass et al. 2002); “reservoir computing”: (Jaeger 2002/2013; Lukoševičius and Jaeger 2009; Lukoševičius et al. 2012); “deep (machine) learning”: (Schmidhuber 2014)), or combinations of models from connectionism and Dynamical System Theory ((Spencer et al. 2009); Dynamic Field Theory (DFT): (Lipinsky et al. 2012)), make to the analysis and modeling of recurrent cognitive neuroarchitectures?

Abstract neurocognitive systems incorporating embodied human-level cognition and intelligence

A further rapprochement between dynamical system theory and (embodied) connectionism is taking place in “Evolutionary Robotics” and “Developmental Robotics” (Cangelosi and Schlesinger 2015; Rempis et al. 2013; Schlesinger 2009). This raises the question to what extent the progress that robotics researchers have made toward a hybrid embodied approach can contribute new insights to solving the binding problem in embodied cognitive science (Barsalou 1999; Franklin 2013), with special consideration of the neurocognitive integrative mechanisms discussed above, implemented in robots and androids acting as agents in complex (developmental and social) situations.

Glossary

Attractor dynamics: describes convergent system processes toward relatively invariant, stable system states, the so-called attractors, with a corresponding attractor basin that corresponds, geometrically interpreted, to a region in phase space toward which neighboring trajectories asymptotically head from a wide variety of starting points in a given environment; in other words, it is a region that attracts these trajectories. In formal notation: Given an n-fold iteration of a transformation function $f^n$ with $f^n(x_1) = x_{n+1}$, \(n \in \mathbb {Z^{+}}\) and $x \in X$, one calls a compact, invariant and attractive set $A \subseteq X$ an attractor if there exists a (fundamental) neighborhood U of A such that:

$$ {\lim}_{n \rightarrow \infty} d(f^{n} (x), A) = 0 ~ \forall x \in U,~ U ~ \text{neighborhood~of}~ A \subseteq X $$
(14)

with the two properties

$$ \begin{array}{ll} (1)& \bigcap_{n \geq 0} f^{n} (U) = A \\ (2) & f(\bar{U}) \subseteq U~ \text{with}~ \bar{U}: \text{closure~of}~ U. \end{array} $$
(15)

(See for details (Devaney 1994))

(nonlinear) Dynamical System Theory (DST): is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing a system of (nonlinear) differential equations:

$$\begin{array}{lll} {x_{1}'(t)} &=& {f_{1}(x_{1}(t), \ldots, x_{d}(t))} \\ &\vdots& \\ {x_{d}'(t)} &=& {f_{d}(x_{1}(t), \ldots, x_{d}(t))}. \end{array} $$

The formal definition of a dynamical system with a large number n of elements consists of (1) an abstract, d-dimensional phase space or state space X, whose d system variables $x_1(t), \ldots, x_d(t)$, as vector coordinates, fully specify the system state x(t) in its course over time t, and (2) a dynamic transformation function f, which determines the changes of all state variables and thereby the system state over time.

(See for details (Hall and Fagen 1968))
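A minimal numerical sketch of such a system of nonlinear differential equations; the damped pendulum and the simple Euler integration scheme are illustrative choices, not tied to any model discussed in the text:

```python
import numpy as np

# x'(t) = f(x(t)) for a damped pendulum, integrated with an Euler step;
# the trajectory spirals toward the fixed-point attractor at (0, 0).

def f(x):
    # state x = (angle, angular velocity)
    return np.array([x[1], -0.2 * x[1] - np.sin(x[0])])

x = np.array([2.5, 0.0])   # initial state in the 2-dimensional phase space
dt = 0.01
for _ in range(5000):
    x = x + dt * f(x)
print(x)                   # close to the attractor (0, 0)
```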

Eigenmode: According to the basics of Synergetics (Haken 2004), the dynamics of a complex oscillating system, e.g. an oscillatory ANN, is often governed by a few stable, dominating oscillations, the so-called “eigenmodes” or “principal modes” of the system, and can therefore be described by a small set of corresponding order parameters. The corresponding eigenvalues designate how much of the variance is accounted for by an eigenmode. The explanatory power of the eigenmodes relies on the simultaneous analysis of a large number of neurons or neuronal populations: “Another way of describing oscillatory network activity by superposition of eigenstates is to determine the principal components of the activity based on a numerical simulation of the network. This is possible for arbitrary stimuli. Computationally, the principal components are eigenvectors of the covariance matrix C:

$$\mathbf{D} = \left[ \begin{array}{clcl} x_{1}(t_{1}) & x_{1}(t_{2}) & \dots & x_{1}(t_{m}) \\ x_{2}(t_{1}) & x_{2}(t_{2}) & \dots & x_{2}(t_{m}) \\ \vdots & \vdots & \ddots & \vdots \\ x_{n}(t_{1}) & x_{n}(t_{2}) & \dots & x_{n}(t_{m}) \end{array} \right] $$
$$\begin{array}{@{}rcl@{}} \mathbf{C} &=& \mathbf{D}\mathbf{D}^{T}\\ \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{-1} &=& \mathbf{C} \end{array} $$

Matrix D contains the activity of oscillators at equidistant time points. V is the matrix of eigenvectors and the diagonal matrix Λ contains the corresponding eigenvalues. The eigenmodes constitute an orthonormal coordinate system in which the variance of the network activity in each direction is determined by the magnitude of the respective eigenvalues. The network activity can be described by a superposition of the eigenmodes $\mathbf{v}_i$ with time-dependent weights $c_i(t)$:

$$ \mathbf{x} (t) = \sum_{i} {c_{i}(t)} \mathbf{v_{i}} $$
(16)

The weights $c_i(t)$ are determined by projecting the network activity on the respective eigenmode i:

$$ c_{i}(t) = \mathbf{x} (t)^{T} \mathbf{v_{i}}. $$
(17)

We will call the weights $c_i(t)$ characteristic functions because they correspond to distinct interpretations of the stimulus.

If the functions $c_i(t)$ have a sinusoidal time course, they can be expressed by \({k}_{i} \mathrm{e}^{\lambda_{i} t + \Phi_{i}}\). Here, $k_i$ is the amplitude of the oscillation and the imaginary part of the complex eigenvalue $\lambda_i$ is its frequency. The network activity can then be written as

$$ \mathbf{x} (t) = \sum_{i} {k}_{i} \mathbf{v_{i}} \mathrm{e}^{\lambda_{i} t + \Phi_{i}}, $$
(18)

(…).”

(See for details (Maye and Werning 2007; Werning 2005b))
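A brief numerical sketch of the eigenmode analysis quoted above (Eqs. 16–17); the simulated data, eight oscillators sharing one 40 Hz oscillation plus independent noise, is an illustrative assumption:

```python
import numpy as np

# Eigenmodes as eigenvectors of the covariance matrix C = D D^T, and
# characteristic functions c_i(t) obtained by projection (Eqs. 16-17).

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
mode = np.sin(2 * np.pi * 40 * t)          # a shared 40 Hz oscillation
D = np.outer(rng.normal(size=8), mode)     # activity of 8 oscillators...
D += 0.1 * rng.normal(size=D.shape)        # ...plus independent noise

C = D @ D.T                                # covariance matrix C = D D^T
eigvals, V = np.linalg.eigh(C)             # V Lambda V^-1 = C
v1 = V[:, -1]                              # dominant eigenmode v_1
c1 = D.T @ v1                              # c_1(t) = x(t)^T v_1 (Eq. 17)
# The network activity is well approximated by the superposition
# x(t) ≈ c_1(t) v_1 (Eq. 16), and c_1(t) recovers the shared oscillation.
```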

Euclidean distance: In a 2-dimensional Euclidean space, the distance between the points $(x_1, y_1)$ and $(x_2, y_2)$ follows from the Pythagorean theorem and is defined by

$$ d(x,y) = \left\| x - y \right\| = \sqrt {(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2}}. $$
(19)


In general, it is defined by

$$ d(x_{i},x_{j}) = \left\| x_{i} - x_{j} \right\| = \sqrt {\sum\limits_{k = 1}^{m} (x_{ik} - x_{jk})^{2}}. $$
(20)

(See for details (Hair et al. 2010))

Formal compositionality: The definition of the formal compositionality of a language’s semantics - in the sense of a homomorphism between two algebraic structures, the syntactic structure 〈T,Σ T 〉 of a language and its semantic structure 〈M,Σ M 〉 - reads as follows: “(Formal Compositionality) Given a language with the syntax 〈T,Σ T 〉, a meaning function μ: T → M is called compositional just in case, for every n-ary syntactic operation σ ∈ Σ T and any sequence of terms t 1,…,t n in the domain of σ, there is a partial function m σ defined on M n such that

$$ \mu(\sigma(t_{1}, \ldots,t_{n})) = m_{\sigma}(\mu(t_{1}), \ldots, \mu(t_{n})) $$
(21)

A semantics induced by a compositional meaning function will be called a compositional semantics of the language.” (See for details (Werning 2012b))

Semantic constituency: “(…) (Semantic constituency) There is a semantic part-whole relation on the set of meanings such that for every two terms, if the one is a syntactic part of the other, then the meaning of the former is a semantic part of the meaning of the latter.” (Werning 2012c). This principle of semantic constituency describes the correspondence of two part-whole relations, whose formal, broadly conceived definition reads as follows (Werning 2012b): “(…) (Part-whole Relation) A relation defined on a set X is called a part-whole relation on X just in case, for all x, y, z ∈ X the following holds: (…)

$$ \begin{array}{rl} (i) & x \sqsubseteq x \; (reflexivity). \\ (ii) & x \sqsubseteq y \wedge y \sqsubseteq x \rightarrow x = y \; (anti-symmetry). \\ (iii) & x \sqsubseteq y \wedge y \sqsubseteq z \rightarrow x \sqsubseteq z \; {(transitivity).}^{\prime\prime} \end{array} $$
(22)

Symbolic semantics: “(…) (Symbolic Semantics) Given a language with the syntax 〈T,Σ T 〉, a syntactic part-whole relation \(\sqsubseteq _{T}\) defined thereon, and a meaning function μ: T → M, then its semantics 〈M,Σ M 〉 is symbolic if and only if there is a part-whole relation \(\sqsubseteq _{M}\) defined on M such that for all terms s, t ∈ T the following holds:

$$ s \sqsubseteq_{T} t \rightarrow \mu(s) \sqsubseteq_{M} \mu{(t).}^{\prime\prime} $$
(23)

(See for details (Werning 2012f))

Endnotes

1 The term “neurocognition” or “neurocognitive” means that cognitive neuroarchitectures are treated that take into account the recent neuroscientific empirical evidence to a large extent, in other words, that have a high degree of neurobiological plausibility.

2 “Fluid” means that the vectorial transformation processes have a flowing character with continuous, gradual transitions.

3 “Transient” means that very fast, temporary and highly volatile vectorial information processing takes place in the cognitive neuroarchitectures described here.

4 “Vectorial form” means that the computational transformation processes in the cognitive neuroarchitectures consist of (1) vectorial structures, e.g. semantic, syntactic or sensory concepts in the form of vectors or tensors, and (2) functions like vector additions, vector multiplications or tensor products.

References

  • Abeles, M. (1991). Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge: Cambridge University Press.


  • Auyang, SY. (1998). Foundations of Complex-System Theories. Cambridge: Cambridge University Press.


  • Barsalou, LW (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609.


  • Bechtel, W. (2008). Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience. London: Routledge.


  • Bechtel, W, & Abrahamsen, AA. (2002). Connectionism and the Mind: Parallel Processing, Dynamics, and Evolution in Networks, 2nd Ed. Oxford: Blackwell Publishers.


  • Bienenstock, E (1995). A model of neocortex. Network: Computation in Neural Systems, 6, 179–224.


  • Cangelosi, A, & Schlesinger, M. (2015). Developmental Robotics: From Babies to Robots. Cambridge, MA, London: The MIT Press.


  • Chemero, A, & Silberstein, M (2008). After the philosophy of mind: replacing scholasticism with science. Philosophy of Science, 75, 1–27.


  • Churchland, PS. (2002). Brain-Wise: Studies in Neurophilosophy. Cambridge, MA: MIT Press.


  • Churchland, PM. (2007). Neurophilosophy at Work. New York: Cambridge University Press.


  • Clark, A. (2001). Mindware: An Introduction to the Philosophy of Cognitive Science. Oxford: Oxford University Press.


  • Clark, A (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–204.


  • Coombes, S (2005). Waves, bumps, and patterns in neural field theories. Biological Cybernetics, 91, 93–108.


  • Craver, CF. (2007). Explaining the Brain. Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Oxford University Press.


  • Devaney, RL. (1994). An Introduction to Chaotic Dynamical Systems, 2nd Ed., (pp. 201–214). New York: Addison-Wesley.


  • Eliasmith, C (2003). Neural engineering. Unraveling the complexities of neural systems. IEEE Canadian Review, 43, 13.


  • Eliasmith, C. (2013). How to Build a Brain. A Neural Architecture for Biological Cognition. Oxford: Oxford University Press.


  • Eliasmith, C, & Anderson, CH. (2003a). Neural Engineering. Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge, MA: MIT Press.

  • Eliasmith, C, & Anderson, CH. (2003b). Neural Engineering. Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge, MA: MIT Press. 15–19, 30–40, 49–52, 230–31.

  • Eliasmith, C, & Anderson, CH. (2003c). Neural Engineering. Computation, Representation, and Dynamics in Neurobiological Systems, (p. 231). Cambridge, MA: MIT Press.

  • Engel, AK, & Singer, W (2001). Temporal binding and the neural correlates of sensory awareness. Trends in Cognitive Sciences, 5, 16–25.


  • Engel, AK, König, P, Gray, CM, Singer, W (1990). Stimulus-dependent neuronal oscillations in cat visual cortex: inter-columnar interaction as determined by cross-correlation analysis. European Journal of Neuroscience, 2, 588–606.


  • Fodor, JA, & Pylyshyn, ZW (1988a). Connectionism and cognitive architecture: a critical analysis. Cognition, 28, 4–50.


  • Fodor, JA, & Pylyshyn, ZW (1988b). Connectionism and cognitive architecture: a critical analysis. Cognition, 28(33), 12–13.


  • Franklin, S (2013). LIDA: A systems-level architecture for cognition, emotion, and learning. IEEE Transactions on Autonomous Mental Development, 6, 19–41.


  • Freeman, WJ (1987). Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biological Cybernetics, 56, 139–50.


  • Freeman, WJ. (2000a). How Brains Make up their Minds. New York: Columbia University Press.

  • Freeman, WJ. (2000b). Neurodynamics: An Exploration of Mesoscopic Brain Dynamics. London, UK: Springer.

  • Fries, P, Nikolic̀, D, Singer, W (2007). The gamma cycle. Trends in Neurosciences, 30, 309–316.


  • Friston, K (2010). The free-energy principle: a unified brain theory. Nature Reviews Neuroscience, 11, 127–138.


  • Friston, K, & Stephan, KE (2007). Free-energy and the brain. Synthese, 159, 417–58.


  • Fromm, J (2005). Ten questions about emergence. Complexity Digest, 40. doi:http://dx.doi.org/nlin.AO/0509049 arXiv, 2005/09/27.

  • Garson, J (2015). Connectionism. In: Zalta, EN (Ed.) In The Stanford Encyclopedia of Philosophy (Feb 19, 2015 Edition). http://plato.stanford.edu/entries/connectionism/.

  • Glansdorff, P, & Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability and Fluctuations. London: Wiley-Interscience.


  • Grossberg, S, & Somers, D (1991). Synchronized oscillations during cooperative feature linking in a cortical model of visual perception. Neural Network, 4, 453–66.


  • Hair, JF, Black, W, Babin, B, Anderson, R, Tatham, R. (2010). Multivariate Data Analysis. A Global Perspective, 7th Ed., (p. 523). Upper Saddle River, NJ: Pearson.


  • Haken, H. (2004). Synergetics. Introduction and Advanced Topics. Berlin, Heidelberg: Springer.


  • Hall, AD, & Fagen, RE (1968). Definition of system. In: Buckley, WR (Ed.) In Modern System Research for the Behavioral Scientist. Aldine Publishing Company, Chicago, (pp. 81–92).


  • Hardcastle, VG (1998). The binding problem. In: BECHTEL, W, & GRAHAM, G (Eds.) In A Companion to Cognitive Science. Blackwell Publisher, Malden/MA, Oxford/UK, (pp. 555–65).


  • Hardcastle, VG (1999). On being importantly necessary for consciousness. Consciousness and Cognition, 8, 152–154.


  • Haykin, SS. (2009). Neural Networks and Learning Machines, 3rd Ed. Upper Saddle River/NJ: Pearson.


  • Hebb, DO. (1949). The Organization of Behavior. A Neuropsychological Theory, (p. 62). New York: Wiley-Interscience.


  • Hodges, W (2001). Formal features of compositionality. Journal of Logic, Language and Information, 10, 7–28.


  • Hohwy, J. (2013). The Predictive Mind. Oxford: Oxford University Press.


  • Hummel, J (1999). Binding problem. In: Wilson, RA, & Keil, FC (Eds.) In The MIT Encyclopedia of the Cognitive Sciences. The MIT Press, Cambridge/MA, London, (pp. 85–86).


  • Jaeger, H (2002/2013). Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the Echo State Network Approach. GMD Report 159 - German National Research Center for Information Technology. 5th Rev. http://minds.jacobsuniversity.de/sites/default/files/uploads/papers/ESNTutorialRev.pdf.

  • Kaplan, D, & Craver, CF (2011). The explanatory force of dynamical and mathematical models in neuroscience: a mechanistic perspective. Philosophy of Science, 78, 601–627.


  • Kilpatrick, ZP (2015). Stochastic synchronization of neural activity waves. Physical Review, E, 91, 040701(R).


  • Kohonen, T (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43, 60–63.


  • Kohonen, T. (2001a). Self-organizing Maps, 3rd Ed, (pp. 105–76). Berlin: Springer.

  • Kohonen, T. (2001b). Self-organizing Maps, 3rd Ed. Berlin: Springer.

  • Levy, SD, & Gayler, R (2008). Vector symbolic architectures: a new building material for artificial general intelligence. In Proceedings of the First Conference on Artificial General Intelligence, University of Memphis: TN, (pp. 414–418).

  • Lipinsky, J, Schneegans, S, Sandamirskaya, Y, Spencer, JP, Schöner, G (2012). A neurobehavioral model of flexible spatial language behaviors. Journal for Experimental Psychology: Learning, Memory and Cognition, 38, 1490–1511.


  • Lukoševičius, M, & Jaeger, H (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3, 127–149.


  • Lukoševičius, M, Jaeger, H, Schrauwen, B (2012). Reservoir computing trends. KI – Künstliche Intelligenz, 26, 365–371.


  • Maass, W (2007). Liquid computing. In Proceedings of the Conference CiE’07: Computability in Europe 2007, Lecture Notes in Computer Science. Springer, Berlin, (pp. 507–16).


  • Maass, W, Natschläger, T, Markram, H (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14, 2531–2560.


  • Maye, A, & Werning, M (2007). Neuronal synchronization: from dynamics feature binding to compositional representations. Chaos and Complexity Letters, 2, 315–25.


  • McClelland, JL, Rumelhart, DE, Hinton, GE (1986). The appeal of parallel distributed processing. In: Rumelhart, DT, & McClelland, JL (Eds.) In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations. MIT Press, Cambridge, MA, (pp. 3–44).


  • Nicolis, G, & Prigogine, I. (1977). Self-Organization in Non-Equilibrium Systems. From Dissipative Structures to Order through Fluctuations. New York: Wiley.


  • Petitot, J (1994). Sémiotiques, 6, 214–16.

  • Piccinini, G, & Craver, CF (2011). Integrating psychology and neuroscience: functional analyses as mechanism sketches. Synthese, 183, 283–311.

  • Plate, TA. (2003a). Holographic Reduced Representations. Distributed Representation for Cognitive Structures. Leland Stanford Junior University, Center for the Study of Language and Information, (pp. 93–144).

  • Plate, TA. (2003b). Holographic Reduced Representations. Distributed Representation for Cognitive Structures. Leland Stanford Junior University, Center for the Study of Language and Information, (pp. 128–38).

  • Plate, TA. (2003c). Holographic Reduced Representations. Distributed Representation for Cognitive Structures. Leland Stanford Junior University, Center for the Study of Language and Information, (pp. 96–98, 118–19).

  • Rempis, CW, Hild, M, Pasemann, F (2013). Enhancing the neuro-controller design process for the Myon humanoid robot. Technical Report, University of Osnabrück, Germany. https://repositorium.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2013071711000.

  • Rougier, NP, & Detorakis, GI (2013). In: Yamaguchi, Y (Ed.) Advances in Cognitive Neurodynamics (III): Proceedings of the 3rd International Conference on Cognitive Neurodynamics. Springer, London, (pp. 281–88).

  • Sandstede, B (2007). Evans functions and nonlinear stability of travelling waves in neuronal network models. International Journal of Bifurcation and Chaos, 17, 2693–2704.

  • Schlesinger, M (2009). In: Spencer, JP, Thomas, MSC, McClelland, JL (Eds.) Toward a Unified Theory of Development: Connectionism and Dynamic Systems Theory Re-Considered. Oxford University Press, Oxford, (pp. 182–99).

  • Schmidhuber, J (2014). Deep learning in neural networks: an overview. Technical Report IDSIA-03-14. http://arxiv.org/abs/1404.7828.

  • Schrödinger, E. (1944/2012). What is Life? Cambridge: Cambridge University Press.

  • Singer, W (1999a). Binding by neural synchrony. In: Wilson, RA, & Keil, FC (Eds.) The MIT Encyclopedia of the Cognitive Sciences. The MIT Press, Cambridge, MA, London, (p. 82).

  • Singer, W (1999b). Neuronal synchrony: A versatile code for the definition of relations. Neuron, 24, 49–65.

  • Singer, W (2002). Synchronization, binding and expectancy. In: Arbib, MA (Ed.) The Handbook of Brain Theory and Neural Networks, 2nd Ed. The MIT Press, Cambridge, MA, London, (pp. 1136–43).

  • Singer, W (2009). Consciousness and neuronal synchronization. In: Laureys, S, & Tononi, G (Eds.) The Neurology of Consciousness: Cognitive Neuroscience and Neuropathology. Elsevier, Amsterdam, (pp. 43–52).

  • Singer, W (2013a). The neuronal correlate of consciousness: unity in time rather than space? In Neurosciences and the Human Person: New Perspectives on Human Activities, Scripta Varia, Vol. 121. Pontifical Academy of Sciences, Vatican City. www.casinapioiv.va/content/dam/accademia/pdf/sv121/sv121-singer.pdf.

  • Singer, W (2013b). Cortical dynamics revisited. Trends in Cognitive Sciences, 17, 616–626.

  • Smolensky, P (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1–23.

  • Smolensky, P (1990). Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46, 159–216.

  • Smolensky, P, & Legendre, G. (2006a). The Harmonic Mind. From Neural Computation to Optimality-Theoretic Grammar, vol. 1: Cognitive Architecture, (pp. 33, 63–121, 145–415). Cambridge, MA, London: The MIT Press.

  • Smolensky, P, & Legendre, G. (2006b). The Harmonic Mind. From Neural Computation to Optimality-Theoretic Grammar, vol. 2: Linguistic and Philosophical Implications, (pp. 503–92). Cambridge, MA, London: The MIT Press.

  • Smolensky, P, & Legendre, G. (2006c). The Harmonic Mind. From Neural Computation to Optimality-Theoretic Grammar, vol. 1: Cognitive Architecture, (pp. 249–56). Cambridge, MA, London: The MIT Press.

  • Sougné, JP (2003). Binding problem. In: Nadel, L (Ed.) Encyclopedia of Cognitive Science, Vol. 1. Nature Publishing Group, London, New York, Tokyo, (pp. 374–82).

  • Spencer, JP, Thomas, MSC, McClelland, JL (Eds.) (2009). Toward a Unified Theory of Development: Connectionism and Dynamic Systems Theory Re-Considered. Oxford: Oxford University Press.

  • Sporns, O. (2011). Networks of the Brain. Cambridge, MA, London: The MIT Press.

  • Stewart, TC, & Eliasmith, C (2012a). Compositionality and biologically plausible models. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (pp. 610–14).

  • Stewart, TC, & Eliasmith, C (2012b). Compositionality and biologically plausible models. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (p. 615).

  • Stewart, TC, & Eliasmith, C (2012c). Compositionality and biologically plausible models. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (pp. 596–98, 615).

  • Stewart, TC, & Eliasmith, C (2012d). Compositionality and biologically plausible models. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (p. 612).

  • Stewart, TC, & Eliasmith, C (2012e). Compositionality and biologically plausible models. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (pp. 610–13).

  • Stewart, TC, & Eliasmith, C (2012f). Compositionality and biologically plausible models. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (pp. 613–14).

  • Treisman, A (1996). The binding problem. Current Opinion in Neurobiology, 6, 171–78.

  • Troy, WC (2008a). Wave phenomena in neuronal networks. In: Akhmediev, N, & Ankiewicz, A (Eds.) Dissipative Solitons. From Optics to Biology and Medicine. Springer, Berlin, (pp. 431–452).

  • Troy, WC (2008b). Traveling waves and synchrony in an excitable large-scale neuronal network with asymmetric connections. SIAM Journal on Applied Dynamical Systems, 7, 1247–1282.

  • van Gelder, T, & Port, RF (1995). It’s about time: an overview of the dynamical approach to cognition. In: Port, RF, & van Gelder, T (Eds.) Mind as Motion. Explorations in the Dynamics of Cognition. The MIT Press, Cambridge, MA, London, (pp. 1–43).

  • von Bertalanffy, L (1950). The theory of open systems in physics and biology. Science, 111, 23–29.

  • von Bertalanffy, L. (1953). Biophysik des Fließgleichgewichts. Einführung in die Physik offener Systeme und ihre Anwendung in der Biologie. Braunschweig: Verlag Friedrich Vieweg und Sohn.

  • von der Malsburg, C (1981/1994). The correlation theory of brain function. Internal Report 81–2, Department of Neurobiology, Max-Planck-Institute for Biophysical Chemistry, Göttingen. Reprinted In: Domany, E, van Hemmen, JL, Schulten, K (Eds.) Models of Neural Networks II. Temporal Aspects of Coding and Information Processing in Biological Systems, Ch. 2. Springer, New York, (pp. 95–119).

  • von der Malsburg, C (1999). The what and why of binding: the modeler’s perspective. Neuron, 24, 95–104.

  • von der Malsburg, C (2001). Binding problem, neural basis of. In: Smelser, NJ, & Baltes, PB (Eds.) International Encyclopedia of the Social and Behavioral Sciences, Vol. 15. Elsevier Science, Oxford, (pp. 1178–80).

  • Werning, M (2001). How to solve the problem of compositionality by oscillatory networks. In: Moore, JD, & Stenning, KE (Eds.) Proceedings of the Twenty-Third Annual Conference of the Cognitive Science Society. Lawrence Erlbaum Associates, London, (pp. 1094–99).

  • Werning, M (2005a). In: Werning, M, Machery, E, Schurz, G (Eds.) The Compositionality of Meaning and Content, Vol. II: Applications to Linguistics, Psychology and Neuroscience. Ontos Verlag, Frankfurt am Main, (pp. 283–312).

  • Werning, M (2005b). In: Werning, M, Machery, E, Schurz, G (Eds.) The Compositionality of Meaning and Content, Vol. II: Applications to Linguistics, Psychology and Neuroscience. Ontos Verlag, Frankfurt am Main, (pp. 291–294).

  • Werning, M (2012a). Non-symbolic compositional representation and its neuronal foundation: towards an emulative semantics. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (pp. 633–54).

  • Werning, M (2012b). Non-symbolic compositional representation and its neuronal foundation: towards an emulative semantics. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (p. 635).

  • Werning, M (2012c). Non-symbolic compositional representation and its neuronal foundation: towards an emulative semantics. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (p. 634).

  • Werning, M (2012d). Non-symbolic compositional representation and its neuronal foundation: towards an emulative semantics. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (pp. 636–37).

  • Werning, M (2012e). Non-symbolic compositional representation and its neuronal foundation: towards an emulative semantics. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (p. 637).

  • Werning, M (2012f). Non-symbolic compositional representation and its neuronal foundation: towards an emulative semantics. In: Werning, M, Hinzen, W, Machery, E (Eds.) The Oxford Handbook of Compositionality. Oxford University Press, Oxford, (p. 636).

Author information

Corresponding author

Correspondence to Harald Maurer.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Maurer, H. Integrative synchronization mechanisms in connectionist cognitive neuroarchitectures. Comput Cogn Sci 2, 3 (2016). https://doi.org/10.1186/s40469-016-0010-8


Keywords