<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.1d1 20130915//EN" "JATS-archivearticle1.dtd"><article article-type="research-article" dtd-version="1.1d1" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="nlm-ta">elife</journal-id><journal-id journal-id-type="hwp">eLife</journal-id><journal-id journal-id-type="publisher-id">eLife</journal-id><journal-title-group><journal-title>eLife</journal-title></journal-title-group><issn publication-format="electronic">2050-084X</issn><publisher><publisher-name>eLife Sciences Publications, Ltd</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">01239</article-id><article-id pub-id-type="doi">10.7554/eLife.01239</article-id><article-categories><subj-group subj-group-type="display-channel"><subject>Research article</subject></subj-group><subj-group subj-group-type="heading"><subject>Neuroscience</subject></subj-group></article-categories><title-group><article-title>A diversity of localized timescales in network activity</article-title></title-group><contrib-group><contrib contrib-type="author" id="author-6684"><name><surname>Chaudhuri</surname><given-names>Rishidev</given-names></name><xref ref-type="aff" rid="aff1"/><xref ref-type="aff" rid="aff2"/><xref ref-type="fn" rid="con1"/><xref ref-type="fn" rid="conf1"/></contrib><contrib contrib-type="author" id="author-6685"><name><surname>Bernacchia</surname><given-names>Alberto</given-names></name><xref ref-type="aff" rid="aff3"/><xref ref-type="fn" rid="con2"/><xref ref-type="fn" rid="conf1"/></contrib><contrib contrib-type="author" corresp="yes" id="author-6345"><name><surname>Wang</surname><given-names>Xiao-Jing</given-names></name><xref ref-type="aff" rid="aff2"/><xref ref-type="aff" rid="aff4"/><xref ref-type="corresp" rid="cor1">*</xref><xref ref-type="other" 
rid="par-1"/><xref ref-type="other" rid="par-2"/><xref ref-type="fn" rid="con3"/><xref ref-type="fn" rid="conf1"/></contrib><aff id="aff1"><institution content-type="dept">Department of Applied Mathematics</institution>, <institution>Yale University</institution>, <addr-line><named-content content-type="city">New Haven</named-content></addr-line>, <country>United States</country></aff><aff id="aff2"><institution content-type="dept">Department of Neurobiology</institution>, <institution>Yale University</institution>, <addr-line><named-content content-type="city">New Haven</named-content></addr-line>, <country>United States</country></aff><aff id="aff3"><institution content-type="dept">School of Engineering and Science</institution>, <institution>Jacobs University Bremen</institution>, <addr-line><named-content content-type="city">Bremen</named-content></addr-line>, <country>Germany</country></aff><aff id="aff4"><institution content-type="dept">Center for Neural Science</institution>, <institution>New York University</institution>, <addr-line><named-content content-type="city">New York</named-content></addr-line>, <country>United States</country></aff></contrib-group><contrib-group content-type="section"><contrib contrib-type="editor"><name><surname>Tsodyks</surname><given-names>Misha</given-names></name><role>Reviewing editor</role><aff><institution>Weizmann Institute of Science</institution>, <country>Israel</country></aff></contrib></contrib-group><author-notes><corresp id="cor1"><label>*</label>For correspondence: <email>xjwang@nyu.edu</email></corresp></author-notes><pub-date date-type="pub" publication-format="electronic"><day>21</day><month>01</month><year>2014</year></pub-date><pub-date pub-type="collection"><year>2014</year></pub-date><volume>3</volume><elocation-id>e01239</elocation-id><history><date date-type="received"><day>16</day><month>07</month><year>2013</year></date><date 
date-type="accepted"><day>04</day><month>12</month><year>2013</year></date></history><permissions><copyright-statement>© 2013, Chaudhuri et al</copyright-statement><copyright-year>2013</copyright-year><copyright-holder>Chaudhuri et al</copyright-holder><license xlink:href="http://creativecommons.org/licenses/by/3.0/"><license-p>This article is distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</ext-link>, which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p></license></permissions><self-uri content-type="pdf" xlink:href="elife01239.pdf"/><abstract><object-id pub-id-type="doi">10.7554/eLife.01239.001</object-id><p>Neurons show diverse timescales, so that different parts of a network respond with disparate temporal dynamics. Such diversity is observed both when comparing timescales across brain areas and among cells within local populations; the underlying circuit mechanism remains unknown. We examine conditions under which spatially local connectivity can produce such diverse temporal behavior.</p><p>In a linear network, timescales are segregated if the eigenvectors of the connectivity matrix are localized to different parts of the network. We develop a framework to predict the shapes of localized eigenvectors. Notably, local connectivity alone is insufficient for separate timescales. However, localization of timescales can be realized by heterogeneity in the connectivity profile, and we demonstrate two classes of network architecture that allow such localization. 
Our results suggest a framework to relate structural heterogeneity to functional diversity and, beyond neural dynamics, are generally applicable to the relationship between structure and dynamics in biological networks.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.001">http://dx.doi.org/10.7554/eLife.01239.001</ext-link></p></abstract><abstract abstract-type="executive-summary"><object-id pub-id-type="doi">10.7554/eLife.01239.002</object-id><title>eLife digest</title><p>Many biological systems can be thought of as networks in which a large number of elements, called ‘nodes’, are connected to each other. The brain, for example, is a network of interconnected neurons, and the changing activity patterns of this network underlie our experience of the world around us. Within the brain, different parts can process information at different speeds: sensory areas of the brain respond rapidly to the current environment, while the cognitive areas of the brain, involved in complex thought processes, are able to gather information over longer periods of time. However, it has been largely unknown what properties of a network allow different regions to process information over different timescales, and how variations in structural properties translate into differences in the timescales over which parts of a network can operate.</p><p>Now Chaudhuri et al. have addressed these issues using a simple but ubiquitous class of networks called linear networks. The activity of a linear network can be broken down into simpler patterns called eigenvectors that can be combined to predict the responses of the whole network. If these eigenvectors ‘map’ to different parts of the network, this could explain how distinct regions process information on different timescales.</p><p>Chaudhuri et al. 
developed a mathematical theory to predict what properties would cause such eigenvectors to be separated from each other and applied it to networks with architectures that resemble the wiring of the brain. This revealed that gradients in the connectivity across the network, such that nodes share more properties with neighboring nodes than distant nodes, combined with random differences in the strength of inter-node connections, are general motifs that give rise to such separated activity patterns. Intriguingly, such gradients and randomness are both common features of biological systems.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.002">http://dx.doi.org/10.7554/eLife.01239.002</ext-link></p></abstract><kwd-group kwd-group-type="author-keywords"><title>Author keywords</title><kwd>timescales</kwd><kwd>network dynamics</kwd><kwd>neural networks</kwd></kwd-group><kwd-group kwd-group-type="research-organism"><title>Research organism</title><kwd>None</kwd></kwd-group><funding-group><award-group id="par-1"><funding-source><institution-wrap><institution>Office of Naval Research</institution></institution-wrap></funding-source><award-id>N00014-13-1-0297</award-id><principal-award-recipient><name><surname>Wang</surname><given-names>Xiao-Jing</given-names></name></principal-award-recipient></award-group><award-group id="par-2"><funding-source><institution-wrap><institution>John Simon Guggenheim Memorial Foundation Fellowship</institution></institution-wrap></funding-source><principal-award-recipient><name><surname>Wang</surname><given-names>Xiao-Jing</given-names></name></principal-award-recipient></award-group><funding-statement>The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.</funding-statement></funding-group><custom-meta-group><custom-meta><meta-name>elife-xml-version</meta-name><meta-value>2</meta-value></custom-meta><custom-meta 
specific-use="meta-only"><meta-name>Author impact statement</meta-name><meta-value>Specific types of heterogeneity in the connectivity profile of a biological network give rise to dynamics with a hierarchy of localized temporal scales.</meta-value></custom-meta></custom-meta-group></article-meta></front><body><sec id="s1" sec-type="intro"><title>Introduction</title><p>A major challenge in the study of neural circuits, and complex networks more generally, is understanding the relationship between network structure and patterns of activity or possible functions this structure can subserve (<xref ref-type="bibr" rid="bib39">Strogatz, 2001</xref>; <xref ref-type="bibr" rid="bib27">Newman, 2003</xref>; <xref ref-type="bibr" rid="bib18">Honey et al., 2010</xref>; <xref ref-type="bibr" rid="bib37">Sporns, 2011</xref>). A number of neural networks show a diversity of time constants, namely different nodes (single neurons or local neural groups) in the network display dynamical activity that changes on different timescales. For instance, in the mammalian brain, long integrative timescales of neurons in the frontal cortex (<xref ref-type="bibr" rid="bib33">Romo et al., 1999</xref>; <xref ref-type="bibr" rid="bib45">Wang, 2001</xref>; <xref ref-type="bibr" rid="bib47">Wang, 2010</xref>) are in striking contrast with rapid transient responses of neurons in a primary sensory area (<xref ref-type="bibr" rid="bib6">Benucci et al., 2009</xref>). Furthermore, even within a local circuit, a diversity of timescales may coexist across a heterogeneous neural population. Notable recent examples include the timescales of reward integration in the macaque cortex (<xref ref-type="bibr" rid="bib7">Bernacchia et al., 2011</xref>), and the decay of neural firing rates in the zebrafish (<xref ref-type="bibr" rid="bib25">Miri et al., 2011</xref>) and macaque oculomotor integrators (<xref ref-type="bibr" rid="bib19">Joshua et al., 2013</xref>). 
While several models have been proposed, general structural principles that enable a network to show a diversity of timescales are lacking.</p><p>Studies of the cortex have revealed that neural connectivity decays rapidly with distance (<xref ref-type="bibr" rid="bib17">Holmgren et al., 2003</xref>; <xref ref-type="bibr" rid="bib24">Markov et al., 2011</xref>; <xref ref-type="bibr" rid="bib30">Perin et al., 2011</xref>; <xref ref-type="bibr" rid="bib22">Levy and Reyes, 2012</xref>; <xref ref-type="bibr" rid="bib23">Markov et al., 2014</xref>; <xref ref-type="bibr" rid="bib12">Ercsey-Ravasz et al., 2013</xref>) as does the magnitude of correlations in neural activity (<xref ref-type="bibr" rid="bib9">Constantinidis and Goldman-Rakic, 2002</xref>; <xref ref-type="bibr" rid="bib36">Smith and Kohn, 2008</xref>; <xref ref-type="bibr" rid="bib20">Komiyama et al., 2010</xref>). This characteristic is apparent on multiple scales: in the cerebral cortex of the macaque monkey, both the number of connections between neurons in a given area and those between neurons across different brain areas decay rapidly with distance (<xref ref-type="bibr" rid="bib24">Markov et al., 2011</xref>, <xref ref-type="bibr" rid="bib23">2014</xref>). Intuitively, local connectivity may suggest that the timescales of network activity are localized, by which we mean that nodes that respond with a certain timescale are contained within a particular region of the network. Such a network would show patterns of activity with different temporal dynamics in disparate regions. Surprisingly, this is not always true and, as we show, additional conditions are required for localized structure to translate into localized temporal dynamics.</p><p>We study this structure–function relationship for linear networks of interacting nodes. 
Linear networks are used to model a variety of physical and biological networks, especially those where inter-node interactions are weighted (<xref ref-type="bibr" rid="bib28">Newman, 2010</xref>). Most dynamical systems can be linearized around a point of interest, and so linear networks generically emerge when studying the response of nonlinear networks to small perturbations (<xref ref-type="bibr" rid="bib38">Strogatz, 1994</xref>; <xref ref-type="bibr" rid="bib28">Newman, 2010</xref>). Moreover, for many neurons the dependence of firing rate on input is approximately threshold-linear over a wide range (<xref ref-type="bibr" rid="bib2">Ahmed et al., 1998</xref>; <xref ref-type="bibr" rid="bib13">Ermentrout, 1998</xref>; <xref ref-type="bibr" rid="bib44">Wang, 1998</xref>; <xref ref-type="bibr" rid="bib8">Chance et al., 2002</xref>), and linear networks are common models for the dynamics of neural circuits (<xref ref-type="bibr" rid="bib10">Dayan and Abbott, 2001</xref>; <xref ref-type="bibr" rid="bib35">Shriki et al., 2003</xref>; <xref ref-type="bibr" rid="bib43">Vogels et al., 2005</xref>; <xref ref-type="bibr" rid="bib31">Rajan and Abbott, 2006</xref>; <xref ref-type="bibr" rid="bib14">Ganguli et al., 2008</xref>; <xref ref-type="bibr" rid="bib15">Ganguli et al., 2008</xref>; <xref ref-type="bibr" rid="bib26">Murphy and Miller, 2009</xref>; <xref ref-type="bibr" rid="bib25">Miri et al., 2011</xref>).</p><p>The activity of a linear network is determined by a set of characteristic patterns, called eigenvectors (<xref ref-type="bibr" rid="bib34">Rugh, 1995</xref>). Each eigenvector specifies the relative activation of the various nodes. For example, in one eigenvector the first node could show twice as much activity as the second node and four times as much activity as the third node, and so on. The activity of the network is the weighted sum of contributions from the eigenvectors. 
The weight (or amplitude) of each eigenvector changes over time with a timescale determined by the eigenvalue corresponding to the eigenvector. The network architecture determines the eigenvectors and eigenvalues, while the input sets the amplitudes with which the various eigenvectors are activated. In <xref ref-type="fig" rid="fig1">Figure 1</xref>, we illustrate this decomposition in a simple schematic network with three eigenvectors whose amplitudes change on a fast, intermediate and slow timescale respectively.<fig id="fig1" position="float"><object-id pub-id-type="doi">10.7554/eLife.01239.003</object-id><label>Figure 1.</label><caption><title>The activity of a linear network can be decomposed into contributions from a set of eigenvectors.</title><p>On the right is shown a sample network along with the activity of two nodes (cyan and yellow). The activity of this network is the combination of a set of eigenvectors whose spatial distributions are shown in blue, green and red on the left. The nodes are colored according to the contributions of the various eigenvectors. Each eigenvector has an amplitude that varies in time with a single timescale given by the corresponding eigenvalue; here the blue, green and red eigenvectors have a fast, intermediate and slow timescale, respectively. The cyan node is primarily a combination of the blue and green eigenvectors; hence its activity is dominated by a combination of the blue and green amplitudes and it shows a fast and an intermediate timescale. 
Similarly, the yellow node has large components in the green and red eigenvectors, therefore its activity reflects the corresponding amplitudes and intermediate and slow timescales.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.003">http://dx.doi.org/10.7554/eLife.01239.003</ext-link></p></caption><graphic xlink:href="elife01239f001"/></fig></p><p>In general, the eigenvectors are poorly segregated from each other: each node participates significantly in multiple eigenvectors and each eigenvector is spread out across multiple nodes (<xref ref-type="bibr" rid="bib41">Trefethen and Embree, 2005</xref>). Consequently, timescales are not segregated, and a large number of timescales are shared across nodes. Furthermore, if the timescales have largely different values, certain eigenvectors are more persistent than others and dominate the nodes at which they are present. If these slow timescales are spread across multiple nodes, they dominate the network activity and the nodes will show very similar temporal dynamics. This further limits the diversity of network computation.</p><p>In this paper, we begin by observing that rapidly-decaying connectivity by itself is insufficient to give rise to localized eigenvectors. We then examine conditions on the network-coupling matrix that allow localized eigenvectors to emerge and build a framework to calculate their shapes. We illustrate our methods with simple examples of neural dynamics. 
Our examples are drawn from Neuroscience, but our results should be more broadly applicable for understanding network dynamics and the relationship between the structure and function of complex systems.</p></sec><sec id="s2" sec-type="results"><title>Results</title><p>We study linear neural networks endowed with a connection matrix <italic>W</italic> (<italic>j,k</italic>) (‘Methods’, <xref ref-type="disp-formula" rid="equ10">Equation 9</xref>), which denotes the weight of connection from node <italic>k</italic> to node <italic>j</italic>. For a network with <italic>N</italic> nodes, the matrix <italic>W</italic> has <italic>N</italic> eigenvectors and <italic>N</italic> corresponding eigenvalues. The time constant associated with the eigenvector <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> is <inline-formula><mml:math id="inf1"><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi mathvariant="fraktur">R</mml:mi><mml:mi mathvariant="fraktur">e</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>−</mml:mo><mml:mi>λ</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>, where <italic>λ</italic> is the corresponding eigenvalue (‘Methods’, <xref ref-type="disp-formula" rid="equ12">Equation 11</xref>). This time constant is present at all nodes where the eigenvector has non-zero magnitude. We say an eigenvector is delocalized if its components are significantly different from 0 for most nodes. In this case, the corresponding timescale is spread across the entire network. 
On the other hand, if an eigenvector is localized then <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> (<italic>j</italic>) ≈ 0 except for a restricted subset of spatially contiguous nodes, and the timescale <inline-formula><mml:math id="inf2"><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi mathvariant="fraktur">R</mml:mi><mml:mi mathvariant="fraktur">e</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>−</mml:mo><mml:mi>λ</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> is confined to a region of the network. If most or all of the eigenvectors are localized, then different nodes show separated timescales in their dynamical response to external stimulation.</p><p>Note that even if the eigenvectors are localized, a large proportion of network nodes could respond to a given input, but they would do so with disparate temporal dynamics. Conversely, even if the eigenvectors are delocalized, a given input could still drive some nodes much more strongly than others. However, the temporal dynamics of the response will be very similar at the various nodes even if the magnitudes are different.</p><p>Consider a network with nodes arranged in a ring, as shown in the top panel of <xref ref-type="fig" rid="fig2">Figure 2A</xref>. 
The connection strength between nodes decays with distance according to<disp-formula id="equ1"><mml:math id="m1"><mml:mrow><mml:mi>W</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>where <italic>l</italic><sub><italic>c</italic></sub> is set to 1 node so that the connectivity is sharply localized spatially. In <xref ref-type="fig" rid="fig2">Figure 2B</xref> we plot the absolute values and real parts of three sample eigenvectors. The behavior is typical of all eigenvectors: despite the local connectivity they are maximally delocalized and each node contributes with the same relative weight to each eigenvector (its absolute value is constant, while its real and imaginary parts oscillate across the network). As shown in <xref ref-type="fig" rid="fig2">Figure 2C</xref>, the timescales of decay are very similar across nodes.<fig id="fig2" position="float"><object-id pub-id-type="doi">10.7554/eLife.01239.004</object-id><label>Figure 2.</label><caption><title>Local connectivity is insufficient to yield localized eigenvectors.</title><p>(<bold>A</bold>) The network consists of 100 nodes, arranged in a ring. Connection strength decays exponentially with distance, with characteristic length of one node, and is sharply localized. The network topology is shown here as a schematic, with six nodes and only nearest-neighbor connections. (<bold>B</bold>) The eigenvectors are maximally delocalized. Three eigenvectors are shown, and the others are similar.
The absolute value of each eigenvector, shown with the gray dashed lines, is the same at all nodes. The real part of each eigenvector, shown in color, oscillates with a different frequency for each eigenvector. (<bold>C</bold>) Dynamical response of the network to an input pulse, shown on a logarithmic scale. All nodes show similar response timescales.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.004">http://dx.doi.org/10.7554/eLife.01239.004</ext-link></p></caption><graphic xlink:href="elife01239f002"/></fig></p><p>As known from the theory of discrete Fourier transforms, such delocalized eigenvectors are generically seen if the connectivity is translationally invariant, meaning that the connectivity profile is the same around each node (see mathematical appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Section 1, or standard references on linear algebra or solid-state physics [<xref ref-type="bibr" rid="bib4">Ashcroft and Mermin, 1976</xref>]). In this case the <italic>j</italic>th component of the eigenvector <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> is<disp-formula id="equ2"><label>(1)</label><mml:math id="m2"><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi mathvariant="bold-italic">λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>where ω/2π is the oscillation frequency (which depends on <italic>λ</italic>) and <italic>i</italic> is the imaginary unit (<italic>i</italic><sup><italic>2</italic></sup> = −1).
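Equation 1 can be checked numerically. The sketch below (editor-added illustration using NumPy and the ring parameters described above; it is not the authors' code) builds the circulant coupling matrix of the ring and verifies that the Fourier modes of Equation 1 are exact eigenvectors with equal magnitude at every node:

```python
import numpy as np

# Ring of N nodes with translation-invariant coupling W(j,k) = exp(-d(j,k)/l_c),
# where d is the circular distance. W is circulant, so the discrete Fourier
# modes v(j) = exp(i*omega*j) are exact eigenvectors with |v(j)| = 1 at every
# node, i.e., maximally delocalized, as in Figure 2B.
N, l_c = 100, 1.0
j = np.arange(N)
d = np.minimum(np.abs(j[:, None] - j), N - np.abs(j[:, None] - j))
W = np.exp(-d / l_c)

for m in (0, 3, 17):                  # sample frequencies omega = 2*pi*m/N
    omega = 2 * np.pi * m / N
    v = np.exp(1j * omega * j)        # Equation 1: v_lambda(j) = e^{i*omega*j}
    Wv = W @ v
    lam = Wv[0] / v[0]                # the corresponding eigenvalue
    assert np.allclose(Wv, lam * v)   # v is an eigenvector of W
    assert np.allclose(np.abs(v), 1)  # same magnitude at all nodes
```

The heterogeneous architectures introduced below break exactly this circulant symmetry, which is what makes localization possible.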
Thus local connectivity is insufficient to produce localized eigenvectors.</p><p>We developed a theoretical approach that enables us to test network architectures that yield localized eigenvectors. Although in general it is not possible to analytically calculate all timescales (eigenvalues) of a generic matrix, the theory allows us to predict which timescales would be localized and which would be shared. For the localized timescales, it yields a functional form for the shape of the corresponding localized eigenvectors. Finally, the theory shows how changing network parameters promotes or hinders localization. For a further discussion of these issues, see Section 2 of the mathematical appendix (<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>).</p><p>For a given local connectivity, <italic>W</italic> (<italic>j</italic>,<italic>k</italic>), we postulate the existence of an eigenvector <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> that is well localized around some position, <italic>j</italic><sub>0</sub>, defined as its center. We then solve for the detailed shape (functional form) of our putative eigenvector and test whether this shape is consistent with our prior assumption on <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub>. If so, this is a valid solution for a localized eigenvector.</p><p>Specifically, if <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> is localized around <italic>j</italic><sub><italic>0</italic></sub> then <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> (<italic>k</italic>) is small when <inline-formula><mml:math id="inf3"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> is large. 
We combine this with the requirement of local connectivity, which implies that <italic>W</italic> (<italic>j</italic>,<italic>k</italic>) is small when <inline-formula><mml:math id="inf4"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> is large, and expand <italic>W</italic> and <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> to first-order in <inline-formula><mml:math id="inf5"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="inf6"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> respectively. With this approximation, we solve for <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> across all nodes and find (‘Methods’ and mathematical appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Section 2)<disp-formula id="equ3"><label>(2)</label><mml:math id="m3"><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi 
mathvariant="bold-italic">λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>α</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p>The eigenvector is a modulated Gaussian function, centered at <italic>j</italic><sub>0</sub>. The characteristic width is <italic>α</italic>, such that a small <italic>α</italic> corresponds to a sharply localized eigenvector. Note that <italic>j</italic><sub><italic>0</italic></sub> and <italic>ω</italic> depend on the particular timescale (or eigenvalue, <italic>λ</italic>) being considered and hence, in general, <italic>α</italic><sup>2</sup> will depend on the timescale under consideration. For <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> to be localized, the real part of <italic>α</italic><sup>2</sup> must be positive when evaluated at the corresponding timescale. In this case, <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> is consistent with our prior assumption, and we accept it as a meaningful solution.</p><p>Our theory gives the dependence of the eigenvector width on network parameters and on the corresponding timescale. 
In particular, <italic>α</italic> depends inversely on the degree of local heterogeneity in the network, so that greater heterogeneity leads to more tightly localized eigenvectors (see appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Section 2). <italic>ω</italic> is a frequency term that allows <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> to oscillate across nodes, as in <xref ref-type="disp-formula" rid="equ2">Equation 1</xref>. As shown later, the method is general and a second-order expansion can be used when the first-order expansion breaks down. In that case the eigenvector shape is no longer Gaussian.</p><p>We now apply this theory to models of neural dynamics in the mammalian cerebral cortex. We use connectivity that decays exponentially with distance (<xref ref-type="bibr" rid="bib24">Markov et al., 2011</xref>, <xref ref-type="bibr" rid="bib23">2014</xref>; <xref ref-type="bibr" rid="bib12">Ercsey-Ravasz et al., 2013</xref>) but our analysis applies to other forms of local connectivity.</p><sec id="s2-1"><title>Localization in a network with a gradient of local connectivity</title><p>Our first model architecture is motivated by observations that as one progresses from sensory to prefrontal areas in the primate brain, neurons receive an increasing number of excitatory connections from their neighbors (<xref ref-type="bibr" rid="bib45">Wang, 2001</xref>; <xref ref-type="bibr" rid="bib11">Elston, 2007</xref>; <xref ref-type="bibr" rid="bib46">Wang, 2008</xref>). We model a chain of nodes (i.e., neurons, networks of neurons or cortical areas) with connectivity that decays exponentially with distance. 
In addition, we introduce a gradient of excitatory self-couplings along the chain to account for the increase in local excitation.</p><p>The network is shown in <xref ref-type="fig" rid="fig3">Figure 3A</xref> and the coupling matrix <italic>W</italic> is given by<disp-formula id="equ4"><label>(3)</label><mml:math id="m4"><mml:mrow><mml:mi>W</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mtext>Δ</mml:mtext><mml:mi>r</mml:mi></mml:msub><mml:mi>j</mml:mi></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mi>k</mml:mi><mml:mo> </mml:mo><mml:mo> </mml:mo><mml:mo>(</mml:mo><mml:mtext>self</mml:mtext><mml:mo>-</mml:mo><mml:mtext>coupling</mml:mtext><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>&gt;</mml:mo><mml:mi>k</mml:mi><mml:mo> </mml:mo><mml:mo> </mml:mo><mml:mo>(</mml:mo><mml:mtext>feedforward</mml:mtext><mml:mo> 
</mml:mo><mml:mtext>connections</mml:mtext><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>&lt;</mml:mo><mml:mi>k</mml:mi><mml:mo> </mml:mo><mml:mo> </mml:mo><mml:mo>(</mml:mo><mml:mtext>feedback</mml:mtext><mml:mo> </mml:mo><mml:mtext>connections</mml:mtext><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula><fig-group><fig id="fig3" position="float"><object-id pub-id-type="doi">10.7554/eLife.01239.005</object-id><label>Figure 3.</label><caption><title>Localized eigenvectors in a network with a gradient of local connectivity.</title><p>(<bold>A</bold>) The network is a chain of 100 nodes. Network topology is shown as a schematic with a subset of nodes and only nearest-neighbor connections. The plot above the chain shows the connectivity profile, highlighting the exponential decay and the asymmetry between feedforward and feedback connections. Self-coupling increases along the chain, as shown by the grayscale gradient. (<bold>B</bold>) Sample eigenvectors (filled circles) in a network with a weak gradient of self-coupling, so that localized and delocalized eigenvectors coexist. Localized eigenvectors are described by Gaussians, and predictions from <xref ref-type="disp-formula" rid="equ5">Equation 4</xref> are shown as solid lines. Eigenvectors are normalized by maximum value. 
The network is described by <xref ref-type="disp-formula" rid="equ4">Equation 3</xref>, with <italic>μ</italic><sub>0</sub> = −1.9, Δ<sub><italic>r</italic></sub> = 0.0015, <italic>μ</italic><sub><italic>f</italic></sub> = 0.2, <italic>μ</italic><sub><italic>b</italic></sub>= 0.1 and <italic>l</italic><sub><italic>c</italic></sub> = 4. (<bold>C</bold>) Sample eigenvectors (filled circles) along with predictions (solid lines) in a network with a strong gradient, so that all eigenvectors are localized. Network parameters are the same as <bold>B</bold>, except Δ<sub><italic>r</italic></sub> = 0.01. (<bold>D</bold>) Heat map of eigenvectors from network in (<bold>C</bold>) on logarithmic scale. Eigenvectors are along rows, arranged by increasing decay time. All are localized, and eigenvectors with longer timescales are localized further down in the chain. Edge effects cause the Gaussian shape to break down at the end of the chain, but eigenvectors are still localized at the boundary. (<bold>E</bold>) Dynamical response of the network in (<bold>C</bold>) to an input pulse. Nodes early in the chain show responses that decay away rapidly, while those further in the chain show more persistent responses.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.005">http://dx.doi.org/10.7554/eLife.01239.005</ext-link></p></caption><graphic xlink:href="elife01239f003"/></fig><fig id="fig3s1" position="float" specific-use="child-fig"><object-id pub-id-type="doi">10.7554/eLife.01239.006</object-id><label>Figure 3—figure supplement 1.</label><caption><title>Co-existence of localized and delocalized eigenvectors in a network with a weak gradient of local connectivity.</title><p>(<bold>A</bold>) Left panel: eigenvalues of the network (filled circles) along with the region of the complex plane in which <italic>α</italic><sup>2</sup> &gt; 0 (gray shaded region). Eigenvectors corresponding to eigenvalues within this region are predicted to be localized. 
(<bold>B</bold>) Eigenvectors corresponding to the colored eigenvalues in panel <bold>A</bold>. Eigenvalues within the gray region correspond to localized eigenvectors. Eigenvectors outside the gray region are progressively more delocalized. Eigenvectors are shown as solid lines for ease of visualization. (<bold>C</bold>) Heat map of eigenvectors on logarithmic scale. Eigenvectors are along rows, arranged by increasing decay time. The network is described by <xref ref-type="disp-formula" rid="equ4">Equation 3</xref> in the main text, with <italic>μ</italic><sub>0</sub> = − 1.9, Δ<sub><italic>r</italic></sub> = 0.0015, <italic>μ</italic><sub><italic>f</italic></sub> = 0.2, <italic>μ</italic><sub><italic>b</italic></sub> = 0.1, and <italic>l</italic><sub><italic>c</italic></sub> = 4.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.006">http://dx.doi.org/10.7554/eLife.01239.006</ext-link></p></caption><graphic xlink:href="elife01239fs001"/></fig></fig-group></p><p>The self-coupling includes a leakage term (<italic>μ</italic><sub>0</sub> &lt; 0) and a recurrent excitation term that increases along the chain with a slope Δ<sub><italic>r</italic></sub>. Nodes higher in the network thus have stronger self-coupling. Connection strengths have a decay length <italic>l</italic><sub><italic>c</italic></sub>. <italic>μ</italic><sub><italic>f</italic></sub> scales the overall strength of feedforward connections (i.e., connections from early to late nodes in the chain) while <italic>μ</italic><sub><italic>b</italic></sub> scales the strength of feedback connections. In general we set <italic>μ</italic><sub><italic>f</italic></sub> &gt; <italic>μ</italic><sub><italic>b</italic></sub>.</p><p>If the gradient of self-coupling (Δ<sub><italic>r</italic></sub>) is strong enough, some of the eigenvectors of the network will be localized. As the gradient becomes steeper this region of localization expands. 
Our theory predicts which eigenvectors will be localized and how this region expands as the gradient becomes steeper (<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>).</p><p>By applying the theory sketched in the previous section (and developed in detail in the appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>]), we find that the value of the eigenvector width for the localized eigenvectors (<italic>α</italic> in <xref ref-type="disp-formula" rid="equ3">Equation 2</xref>) is equal to (see Section 3 of <xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>)<disp-formula id="equ5"><label>(4)</label><mml:math id="m5"><mml:mrow><mml:msup><mml:mi>α</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:mo>−</mml:mo><mml:msub><mml:mi>μ</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msub><mml:mtext>Δ</mml:mtext><mml:mi>r</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mtext>cosh</mml:mtext><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p>This equation asserts that <italic>α</italic><sup>2</sup> is inversely proportional to the gradient of local connectivity, Δ<sub><italic>r</italic></sub>, so that a steeper gradient leads to sharper localization, and <italic>α</italic><sup>2</sup> increases with increasing connectivity decay length, <italic>l</italic><sub><italic>c</italic></sub>. 
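As an illustrative check (our sketch, not the authors' code), the prediction of Equation 4 can be compared against direct numerical diagonalization of the Equation 3 network with the Figure 3C parameters; node indices are taken 0-based here, and the participation ratio is used as a simple proxy for eigenvector width:

```python
import numpy as np

# Illustrative sketch (not the authors' code): eigenvectors of the
# Equation 3 network with the Figure 3C parameters, compared with the
# width predicted by Equation 4. Node indices are taken 0-based.
N, mu0, d_r, mu_f, mu_b, l_c = 100, -1.9, 0.01, 0.2, 0.1, 4.0
idx = np.arange(N)
j, k = np.meshgrid(idx, idx, indexing="ij")
W = np.where(j > k, mu_f * np.exp(-(j - k) / l_c), 0.0)     # feedforward
W = W + np.where(j < k, mu_b * np.exp((j - k) / l_c), 0.0)  # feedback
W = W + np.diag(mu0 + d_r * idx)                            # self-coupling gradient

alpha2 = (mu_f - mu_b) / (2 * d_r * (1 + np.cosh(1 / l_c)))  # Equation 4

_, vecs = np.linalg.eig(W)
p = np.abs(vecs) ** 2
p = p / p.sum(axis=0)
part_ratio = 1.0 / (p ** 2).sum(axis=0)  # ~N when delocalized, small when localized

print(f"predicted alpha^2 = {alpha2:.2f}")
print(f"median participation ratio = {np.median(part_ratio):.1f} of {N} nodes")
```

With these parameters Equation 4 gives α² ≈ 2.5, a width of order one node, and the participation ratios of the numerically computed eigenvectors should come out small compared to N, consistent with the localization seen in Figure 3C,D.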
Note that in this case the eigenvector width is independent of the location of the eigenvector (or the particular timescale).</p><p>In <xref ref-type="fig" rid="fig3">Figure 3B</xref>, we plot sample eigenvectors for a network with a weak gradient, where localized and delocalized eigenvectors coexist. We also plot the analytical prediction for the localized eigenvectors, which fits well with the numerical simulation results. For more details on this network see <xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>. In <xref ref-type="fig" rid="fig3">Figure 3C</xref>, we plot sample eigenvectors for a network with a strong enough gradient that all eigenvectors are localized. As shown in <xref ref-type="fig" rid="fig3">Figure 3D</xref>, all the remaining eigenvectors of this network are localized. In <xref ref-type="fig" rid="fig3">Figure 3E</xref>, we plot the decay of this network’s activity from a uniform initial condition; as predicted from the structure of the eigenvectors, decay time constants increase up the chain.</p><p>With a strong gradient of self-coupling, <xref ref-type="disp-formula" rid="equ5">Equation 4</xref> holds for all eigenvectors except those at the end of the chain, where edge effects change the shape of the eigenvectors. These eigenvectors are still localized, at the boundary, but are no longer Gaussian and appear to be better described as modulated exponentials. <xref ref-type="disp-formula" rid="equ5">Equation 4</xref> also predicts that eigenvectors become more localized as feedforward and feedback connection strengths approach each other. This is counter-intuitive, since increasing feedback strength should couple nodes more tightly. Numerically, this prediction is confirmed only when <italic>μ</italic><sub><italic>f</italic></sub> − <italic>μ</italic><sub><italic>b</italic></sub> is not close to 0. 
As seen in <xref ref-type="fig" rid="fig4">Figure 4</xref>, when <italic>μ</italic><sub><italic>f</italic></sub> − <italic>μ</italic><sub><italic>b</italic></sub> is small, the eigenvector is no longer Gaussian and instead shows multiple peaks. Strengthening the feedback connections leads to the emergence of ripples in the slower modes that modulate the activity of the earlier, faster nodes. While the first-order approximation of the shape of <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> breaks down in this regime, <xref ref-type="disp-formula" rid="equ5">Equation 4</xref> is locally valid in that the largest peak sharpens with increasing symmetry, as seen in <xref ref-type="fig" rid="fig4">Figure 4B</xref>.<fig id="fig4" position="float"><object-id pub-id-type="doi">10.7554/eLife.01239.007</object-id><label>Figure 4.</label><caption><title>Second-order expansion for partially-delocalized eigenvectors.</title><p>Same model with a gradient of local connectivity as in <xref ref-type="fig" rid="fig3">Figure 3</xref>. (<bold>A</bold>) Schematic of the predicted shape. Eigenvectors (black) are the product of an exponential (blue) and an Airy function (red). The constant in the exponential depends on the asymmetry between feedback (<italic>μ</italic><sub><italic>b</italic></sub>) and feedforward (<italic>μ</italic><sub><italic>f</italic></sub>) strengths. In the left panel, <italic>μ</italic><sub><italic>f</italic></sub> − <italic>μ</italic><sub><italic>b</italic></sub> is large and the product is well described by a Gaussian. In the right panel, <italic>μ</italic><sub><italic>f</italic></sub> − <italic>μ</italic><sub><italic>b</italic></sub> is small and the exponential is shallow enough that the product is somewhat delocalized. (<bold>B</bold>) Analytically predicted eigenvector shapes (solid lines) compared to numerical simulations (filled circles) for four values of <italic>μ</italic><sub><italic>b</italic></sub>. 
For each value of <italic>μ</italic><sub><italic>b</italic></sub> one representative eigenvector is shown. As <italic>μ</italic><sub><italic>b</italic></sub> approaches <italic>μ</italic><sub><italic>f</italic></sub>, eigenvectors start to delocalize but, as per <xref ref-type="disp-formula" rid="equ5">Equation 4</xref>, the maximum peak is sharper. <italic>β</italic><sub>2</sub> is the steepness of the exponential (<xref ref-type="disp-formula" rid="equ6">Equation 5</xref>). The network is described by <xref ref-type="disp-formula" rid="equ4">Equation 3</xref> with <italic>μ</italic><sub>0</sub> = −1.9, Δ<sub><italic>r</italic></sub> = 0.01, <italic>μ</italic><sub><italic>f</italic></sub> = 0.2, and <italic>l</italic><sub><italic>c</italic></sub> = 4. <italic>μ</italic><sub><italic>b</italic></sub> = 0.125, 0.15, 0.175, and 0.19.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.007">http://dx.doi.org/10.7554/eLife.01239.007</ext-link></p></caption><graphic xlink:href="elife01239f004"/></fig></p><p>We extend our expansion to second-order in <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> (appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Sections 5 &amp; 6) to predict that the eigenvector is given by<disp-formula id="equ6"><label>(5)</label><mml:math id="m6"><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi 
mathvariant="bold-italic">λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>β</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:msup><mml:mtext>Ai</mml:mtext><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>β</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:msubsup><mml:mi>β</mml:mi><mml:mn>2</mml:mn><mml:mn>2</mml:mn></mml:msubsup></mml:mrow><mml:mrow><mml:msubsup><mml:mi>β</mml:mi><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:mo>/</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></disp-formula>with<disp-formula id="equ7"><label>(6)</label><mml:math 
id="m7"><mml:mrow><mml:msub><mml:mi>β</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mtext>Δ</mml:mtext><mml:mi>r</mml:mi></mml:msub><mml:mtext>csch</mml:mtext><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>4</mml:mn></mml:msup><mml:mtext>sinh</mml:mtext><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>3</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>μ</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mfrac><mml:mo> </mml:mo><mml:mtext>and</mml:mtext><mml:mo> 
</mml:mo><mml:msub><mml:mi>β</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:mo>−</mml:mo><mml:msub><mml:mi>μ</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mtext>coth</mml:mtext><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>μ</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>where Ai is the first Airy function (<xref ref-type="bibr" rid="bib29">Olver, 2010</xref>). The eigenvector is the product of an exponential and an Airy function, and this product is localized when the exponential is steep (<xref ref-type="fig" rid="fig4">Figure 4A</xref>). The steepness of the exponential depends on <italic>μ</italic><sub><italic>f</italic></sub> − <italic>μ</italic><sub><italic>b</italic></sub>. When this difference is small, the exponential is shallow and the trailing edge of the product is poorly localized. <xref ref-type="fig" rid="fig4">Figure 4B</xref> shows that this functional form accurately predicts the results from numerical simulations, except when the eigenvector is almost completely delocalized.</p><p>These results reveal that an asymmetry in the strength of feedforward and feedback projections can play an important role in the segregation of timescales in biological systems.</p><p>The second-order expansion demonstrates that the approach is general and can be extended as needed. 
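For concreteness, the coefficients of Equation 6 can be evaluated directly for the parameter values used in Figure 4 (an illustrative sketch of ours, not the authors' code); β<sub>2</sub> falls toward zero, i.e. the exponential flattens, as μ<sub>b</sub> approaches μ<sub>f</sub>:

```python
import math

# Illustration (our sketch, Figure 4 parameter values): evaluate the
# coefficients of Equation 6 and watch beta_2 -> 0 as mu_b -> mu_f.
d_r, mu_f, l_c = 0.01, 0.2, 4.0

def eq6_betas(mu_b):
    coth = 1.0 / math.tanh(1.0 / (2 * l_c))
    csch = 1.0 / math.sinh(1.0 / (2 * l_c))
    beta1 = d_r * csch ** 4 * math.sinh(1.0 / l_c) ** 3 / (mu_f + mu_b)
    beta2 = (mu_f - mu_b) * coth / (mu_f + mu_b)
    return beta1, beta2

for mu_b in (0.125, 0.15, 0.175, 0.19):
    b1, b2 = eq6_betas(mu_b)
    print(f"mu_b = {mu_b:.3f}: beta_1 = {b1:.3f}, beta_2 = {b2:.3f}")
```

A shallower exponential (smaller β<sub>2</sub>) corresponds to the partially delocalized eigenvectors of Figure 4B.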
While the first-order expansion in <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> generically gives rise to modulated Gaussians, the functional form of the eigenvectors from a second-order expansion depends on the connectivity (appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Section 5) and, in general, the asymptotic decay is slower than that of a Gaussian.</p></sec><sec id="s2-2"><title>Localization in a network with a gradient of connectivity range</title><p>The previous architecture was a chain of nodes with identical inter-node connectivity but varying local connectivity. We now consider a contrasting architecture: a chain with no self-coupling but with a location-dependent bias in inter-node connectivity. We build this model motivated by the intuitive notion that nodes near the input end of a network send mostly feedforward projections, while nodes near the output send mostly feedback projections. The network architecture is shown in <xref ref-type="fig" rid="fig5">Figure 5A</xref>.<fig id="fig5" position="float"><object-id pub-id-type="doi">10.7554/eLife.01239.008</object-id><label>Figure 5.</label><caption><title>Localized eigenvectors in a network with a gradient of connectivity range.</title><p>(<bold>A</bold>) The network consists of a chain of 50 identical nodes, shown here by a schematic. Spatial length of feedforward connections (from earlier to later nodes) decreases along the chain while the spatial length of feedback connections (from later to earlier nodes) increases along the chain. The network is described by <xref ref-type="disp-formula" rid="equ8">Equation 7</xref>, with <italic>μ</italic><sub>0</sub> = −1.05, <italic>μ</italic><sub><italic>f</italic></sub> = 5, <italic>μ</italic><sub><italic>b</italic></sub> = 0.5, <italic>f</italic><sub>0</sub> = 0.2, <italic>f</italic><sub>1</sub> = 0.12, <italic>b</italic><sub>0</sub> = 6, <italic>b</italic><sub>1</sub> = 0.11. 
Normally-distributed randomness of standard deviation σ = 10<sup>−5</sup> is added to all connections. (<bold>B</bold>) Five sample eigenvectors, with numerical simulations (filled circles) well fitted by the analytical predictions (solid lines). Note the effect of added randomness on the rightmost eigenvector. (<bold>C</bold>) Heat map of eigenvectors on logarithmic scale. Rows correspond to eigenvectors, arranged by increasing decay time. All eigenvectors are localized, but timescales are not monotonically related to eigenvector position. (<bold>D</bold>) Dynamical response of the network to an input pulse. Long timescales are localized to nodes early in the network while nodes later in the network show intermediate timescales.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.008">http://dx.doi.org/10.7554/eLife.01239.008</ext-link></p></caption><graphic xlink:href="elife01239f005"/></fig></p><p>Connectivity decays exponentially, as in the previous example, but the decay length depends on position. 
Moving along the chain, feedforward decay length decreases while feedback decay length increases:<disp-formula id="equ8"><label>(7)</label><mml:math id="m8"><mml:mrow><mml:mi>W</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mi>k</mml:mi><mml:mo> </mml:mo><mml:mo> </mml:mo><mml:mo>(</mml:mo><mml:mtext>self</mml:mtext><mml:mo>-</mml:mo><mml:mtext>coupling</mml:mtext><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>f</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>&gt;</mml:mo><mml:mi>k</mml:mi><mml:mo> </mml:mo><mml:mo> </mml:mo><mml:mo>(</mml:mo><mml:mtext>feedforward</mml:mtext><mml:mo> 
</mml:mo><mml:mtext>connections</mml:mtext><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>−</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>&lt;</mml:mo><mml:mi>k</mml:mi><mml:mo> </mml:mo><mml:mo> </mml:mo><mml:mo>(</mml:mo><mml:mtext>feedback</mml:mtext><mml:mo> </mml:mo><mml:mtext>connections</mml:mtext><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula></p><p>The parameters <italic>f</italic><sub>0</sub>, <italic>f</italic><sub>1</sub>, <italic>b</italic><sub>0</sub>, and <italic>b</italic><sub>1</sub> control the location-dependence in decay length, <italic>μ</italic><sub>0</sub> is the leakage term, and <italic>μ</italic><sub><italic>f</italic></sub> and <italic>μ</italic><sub><italic>b</italic></sub> set the maximum strength of feedforward and feedback projections. We also add a small amount of randomness to the connection strengths.</p><p>As before we calculate the eigenvector width, <italic>α</italic>. In this case, for a wide range of the parameters in <xref ref-type="disp-formula" rid="equ8">Equation 7</xref>, <italic>α</italic><sup>2</sup> is positive and approximately constant for all eigenvectors. Therefore, all eigenvectors are localized and have approximately the same width (appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Section 4). 
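As an illustrative numerical check (our sketch, with 0-based node indices and the Figure 5 parameters), one can build the matrix of Equation 7, add the weak randomness described in the caption, and verify localization via participation ratios:

```python
import numpy as np

# Illustrative sketch (not the authors' code): the Equation 7 network with
# the Figure 5 parameters, plus weak randomness (sigma = 1e-5) as in the
# caption. Node indices are taken 0-based.
rng = np.random.default_rng(0)
N, mu0, mu_f, mu_b = 50, -1.05, 5.0, 0.5
f0, f1, b0, b1 = 0.2, 0.12, 6.0, 0.11
idx = np.arange(N)
j, k = np.meshgrid(idx, idx, indexing="ij")
d = j - k
W = np.diag(np.full(N, mu0))
ff = j > k   # feedforward: decay rate f0 + f1*k grows along the chain
W[ff] = mu_f * np.exp(-(f0 + f1 * k[ff]) * d[ff])
fb = j < k   # feedback: decay rate b0 - b1*k shrinks along the chain
W[fb] = mu_b * np.exp((b0 - b1 * k[fb]) * d[fb])
W = W + 1e-5 * rng.standard_normal((N, N))   # weak randomness on all entries

_, vecs = np.linalg.eig(W)
p = np.abs(vecs) ** 2
p = p / p.sum(axis=0)
part_ratio = 1.0 / (p ** 2).sum(axis=0)  # ~N when delocalized, small when localized
print(f"largest participation ratio = {part_ratio.max():.1f} of {N} nodes")
```

All participation ratios should remain well below the network size, in line with the heat map of Figure 5C.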
Five eigenvectors are plotted in <xref ref-type="fig" rid="fig5">Figure 5B</xref> along with theoretical predictions. <xref ref-type="fig" rid="fig5">Figure 5C</xref> shows all of the eigenvectors on a heat map and demonstrates that all are localized. The fastest and slowest timescales are localized to the earlier nodes, while the intermediate timescales are localized towards the end of the chain. The earlier nodes thus show a combination of very fast and very slow time courses, whereas the later nodes display dynamics with an intermediate range of timescales. Such dynamics are a salient feature of networks with opposing gradients in their connectivity profile. In <xref ref-type="fig" rid="fig5">Figure 5D</xref>, we plot the decay of network activity from a uniform initial condition; note the contrast between nodes early and late in the chain.</p><p>While the eigenvectors are all localized, different eigenvectors tend to cluster their centers near similar locations. Near those locations, nodes may participate in multiple eigenvectors, implying that time constants are not well segregated. This is a consequence of the architecture: nodes towards the edges of the chain project most strongly towards the center, so that small perturbations at either end of the chain are strongly propagated inward. The narrow spread of centers (the overlap of multiple eigenvectors) reduces the segregation of timescales that is one benefit of localization. We find that adding a small amount of randomness to the system spreads out the eigenvector centers without significantly changing their shapes. This approach is more robust than fine-tuning parameters to maximally spread the centers, and seems reasonable in light of the heterogeneity intrinsic to biological systems (<xref ref-type="bibr" rid="bib32">Raser and O’Shea, 2005</xref>; <xref ref-type="bibr" rid="bib5">Barbour et al., 2007</xref>). 
Upon adding randomness, most eigenvectors remain Gaussian, while a minority remain localized but lose their Gaussian shape.</p><p>The significant overlap of the eigenvectors means that they are far from orthogonal to each other. Matrices whose eigenvectors are far from orthogonal, called non-normal matrices, can show a number of interesting transient effects (<xref ref-type="bibr" rid="bib41">Trefethen and Embree, 2005</xref>; <xref ref-type="bibr" rid="bib16">Goldman, 2009</xref>; <xref ref-type="bibr" rid="bib26">Murphy and Miller, 2009</xref>). In particular, we note that the dynamics of our example network show significant initial growth before decaying, as visible in the scale of <xref ref-type="fig" rid="fig5">Figure 5D</xref>.</p></sec><sec id="s2-3"><title>Randomness and diversity</title><p>As observed in the last section, the heterogeneity intrinsic to biological systems can play a beneficial role in computation. Indeed, sufficient randomness in local node properties has been shown to give localized eigenvectors in models of physical systems with nearest-neighbor connectivity, and the transition from delocalized to localized eigenvectors has been suggested as a model of the transition from a conducting to an insulating medium (<xref ref-type="bibr" rid="bib3">Anderson, 1958</xref>; <xref ref-type="bibr" rid="bib1">Abou-Chacra et al., 1973</xref>; <xref ref-type="bibr" rid="bib21">Lee, 1985</xref>). A similar mechanism should apply in biological systems. 
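The classic disorder-induced effect is easy to reproduce numerically. The sketch below is a generic Anderson-style chain (our illustration, with hypothetical parameters; not the specific network analyzed below): random on-site terms added to nearest-neighbor coupling sharply reduce the participation ratio of the eigenvectors.

```python
import numpy as np

# Generic Anderson-style illustration (our sketch, hypothetical parameters):
# a chain with nearest-neighbor coupling and random on-site terms.
# Disorder collapses the participation ratio, signalling localization.
rng = np.random.default_rng(0)
N, hop, sigma = 200, 1.0, 2.0
H = np.diag(np.full(N - 1, hop), 1) + np.diag(np.full(N - 1, hop), -1)
H_dis = H + np.diag(sigma * rng.standard_normal(N))

def median_pr(M):
    _, V = np.linalg.eigh(M)          # symmetric matrix, so eigh applies
    p = V ** 2
    p = p / p.sum(axis=0)
    return float(np.median(1.0 / (p ** 2).sum(axis=0)))

print(f"ordered chain: median participation ratio = {median_pr(H):.0f} nodes")
print(f"disordered chain: median participation ratio = {median_pr(H_dis):.0f} nodes")
```

The ordered chain has extended, sinusoidal eigenvectors spread over a number of nodes of order N, while strong disorder confines each eigenvector to a handful of sites.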
We numerically explore eigenvector localization in a network with exponentially-decaying connectivity and randomly distributed self-couplings.</p><p>The network connection matrix is given by<disp-formula id="equ9"><label>(8)</label><mml:math id="m9"><mml:mrow><mml:mi>W</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mi mathvariant="script">N</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:msup><mml:mi>σ</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:msub><mml:mi>l</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>for</mml:mtext><mml:mo> </mml:mo><mml:mi>j</mml:mi><mml:mo>≠</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula><mml:math id="inf7"><mml:mrow><mml:mi mathvariant="script">N</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:msup><mml:mi>σ</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> is drawn from a normal distribution with mean zero and variance 
σ<sup>2</sup>.</p><p>As σ<sup>2</sup> increases, the network shows a transition to localization. This transition is increasingly sharp and occurs at lower values of σ as the network gets larger. <xref ref-type="fig" rid="fig6">Figure 6</xref> shows a network with sufficient randomness for the eigenvectors to localize, with sample eigenvectors shown in <xref ref-type="fig" rid="fig6">Figure 6B</xref>. These show a variety of shapes and are no longer well described by Gaussians. Importantly, there is no longer a relationship between the location of an eigenvector and the timescale it corresponds to (<xref ref-type="fig" rid="fig6">Figure 6C</xref>). Thus, while each timescale is localized, a variety of timescales are present in each region of the network, and each node will show a random mixture of timescales. This is in contrast to our previous examples, which have a spatially continuous distribution of time constants. The random distribution of time constants is also observed in the decay from a uniform initial condition, as shown in <xref ref-type="fig" rid="fig6">Figure 6D</xref>.<fig id="fig6" position="float"><object-id pub-id-type="doi">10.7554/eLife.01239.009</object-id><label>Figure 6.</label><caption><title>Localized eigenvectors in a network with random self-coupling.</title><p>(<bold>A</bold>) The network consists of 100 nodes arranged in a chain. The plot above the chain shows the connectivity profile. Self-coupling is random, as indicated by the shading. The network is described by <xref ref-type="disp-formula" rid="equ9">Equation 8</xref> with <italic>μ</italic><sub>0</sub> = −1, <italic>μ</italic><sub><italic>c</italic></sub> = 0.05, <italic>l</italic><sub><italic>c</italic></sub> = 4, σ = 0.33. (<bold>B</bold>) Four eigenvectors are shown, localized to different parts of the network. Note the diversity of profiles. (<bold>C</bold>) Heat map of eigenvectors on logarithmic scale. Rows correspond to eigenvectors, arranged by increasing decay time. 
All eigenvectors are localized, though the extent of localization (the eigenvector width) varies, and there is no relationship between the timescale of an eigenvector and its spatial location in the network. (<bold>D</bold>) Dynamical response of the network to an input pulse. Note the diversity of dynamical responses, which bears no relationship to spatial location.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.009">http://dx.doi.org/10.7554/eLife.01239.009</ext-link></p></caption><graphic xlink:href="elife01239f006"/></fig></p></sec></sec><sec id="s3" sec-type="discussion"><title>Discussion</title><p>Local connectivity is insufficient to create localized temporal patterns of activity in linear networks. A network with sharply localized but translationally invariant connectivity has delocalized eigenvectors. This implies that distant nodes in the network have similar temporal activity, since they share the timescales of their dynamics. Breaking the invariance can give rise to localized eigenvectors, and we study conditions that allow this. We develop a theory that predicts the shapes of localized eigenvectors and generalizes to describe eigenvectors that are only partially localized and show multiple peaks. A major finding of this study is the identification of two network architectures, with either a gradient of local connectivity or a gradient of long-distance connection length, that give rise to activity patterns with localized timescales.</p><p>Our approach to eigenvector localization is partly based on <xref ref-type="bibr" rid="bib41">Trefethen and Embree (2005)</xref> and <xref ref-type="bibr" rid="bib42">Trefethen and Chapman (2004)</xref>. These authors study perturbations of translationally invariant matrices and determine conditions under which eigenvectors are localized in the large-N limit. 
We additionally assume that the connectivity is local, since we are interested in matrices that describe connectivity of biological networks. This allows us to calculate explicit functional forms for the eigenvectors.</p><p>We stress that the temporal aspect of the network dynamics should not be confused with selectivity across space in a neural network. Even if temporal patterns are localized, a large proportion of network nodes may be active in response to a given input, albeit with distinct temporal dynamics. Conversely, even if temporal patterns are delocalized, nodes show similar dynamics yet may still be highly selective to different inputs and any stimulus could primarily activate only a small fraction of nodes in the network.</p><p>Our results are particularly relevant to understanding networks that need to perform computations requiring a wide spread of timescales. In general, input along a fast eigenvector decays exponentially faster than input along a slow eigenvector. To see this, consider a network with a fast and a slow timescale (<inline-formula><mml:math id="inf8"><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>λ</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="inf9"><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>λ</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>), and having initial condition with components <italic>a</italic><sub><italic>fast</italic></sub> and <italic>a</italic><sub><italic>slow</italic></sub> along the fast and the slow eigenvectors respectively. 
As shown in <xref ref-type="disp-formula" rid="equ12">Equation 11</xref>, the network activity will evolve as <inline-formula><mml:math id="inf10"><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>λ</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>λ</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula>. For a node to show a significant fast timescale in the presence of a slower, more persistent timescale, the contribution of this slow timescale to the node must be small. This can happen in two ways, corresponding to the terms of <xref ref-type="disp-formula" rid="equ11">Equation 10</xref>. If the input contributes little to the slower eigenvectors then their amplitudes will be small at all nodes. This requires fine-tuned input (exponentially smaller along the slow eigenvectors) and means that the slow timescales do not contribute significantly to any node. 
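The effect of the slow component's amplitude at a node can be seen directly with two modes and illustrative numbers (our own sketch; the time constants of 1 and 10 time units and the node weights are made up, not taken from the results above):

```python
import numpy as np

# Two decay modes, as in Equation 11 restricted to two eigenvectors:
# phi(t) = a_fast * e^(lam_fast * t) + a_slow * e^(lam_slow * t).
lam_fast, lam_slow = -1.0, -0.1      # time constants 1 and 10 (illustrative)
t = np.linspace(0.0, 5.0, 501)

def node_activity(a_fast, a_slow):
    """Activity of one node, given its weights on the fast and slow eigenvectors."""
    return a_fast * np.exp(lam_fast * t) + a_slow * np.exp(lam_slow * t)

# Node A: the slow eigenvector is exponentially small here, so dynamics look fast.
phi_A = node_activity(1.0, 1e-3)
# Node B: comparable weights, so the slow mode dominates after a few fast time constants.
phi_B = node_activity(1.0, 1.0)

idx = np.searchsorted(t, 3.0)        # three fast time constants after onset
print(phi_A[idx], phi_B[idx])        # node A has nearly decayed; node B has not
```

After three fast time constants, node A retains about 5% of its initial activity while node B still carries most of its slow component; only the node at which the slow eigenvector is exponentially small exhibits a fast effective timescale.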
Alternatively, as in the architectures we propose, the slow eigenvectors could be exponentially smaller at certain nodes; these nodes will then show fast timescales for most inputs, with a small slow component.</p><p>The architecture with a gradient of local connectivity (<xref ref-type="fig" rid="fig3">Figure 3</xref>) may explain some observations in the larval zebrafish oculomotor system (<xref ref-type="bibr" rid="bib25">Miri et al., 2011</xref>). The authors observed a wide variation in the time constants of decay of firing activity across neurons, with more distant neurons showing a greater difference in time constants. They proposed a model characterized by a chain of nodes with linearly decaying connectivity and a gradient of connection strengths, and found that different nodes in the model showed different timescales. Furthermore, the introduction of asymmetry to connectivity (with feedback connections weaker than feedforward connections) enhanced the diversity of timescales. This effect of asymmetry was also seen in an extension of the model to the macaque monkey oculomotor integrator (<xref ref-type="bibr" rid="bib19">Joshua et al., 2013</xref>). Our work explains why such architectures allow for a diversity of timescales, and we predict that such gradients and asymmetry should be seen experimentally.</p><p>With a gradient of local connections, time constants increase monotonically along the network chain. By contrast, with a gradient of connectivity length (<xref ref-type="fig" rid="fig5">Figure 5</xref>), the relationship between timescales and eigenvector position is lawful but non-monotonic, as a consequence of the existence of two gradients (feedforward connectivity decreases while feedback increases along the chain). The small amount of randomness added to this system helps segregate the timescales across the network, while only mildly affecting the continuous dependence of eigenvector position on timescale. 
This suggests that randomness may contribute to a diversity of timescales.</p><p>The connection between structural randomness and localization is well known in physical systems (<xref ref-type="bibr" rid="bib3">Anderson, 1958</xref>; <xref ref-type="bibr" rid="bib1">Abou-Chacra et al., 1973</xref>; <xref ref-type="bibr" rid="bib21">Lee, 1985</xref>). We applied this idea to a biological context (<xref ref-type="fig" rid="fig6">Figure 6</xref>), and showed that localization can indeed emerge from sufficiently random node properties. However, in this case nearby eigenvectors do not correspond to similar timescales. A given timescale is localized to a particular region of the network but a similar timescale could be localized at a distant region and, conversely, a much shorter or longer timescale could be localized in the same part of the network. Thus, the timescales shown by a particular node are a random sample of the timescales of the network.</p><p>Chemical gradients are common in biological systems, especially during development (<xref ref-type="bibr" rid="bib49">Wolpert, 2011</xref>), and structural randomness and local heterogeneity are ubiquitous. We predict that biological systems could show localized activity patterns due to either of these mechanisms or a combination of the two. Furthermore, local randomness can enhance localization that emerges from gradients or long-range spatial fluctuations in local properties. We have focused on localization that yields a smooth relationship between timescale and eigenvector position; such networks are well-placed to integrate information at different timescales. However, it seems plausible that biological networks have evolved to take advantage of randomness-induced localization, and it would be interesting to explore the computational implications of such localization. 
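Randomness-induced localization of the kind shown in Figure 6 is straightforward to reproduce numerically. The sketch below is our own illustrative code (the helper names `build_chain` and `mean_ipr` are ours, and only the parameter values follow Figure 6): it builds a chain with Gaussian local connectivity and either uniform or random self-coupling, and quantifies localization by the mean inverse participation ratio (IPR), which is of order 1/N for delocalized eigenvectors and of order one for tightly localized ones.

```python
import numpy as np

def build_chain(N=100, mu0=-1.0, muc=0.05, lc=4.0, sigma=0.0, seed=0):
    """Chain of N nodes with Gaussian local connectivity (peak muc, width lc)
    and self-coupling mu0 jittered by i.i.d. Gaussian noise of s.d. sigma.
    Parameter values follow Figure 6; the code itself is illustrative."""
    rng = np.random.default_rng(seed)
    j = np.arange(N)
    W = muc * np.exp(-(j[:, None] - j[None, :]) ** 2 / (2 * lc**2))
    np.fill_diagonal(W, mu0 + sigma * rng.standard_normal(N))
    return W

def mean_ipr(W):
    """Mean inverse participation ratio over the eigenvectors of W:
    sum_j |v(j)|^4 for unit-norm v; ~1/N if delocalized, ~1 if localized."""
    _, V = np.linalg.eig(W)
    V = np.abs(V) / np.linalg.norm(V, axis=0)
    return float(np.mean(np.sum(V**4, axis=0)))

smooth = mean_ipr(build_chain(sigma=0.0))   # translation-invariant up to boundaries
rough = mean_ipr(build_chain(sigma=0.33))   # random self-coupling, as in Figure 6
print(smooth, rough)                        # random diagonal gives a much larger mean IPR
```

With the uniform diagonal, the eigenvectors are close to plane waves and the mean IPR stays near 1/N; with the random diagonal at σ = 0.33, it rises by a large factor, reflecting localization.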
It could also be fruitful to explore localization from spatially correlated randomness.</p><p>An influential view of complexity is that a complex network combines segregation and integration: individual nodes and clusters of nodes show different behaviors and subserve different functions; these behaviors, however, emerge from network interactions and the computations depend on the flow of information through the network (<xref ref-type="bibr" rid="bib40">Tononi and Edelman, 1998</xref>). The localized activity patterns we find are one way to construct such a network. Each node participates strongly in a few timescales and weakly in the others, but the shape and timescales of the activity patterns emerge from the network topology as a whole and information can flow from one node to another. Moreover, as shown in <xref ref-type="fig" rid="fig7">Figure 7</xref>, adding a small number of long-range strong links to local connectivity, as in small-world networks (<xref ref-type="bibr" rid="bib48">Watts and Strogatz, 1998</xref>), causes a few eigenvectors to delocalize while leaving most localized. This is a possible mechanism to integrate computations while preserving segregated activity, and is an interesting direction for future research.<fig id="fig7" position="float"><object-id pub-id-type="doi">10.7554/eLife.01239.010</object-id><label>Figure 7.</label><caption><title>Strong long-range connections can delocalize a subset of eigenvectors.</title><p>(<bold>A</bold>) Left panel: connectivity of the network in <xref ref-type="fig" rid="fig3">Figure 3</xref> with long-range connections of strength 0.05 added between 10% of the nodes. The gradient of self-coupling is shown along the diagonal on another scale, for clarity. Right panel: eigenvectors shown as in panel <bold>C</bold> of <xref ref-type="fig" rid="fig3">Figure 3</xref>. 
(<bold>B</bold>) Left panel: connectivity of the network in <xref ref-type="fig" rid="fig5">Figure 5</xref> with long-range connections of strength 0.05 added between 10% of the nodes. Right panel: eigenvectors shown as in panel <bold>C</bold> of <xref ref-type="fig" rid="fig5">Figure 5</xref>.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.010">http://dx.doi.org/10.7554/eLife.01239.010</ext-link></p></caption><graphic xlink:href="elife01239f007"/></fig></p></sec><sec id="s4" sec-type="methods"><title>Methods</title><p>We study the activity of a linear network of coupled units, which will be called ‘nodes’. These represent neurons or populations of neurons. The activity of the <italic>j</italic>th node, <italic>ϕ</italic><sub><italic>j</italic></sub> (<italic>t</italic>), is determined by interactions with the other nodes in the network and by external inputs. It obeys the following equation:<disp-formula id="equ10"><label>(9)</label><mml:math id="m10"><mml:mrow><mml:mfrac><mml:mi>d</mml:mi><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:msub><mml:mi>ϕ</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mstyle 
displaystyle="true"><mml:munderover><mml:mo>∑</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mrow><mml:mi>W</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msub><mml:mi>ϕ</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mstyle><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>where <italic>W</italic> (<italic>j</italic>,<italic>k</italic>) is the connection strength from node <italic>k</italic> to node <italic>j</italic> of the network and <italic>I</italic><sub><italic>j</italic></sub> is the external input to the <italic>j</italic>th node. <italic>W</italic> (<italic>j</italic>,<italic>j</italic>) is the self-coupling of the <italic>j</italic>th node and typically includes a leakage term. 
Note that the intrinsic timescale of node <italic>j</italic> is absorbed into the matrix <italic>W</italic>.</p><p>By solving <xref ref-type="disp-formula" rid="equ10">Equation 9</xref>, <italic>ϕ</italic><sub><italic>j</italic></sub> (<italic>t</italic>) can be expressed in terms of the eigenvectors of the connection matrix <italic>W</italic>, yielding<disp-formula id="equ11"><label>(10)</label><mml:math id="m11"><mml:mrow><mml:msub><mml:mi>ϕ</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mstyle displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mi>λ</mml:mi></mml:munder><mml:msub><mml:mi>A</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi mathvariant="bold-italic">λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula>(<xref ref-type="bibr" rid="bib34">Rugh, 1995</xref>). Here, <italic>λ</italic> indexes the eigenvalues of <italic>W</italic>, and <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> (<italic>j</italic>) is the <italic>j</italic>th component of the eigenvector corresponding to <italic>λ</italic>. These are independent of the input. <italic>A</italic><sub><italic>λ</italic></sub> (<italic>t</italic>) is the time-dependent amplitude of the eigenvector <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> and depends on the input, which determines to what extent different eigenvectors are activated. 
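This decomposition can be checked numerically. In the sketch below (our own code, using a small random stable matrix rather than any of the network architectures studied here), the eigenvector expansion with zero input, in which each amplitude decays as its initial coefficient times e^(λt), matches direct Euler integration of Equation 9:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
# A small stable random network (Equation 9 with I = 0); illustrative, not
# one of the connectivity structures analyzed in the text.
W = 0.1 * rng.standard_normal((N, N))
np.fill_diagonal(W, -1.0)            # leaky self-coupling keeps eigenvalues in the left half-plane

phi0 = rng.standard_normal(N)
lam, V = np.linalg.eig(W)
a = np.linalg.solve(V, phi0)         # coefficients of the initial condition in the eigenbasis

def phi_eig(t):
    """Equation 10 with zero input: phi(t) = sum over lambda of a_lambda e^(lambda t) v_lambda."""
    return (V @ (a * np.exp(lam * t))).real

# Cross-check against straightforward Euler integration of d(phi)/dt = W phi.
dt, T = 1e-4, 2.0
phi = phi0.copy()
for _ in range(int(T / dt)):
    phi = phi + dt * (W @ phi)
print(np.max(np.abs(phi - phi_eig(T))))   # small: the two solutions agree
```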
If the real parts of the eigenvalues are negative then the network is stable and, in the absence of input, <italic>A</italic><sub><italic>λ</italic></sub> (<italic>t</italic>) decays exponentially with a characteristic time of <inline-formula><mml:math id="inf12"><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi mathvariant="fraktur">R</mml:mi><mml:mi mathvariant="fraktur">e</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>−</mml:mo><mml:mi>λ</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>.</p><p><italic>A</italic><sub><italic>λ</italic></sub> (<italic>t</italic>) consists of the sum of contributions from the initial condition and the input, so that <xref ref-type="disp-formula" rid="equ11">Equation 10</xref> can be written as<disp-formula id="equ12"><label>(11)</label><mml:math id="m12"><mml:mrow><mml:msub><mml:mi>ϕ</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mstyle displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mi>λ</mml:mi></mml:munder><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mi>a</mml:mi><mml:mo>˜</mml:mo></mml:mover></mml:mrow><mml:mi>λ</mml:mi></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>λ</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle displaystyle="true"><mml:mo>∫</mml:mo></mml:mstyle><mml:mo>​</mml:mo></mml:msup></mml:mrow><mml:mn>0</mml:mn><mml:mi>t</mml:mi></mml:msubsup><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>λ</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>−</mml:mo><mml:mi>t</mml:mi><mml:mo>'</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mover 
accent="true"><mml:mi>I</mml:mi><mml:mo>˜</mml:mo></mml:mover></mml:mrow><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>'</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi><mml:mo>'</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi mathvariant="bold-italic">λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p><inline-formula><mml:math id="inf13"><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mi>a</mml:mi><mml:mo>˜</mml:mo></mml:mover></mml:mrow><mml:mi>λ</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="inf14"><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mi>I</mml:mi><mml:mo>˜</mml:mo></mml:mover></mml:mrow><mml:mi>λ</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> are the coefficients for the initial condition and the input, respectively, represented in the coordinate system of the eigenvectors. 
In a stable network, each node forgets its initial condition and simultaneously integrates input with the same set of time constants.</p><p>In this work, we examine different classes of the connection matrix <italic>W</italic>, with the constraint that connectivity is primarily local, and we identify conditions under which its eigenvectors are localized in the network in such a way that different nodes (or different parts of the network) exhibit disparate timescales.</p><sec id="s4-1"><title>The functional form of localized eigenvectors from a first-order expansion</title><p>We rewrite the connectivity matrix in terms of a relative coordinate, <italic>p = j−k</italic>, as<disp-formula id="equ13"><label>(12)</label><mml:math id="m13"><mml:mrow><mml:mi>W</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p>Thus, <italic>c</italic> (<italic>j</italic>,2) = <italic>W</italic> (<italic>j</italic>,<italic>j</italic> − 2) indexes feedforward projections that span two nodes, and <italic>c</italic> (5,<italic>p</italic>) = <italic>W</italic> (5,5 − <italic>p</italic>) indexes projections to node 5. Note that in the translation-invariant case, <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) would be independent of <italic>j</italic> (appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Section 1), while the requirement of local connectivity means that <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) is small away from <italic>p</italic> = 0. 
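In code, this change of coordinates is a simple re-indexing, sketched below with 0-based indices (unlike the 1-based node labels of the text). The modulo implements the periodic extension of c in p that we adopt as a bookkeeping convenience:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10
W = rng.standard_normal((N, N))      # any connectivity matrix (illustrative)

def c(j, p):
    """Relative-coordinate form of Equation 12: c(j, p) = W(j, j - p), with p
    taken modulo N (the periodic extension of c in p). 0-based indices."""
    return W[j, (j - p) % N]

assert c(5, 2) == W[5, 3]            # feedforward projection spanning two nodes
assert c(5, 0) == W[5, 5]            # self-coupling
assert c(5, 2 + N) == c(5, 2)        # periodicity in p with period N
print("relative-coordinate checks passed")
```

Negative p indexes feedback projections, for example c(2, −3) = W(2, 5).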
For any fixed <italic>j</italic>, <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) is defined from <italic>p = j − N</italic> to <italic>p = j − 1</italic>. We extend the definition of <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) to values outside this range by defining <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) to be periodic in <italic>p</italic>, with the period equal to the size of the network. This is purely a formal convenience to simplify the limits in certain sums and does not constrain the connectivity between the nodes of the network.</p><p>Consider the candidate eigenvector <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> (<italic>j</italic>) = <italic>g</italic><sub><italic>λ</italic></sub> (<italic>j</italic>) <italic>e</italic><sup><italic>iωj</italic></sup>. The dependence of <italic>g</italic><sub><italic>λ</italic></sub> on <italic>j</italic> allows the magnitude of the eigenvector to depend on position; setting this function equal to a constant returns us to the translation-invariant case (see appendix [<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>], Section 1). Moreover, note that <italic>g</italic><sub><italic>λ</italic></sub> (<italic>j</italic>) depends on <italic>λ</italic>, meaning that eigenvectors corresponding to different eigenvalues (timescales) can have different shapes. For example, different eigenvectors can be localized to different degrees, and localized and delocalized eigenvectors can coexist (see <xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref> for an illustration). 
<italic>ω</italic> allows the eigenvector to oscillate across nodes; it varies between eigenvectors and so depends on <italic>λ</italic>.</p><p>Applying <italic>W</italic> to <bold><italic>v</italic></bold><sub><bold><italic>λ</italic></bold></sub> yields<disp-formula id="equ14"><label>(13)</label><mml:math id="m14"><mml:mrow><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>W</mml:mi><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi mathvariant="bold-italic">λ</mml:mi></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:munder><mml:mstyle displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munder></mml:mrow><mml:mi>N</mml:mi></mml:mover></mml:mrow><mml:mi>W</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:munder><mml:mstyle 
displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munder></mml:mrow><mml:mi>N</mml:mi></mml:mover></mml:mrow><mml:mi>c</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></disp-formula><disp-formula id="equ15"><label>(14)</label><mml:math id="m15"><mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:munder><mml:mstyle displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>N</mml:mi></mml:mrow></mml:munder></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mover></mml:mrow><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>here, the term in brackets is no longer independent of <italic>j</italic>.</p><p>So far we have made no use of the requirement of local 
connectivity and, given that <italic>g</italic><sub><italic>λ</italic></sub> is an arbitrary function of position and can be different for different timescales, we have placed no constraints on the shape of the eigenvectors. By including an oscillatory term (<italic>e</italic><sup><italic>iωj</italic></sup>) in our ansatz, we ensure that <italic>g</italic><sub><italic>λ</italic></sub> (<italic>j</italic>) is constant when connectivity is translation-invariant; this will simplify the analysis.</p><p>We now approximate both <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) and <italic>g</italic><sub><italic>λ</italic></sub> (<italic>j</italic> − <italic>p</italic>) to first-order (i.e., linearly):<disp-formula id="equ16"><mml:math id="m16"><mml:mrow><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>≈</mml:mo><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:mo>∂</mml:mo><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mo>∂</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:mfrac><mml:msub><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula><disp-formula id="equ17"><label>(15)</label><mml:math 
id="m17"><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>≈</mml:mo><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>−</mml:mo><mml:mi>g</mml:mi><mml:msub><mml:mtext>'</mml:mtext><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>where, <italic>j</italic><sub>0</sub> is a putative center of the eigenvector.</p><p>Substituting <xref ref-type="disp-formula" rid="equ17">Equation 15</xref> into <xref ref-type="disp-formula" rid="equ15">Equation 14</xref> we get<disp-formula id="equ18"><label>(16)</label><mml:math id="m18"><mml:mrow><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>W</mml:mi><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi>λ</mml:mi></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:munder><mml:mstyle 
displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>N</mml:mi></mml:mrow></mml:munder></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mover></mml:mrow><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:mo>∂</mml:mo><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mo>∂</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:mfrac><mml:msub><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>−</mml:mo><mml:mi>g</mml:mi><mml:msub><mml:mtext>'</mml:mtext><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></disp-formula></p><p>We expect these approximations to be valid only locally. 
However, if connectivity is local then the major contribution to the sum comes from small values of <italic>p</italic>. For large values of <italic>p</italic>, <italic>g</italic><sub><italic>λ</italic></sub> (<italic>j</italic> − <italic>p</italic>) is multiplied by connectivity strengths close to 0 and so we only need to approximate <italic>g</italic><sub><italic>λ</italic></sub> for <italic>p</italic> close to 0. Similarly, in approximating <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) around <italic>j = j</italic><sub><italic>0</italic></sub>, we expect our approximation to be good in the vicinity of <italic>j = j</italic><sub><italic>0</italic></sub>. However, if our eigenvector is indeed localized around <italic>j</italic><sub><italic>0</italic></sub>, then <italic>g</italic><sub><italic>λ</italic></sub> (<italic>k</italic>) is small when <inline-formula><mml:math id="inf15"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> is large. 
For small <italic>p</italic>, large values of <inline-formula><mml:math id="inf16"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> approximately correspond to large values of <inline-formula><mml:math id="inf17"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, and so <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) makes a contribution to the sum only when <italic>j</italic> ≈ <italic>j</italic><sub>0</sub>.</p><p>The zeroth-order term in <xref ref-type="disp-formula" rid="equ18">Equation 16</xref> is<disp-formula id="equ19"><mml:math id="m19"><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:munderover><mml:mstyle displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mi>λ</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub>
<mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula></p><p>The function in parentheses is periodic in <italic>p</italic> with period <italic>N</italic> (recall that <italic>c</italic> (<italic>j</italic>,<italic>p</italic>) was extended to be periodic in <italic>p</italic>). Thus to zeroth-order <bold><italic>v</italic></bold><sub><italic>λ</italic></sub> is an eigenvector with eigenvalue<disp-formula id="equ20"><label>(17)</label><mml:math id="m20"><mml:mrow><mml:mi>λ</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:munderover><mml:mstyle displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p>For <italic>λ</italic> to be an exact eigenvalue in <xref ref-type="disp-formula" rid="equ18">Equation 16</xref>, the higher-order terms should vanish. 
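In the translation-invariant case, where c is independent of j, Equation 17 is exact: W is then a circulant matrix, the allowed frequencies are ω = 2πm/N, and the eigenvalues are the discrete Fourier transform of the connectivity profile. The sketch below is our own illustrative check, using a kernel with a leak at p = 0 plus a periodically wrapped short-range Gaussian coupling; note that the sum over p = 0, …, N − 1 equals the sum over p = 1, …, N because c is periodic in p.

```python
import numpy as np

N = 64
p = np.arange(N)
# Translation-invariant local kernel c(p): leak at p = 0 plus a short-range
# Gaussian coupling, wrapped periodically (illustrative values).
dist = np.minimum(p, N - p)
c = 0.05 * np.exp(-dist**2 / (2 * 4.0**2))
c[0] = -1.0

W = np.empty((N, N))
for j in range(N):
    W[j] = np.roll(c, j)     # W[j, k] = c((k - j) mod N) = c((j - k) mod N), since c is symmetric

# Equation 17 with c independent of j: lambda(omega) = sum_p c(p) e^(-i omega p)
# at omega = 2 pi m / N, which is exactly the DFT of c (matching sign convention).
lam_formula = np.fft.fft(c)
lam_numeric = np.linalg.eigvals(W)

# The two eigenvalue sets agree (sorted, since the orderings differ).
print(np.allclose(np.sort_complex(lam_formula), np.sort_complex(lam_numeric)))
```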
By setting the first-order term in this equation to 0, we obtain a differential equation for <italic>g</italic><sub><italic>λ</italic></sub>(<italic>j</italic>):<disp-formula id="equ21"><label>(18)</label><mml:math id="m21"><mml:mrow><mml:mo>−</mml:mo><mml:mi>α</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:msubsup><mml:mi>g</mml:mi><mml:mi>λ</mml:mi><mml:mo>′</mml:mo></mml:msubsup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula>where<disp-formula id="equ22"><label>(19)</label><mml:math id="m22"><mml:mrow><mml:mi>α</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:mo>−</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mstyle
displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mi>p</mml:mi></mml:msub><mml:mi>p</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msub><mml:mstyle displaystyle="true"><mml:mo>∑</mml:mo></mml:mstyle><mml:mi>p</mml:mi></mml:msub><mml:mfrac><mml:mrow><mml:mo>∂</mml:mo><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mo>∂</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:mfrac><mml:msub><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p>Thus <italic>α</italic><sup>2</sup> is a ratio of discrete Fourier transforms at the frequency <italic>ω</italic>. Note that the denominator is a weighted measure of network heterogeneity at the location <italic>j</italic><sub>0</sub>. 
Also note that <italic>α</italic><sup>2</sup> can be written in terms of <italic>λ</italic> as (compare the twist condition of <xref ref-type="bibr" rid="bib41">Trefethen and Embree, 2005</xref>):<disp-formula id="equ23"><label>(20)</label><mml:math id="m23"><mml:mrow><mml:msup><mml:mi>α</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mo>−</mml:mo><mml:mi>i</mml:mi><mml:mfrac><mml:mrow><mml:mfrac><mml:mrow><mml:mo>∂</mml:mo><mml:mi>λ</mml:mi></mml:mrow><mml:mrow><mml:mo>∂</mml:mo><mml:mi>ω</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mo>∂</mml:mo><mml:mi>λ</mml:mi></mml:mrow><mml:mrow><mml:mo>∂</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p>Solving for <italic>g</italic><sub><italic>λ</italic></sub> in <xref ref-type="disp-formula" rid="equ21">Equation 18</xref> yields<disp-formula id="equ24"><mml:math 
id="m24"><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>α</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>where <italic>C</italic><sub>1</sub> is a constant. 
Thus, to first order, the eigenvector is given by the modulated Gaussian function<disp-formula id="equ25"><label>(21)</label><mml:math id="m25"><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">v</mml:mi><mml:mi>λ</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>j</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>−</mml:mo><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>α</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>ω</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>i</mml:mi><mml:mi>ω</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p><p>In general, <italic>α</italic> can be complex. In order for <bold><italic>v</italic></bold><sub><italic>λ</italic></sub> to be localized, <inline-formula><mml:math id="inf18"><mml:mrow><mml:mi mathvariant="fraktur">R</mml:mi><mml:mi mathvariant="fraktur">e</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msup><mml:mi>α</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> must be positive for the corresponding values of <italic>j</italic><sub>0</sub> and <italic>ω</italic>, and we only accept an eigenvector as a valid solution if this is the case. 
Thus the approach is self-consistent: we assumed that there existed a localized eigenvector, combined this with the requirement of local connectivity to solve for its putative shape, and then restricted ourselves to solutions that did indeed conform to our initial assumption.</p><p>For an expanded version of this analysis along with further discussion of what the analysis provides, see the appendix (<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>), Section 2.</p></sec></sec></body><back><sec sec-type="additional-information"><title>Additional information</title><fn-group content-type="competing-interest"><title>Competing interests</title><fn fn-type="conflict" id="conf1"><p>The authors declare that no competing interests exist.</p></fn></fn-group><fn-group content-type="author-contribution"><title>Author contributions</title><fn fn-type="con" id="con1"><p>RC, Conception and design, Acquisition of data, Analysis and interpretation of data, Drafting or revising the article</p></fn><fn fn-type="con" id="con2"><p>AB, Conception and design, Acquisition of data, Analysis and interpretation of data, Drafting or revising the article</p></fn><fn fn-type="con" id="con3"><p>X-JW, Conception and design, Analysis and interpretation of data, Drafting or revising the article</p></fn></fn-group></sec><sec sec-type="supplementary-material"><title>Additional files</title><supplementary-material id="SD1-data"><object-id pub-id-type="doi">10.7554/eLife.01239.011</object-id><label>Supplementary file 1.</label><caption><p>Mathematical appendix.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.01239.011">http://dx.doi.org/10.7554/eLife.01239.011</ext-link></p></caption><media mime-subtype="pdf" mimetype="application" xlink:href="elife01239s001.pdf"/></supplementary-material></sec><ref-list><title>References</title><ref id="bib1"><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Abou-Chacra</surname><given-names>R</given-names></name><name><surname>Thouless</surname><given-names>DJ</given-names></name><name><surname>Anderson</surname><given-names>PW</given-names></name></person-group><year>1973</year><article-title>A selfconsistent theory of localization</article-title><source>Journal of Physics C: Solid State Physics</source><volume>6</volume><fpage>1734</fpage><lpage>1752</lpage><pub-id pub-id-type="doi">10.1088/0022-3719/6/10/009</pub-id></element-citation></ref><ref id="bib2"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ahmed</surname><given-names>B</given-names></name><name><surname>Anderson</surname><given-names>JC</given-names></name><name><surname>Douglas</surname><given-names>RJ</given-names></name><name><surname>Martin</surname><given-names>KA</given-names></name><name><surname>Whitteridge</surname><given-names>D</given-names></name></person-group><year>1998</year><article-title>Estimates of the net excitatory currents evoked by visual stimulation of identified neurons in cat visual cortex</article-title><source>Cerebral Cortex</source><volume>8</volume><fpage>462</fpage><lpage>476</lpage><pub-id pub-id-type="doi">10.1093/cercor/8.5.462</pub-id></element-citation></ref><ref id="bib3"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname><given-names>PW</given-names></name></person-group><year>1958</year><article-title>Absence of diffusion in certain random lattices</article-title><source>Physical Review</source><volume>109</volume><fpage>1492</fpage><lpage>1505</lpage><pub-id pub-id-type="doi">10.1103/PhysRev.109.1492</pub-id></element-citation></ref><ref id="bib4"><element-citation publication-type="book"><person-group
person-group-type="author"><name><surname>Ashcroft</surname><given-names>NW</given-names></name><name><surname>Mermin</surname><given-names>ND</given-names></name></person-group><year>1976</year><source>Solid state physics</source><publisher-loc>New York</publisher-loc><publisher-name>Holt, Rinehart and Winston</publisher-name></element-citation></ref><ref id="bib5"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Barbour</surname><given-names>B</given-names></name><name><surname>Brunel</surname><given-names>N</given-names></name><name><surname>Hakim</surname><given-names>V</given-names></name><name><surname>Nadal</surname><given-names>JP</given-names></name></person-group><year>2007</year><article-title>What can we learn from synaptic weight distributions?</article-title><source>Trends in Neurosciences</source><volume>30</volume><fpage>622</fpage><lpage>629</lpage><pub-id pub-id-type="doi">10.1016/j.tins.2007.09.005</pub-id></element-citation></ref><ref id="bib6"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Benucci</surname><given-names>A</given-names></name><name><surname>Ringach</surname><given-names>DL</given-names></name><name><surname>Carandini</surname><given-names>M</given-names></name></person-group><year>2009</year><article-title>Coding of stimulus sequences by population responses in visual cortex</article-title><source>Nature Neuroscience</source><volume>12</volume><fpage>1317</fpage><lpage>1324</lpage><pub-id pub-id-type="doi">10.1038/nn.2398</pub-id></element-citation></ref><ref id="bib7"><element-citation publication-type="journal"><person-group
person-group-type="author"><name><surname>Bernacchia</surname><given-names>A</given-names></name><name><surname>Seo</surname><given-names>H</given-names></name><name><surname>Lee</surname><given-names>D</given-names></name><name><surname>Wang</surname><given-names>X-J</given-names></name></person-group><year>2011</year><article-title>A reservoir of time constants for memory traces in cortical neurons</article-title><source>Nature Neuroscience</source><volume>14</volume><fpage>366</fpage><lpage>372</lpage><pub-id pub-id-type="doi">10.1038/nn.2752</pub-id></element-citation></ref><ref id="bib8"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chance</surname><given-names>FS</given-names></name><name><surname>Abbott</surname><given-names>LF</given-names></name><name><surname>Reyes</surname><given-names>AD</given-names></name></person-group><year>2002</year><article-title>Gain modulation from background synaptic input</article-title><source>Neuron</source><volume>35</volume><fpage>773</fpage><lpage>782</lpage><pub-id pub-id-type="doi">10.1016/S0896-6273(02)00820-6</pub-id></element-citation></ref><ref id="bib9"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Constantinidis</surname><given-names>C</given-names></name><name><surname>Goldman-Rakic</surname><given-names>PS</given-names></name></person-group><year>2002</year><article-title>Correlated discharges among putative pyramidal neurons and interneurons in the primate prefrontal cortex</article-title><source>Journal of Neurophysiology</source><volume>88</volume><fpage>3487</fpage><lpage>3497</lpage><pub-id pub-id-type="doi">10.1152/jn.00188.2002</pub-id></element-citation></ref><ref id="bib10"><element-citation publication-type="book"><person-group 
person-group-type="author"><name><surname>Dayan</surname><given-names>P</given-names></name><name><surname>Abbott</surname><given-names>LF</given-names></name></person-group><year>2001</year><source>Theoretical neuroscience</source><publisher-loc>Cambridge</publisher-loc><publisher-name>The MIT Press</publisher-name></element-citation></ref><ref id="bib11"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Elston</surname><given-names>GN</given-names></name></person-group><year>2007</year><article-title>Specialization of the neocortical pyramidal cell during primate evolution</article-title><person-group person-group-type="editor"><name><surname>Kass</surname><given-names>JH</given-names></name><name><surname>Preuss</surname><given-names>TM</given-names></name></person-group><source>Evolution of nervous systems: a comprehensive reference</source><publisher-loc>New York</publisher-loc><publisher-name>Elsevier</publisher-name><fpage>191</fpage><lpage>242</lpage></element-citation></ref><ref id="bib12"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ercsey-Ravasz</surname><given-names>M</given-names></name><name><surname>Markov</surname><given-names>NT</given-names></name><name><surname>Lamy</surname><given-names>C</given-names></name><name><surname>Van Essen</surname><given-names>DC</given-names></name><name><surname>Knoblauch</surname><given-names>K</given-names></name><name><surname>Toroczkai</surname><given-names>Z</given-names></name><name><surname>Kennedy</surname><given-names>H</given-names></name></person-group><year>2013</year><article-title>A predictive network model of cerebral cortical connectivity based on a distance rule</article-title><source>Neuron</source><volume>80</volume><fpage>184</fpage><lpage>197</lpage><pub-id pub-id-type="doi">10.1016/j.neuron.2013.07.036</pub-id></element-citation></ref><ref id="bib13"><element-citation 
publication-type="journal"><person-group person-group-type="author"><name><surname>Ermentrout</surname><given-names>B</given-names></name></person-group><year>1998</year><article-title>Linearization of F-I curves by adaptation</article-title><source>Neural Computation</source><volume>10</volume><fpage>1721</fpage><lpage>1729</lpage><pub-id pub-id-type="doi">10.1162/089976698300017106</pub-id></element-citation></ref><ref id="bib14"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ganguli</surname><given-names>S</given-names></name><name><surname>Bisley</surname><given-names>JW</given-names></name><name><surname>Roitman</surname><given-names>JD</given-names></name><name><surname>Shadlen</surname><given-names>MN</given-names></name><name><surname>Goldberg</surname><given-names>ME</given-names></name><name><surname>Miller</surname><given-names>KD</given-names></name></person-group><year>2008</year><article-title>One-dimensional dynamics of attention and decision making in LIP</article-title><source>Neuron</source><volume>58</volume><fpage>15</fpage><lpage>25</lpage><pub-id pub-id-type="doi">10.1016/j.neuron.2008.01.038</pub-id></element-citation></ref><ref id="bib15"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ganguli</surname><given-names>S</given-names></name><name><surname>Huh</surname><given-names>D</given-names></name><name><surname>Sompolinsky</surname><given-names>H</given-names></name></person-group><year>2008</year><article-title>Memory traces in dynamical systems</article-title><source>Proceedings of the National Academy of Sciences of the United States of America</source><volume>105</volume><fpage>18970</fpage><lpage>18975</lpage><pub-id pub-id-type="doi">10.1073/pnas.0804451105</pub-id></element-citation></ref><ref id="bib16"><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Goldman</surname><given-names>MS</given-names></name></person-group><year>2009</year><article-title>Memory without feedback in a neural network</article-title><source>Neuron</source><volume>61</volume><fpage>621</fpage><lpage>634</lpage><pub-id pub-id-type="doi">10.1016/j.neuron.2008.12.012</pub-id></element-citation></ref><ref id="bib17"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Holmgren</surname><given-names>C</given-names></name><name><surname>Harkany</surname><given-names>T</given-names></name><name><surname>Svennenfors</surname><given-names>B</given-names></name><name><surname>Zilberter</surname><given-names>Y</given-names></name></person-group><year>2003</year><article-title>Pyramidal cell communication within local networks in layer 2/3 of rat neocortex</article-title><source>Journal of Physiology (London)</source><volume>551</volume><fpage>139</fpage><lpage>153</lpage><pub-id pub-id-type="doi">10.1113/jphysiol.2003.044784</pub-id></element-citation></ref><ref id="bib18"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Honey</surname><given-names>CJ</given-names></name><name><surname>Thivierge</surname><given-names>JP</given-names></name><name><surname>Sporns</surname><given-names>O</given-names></name></person-group><year>2010</year><article-title>Can structure predict function in the human brain?</article-title><source>NeuroImage</source><volume>52</volume><fpage>766</fpage><lpage>776</lpage><pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.01.071</pub-id></element-citation></ref><ref id="bib19"><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Joshua</surname><given-names>M</given-names></name><name><surname>Medina</surname><given-names>JF</given-names></name><name><surname>Lisberger</surname><given-names>SG</given-names></name></person-group><year>2013</year><article-title>Diversity of neural responses in the brainstem during smooth pursuit eye movements constrains the circuit mechanisms of neural integration</article-title><source>Journal of Neuroscience</source><volume>33</volume><fpage>6633</fpage><lpage>6647</lpage><pub-id pub-id-type="doi">10.1523/JNEUROSCI.3732-12.2013</pub-id></element-citation></ref><ref id="bib20"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Komiyama</surname><given-names>T</given-names></name><name><surname>Sato</surname><given-names>TR</given-names></name><name><surname>O’Connor</surname><given-names>DH</given-names></name><name><surname>Zhang</surname><given-names>YX</given-names></name><name><surname>Huber</surname><given-names>D</given-names></name><name><surname>Hooks</surname><given-names>BM</given-names></name><name><surname>Gabitto</surname><given-names>M</given-names></name><name><surname>Svoboda</surname><given-names>K</given-names></name></person-group><year>2010</year><article-title>Learning-related fine-scale specificity imaged in motor cortex circuits of behaving mice</article-title><source>Nature</source><volume>464</volume><fpage>1182</fpage><lpage>1186</lpage><pub-id pub-id-type="doi">10.1038/nature08897</pub-id></element-citation></ref><ref id="bib21"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname><given-names>PA</given-names></name></person-group><year>1985</year><article-title>Disordered electronic systems</article-title><source>Reviews of Modern Physics</source><volume>57</volume><fpage>287</fpage><lpage>337</lpage><pub-id 
pub-id-type="doi">10.1103/RevModPhys.57.287</pub-id></element-citation></ref><ref id="bib22"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Levy</surname><given-names>RB</given-names></name><name><surname>Reyes</surname><given-names>AD</given-names></name></person-group><year>2012</year><article-title>Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex</article-title><source>Journal of Neuroscience</source><volume>32</volume><fpage>5609</fpage><lpage>5619</lpage><pub-id pub-id-type="doi">10.1523/JNEUROSCI.5158-11.2012</pub-id></element-citation></ref><ref id="bib23"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Markov</surname><given-names>NT</given-names></name><name><surname>Ercsey-Ravasz</surname><given-names>MM</given-names></name><name><surname>Ribeiro Gomes</surname><given-names>AR</given-names></name><name><surname>Lamy</surname><given-names>C</given-names></name><name><surname>Magrou</surname><given-names>L</given-names></name><name><surname>Vezoli</surname><given-names>J</given-names></name><name><surname>Misery</surname><given-names>P</given-names></name><name><surname>Falchier</surname><given-names>A</given-names></name><name><surname>Quilodran</surname><given-names>R</given-names></name><name><surname>Gariel</surname><given-names>MA</given-names></name><name><surname>Sallet</surname><given-names>J</given-names></name><name><surname>Gamanut</surname><given-names>R</given-names></name><name><surname>Huissoud</surname><given-names>C</given-names></name><name><surname>Clavagnier</surname><given-names>S</given-names></name><name><surname>Giroud</surname><given-names>P</given-names></name><name><surname>Sappey-Marinier</surname><given-names>D</given-names></name><name><surname>Barone</surname><given-names>P</given-names></name><name><surname>Dehay</surname><given-names>C</given-names></name><name><surname>
Toroczkai</surname><given-names>Z</given-names></name><name><surname>Knoblauch</surname><given-names>K</given-names></name><name><surname>Van Essen</surname><given-names>DC</given-names></name><name><surname>Kennedy</surname><given-names>H</given-names></name></person-group><year>2014</year><article-title>A weighted and directed interareal connectivity matrix for macaque cerebral cortex</article-title><source>Cerebral Cortex</source><volume>24</volume><fpage>17</fpage><lpage>36</lpage><pub-id pub-id-type="doi">10.1093/cercor/bhs270</pub-id></element-citation></ref><ref id="bib24"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Markov</surname><given-names>NT</given-names></name><name><surname>Misery</surname><given-names>P</given-names></name><name><surname>Falchier</surname><given-names>A</given-names></name><name><surname>Lamy</surname><given-names>C</given-names></name><name><surname>Vezoli</surname><given-names>J</given-names></name><name><surname>Quilodran</surname><given-names>R</given-names></name><name><surname>Gariel</surname><given-names>MA</given-names></name><name><surname>Giroud</surname><given-names>P</given-names></name><name><surname>Ercsey-Ravasz</surname><given-names>M</given-names></name><name><surname>Pilaz</surname><given-names>LJ</given-names></name><name><surname>Huissoud</surname><given-names>C</given-names></name><name><surname>Barone</surname><given-names>P</given-names></name><name><surname>Dehay</surname><given-names>C</given-names></name><name><surname>Toroczkai</surname><given-names>Z</given-names></name><name><surname>Van Essen</surname><given-names>DC</given-names></name><name><surname>Kennedy</surname><given-names>H</given-names></name><name><surname>Knoblauch</surname><given-names>K</given-names></name></person-group><year>2011</year><article-title>Weight consistency specifies regularities of macaque cortical networks</article-title><source>Cerebral 
Cortex</source><volume>21</volume><fpage>1254</fpage><lpage>1272</lpage><pub-id pub-id-type="doi">10.1093/cercor/bhq201</pub-id></element-citation></ref><ref id="bib25"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Miri</surname><given-names>A</given-names></name><name><surname>Daie</surname><given-names>K</given-names></name><name><surname>Arrenberg</surname><given-names>AB</given-names></name><name><surname>Baier</surname><given-names>H</given-names></name><name><surname>Aksay</surname><given-names>E</given-names></name><name><surname>Tank</surname><given-names>DW</given-names></name></person-group><year>2011</year><article-title>Spatial gradients and multidimensional dynamics in a neural integrator circuit</article-title><source>Nature Neuroscience</source><volume>14</volume><fpage>1150</fpage><lpage>1159</lpage><pub-id pub-id-type="doi">10.1038/nn.2888</pub-id></element-citation></ref><ref id="bib26"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Murphy</surname><given-names>BK</given-names></name><name><surname>Miller</surname><given-names>KD</given-names></name></person-group><year>2009</year><article-title>Balanced amplification: a new mechanism of selective amplification of neural activity patterns</article-title><source>Neuron</source><volume>61</volume><fpage>635</fpage><lpage>648</lpage><pub-id pub-id-type="doi">10.1016/j.neuron.2009.02.005</pub-id></element-citation></ref><ref id="bib27"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Newman</surname><given-names>MEJ</given-names></name></person-group><year>2003</year><article-title>The structure and function of complex networks</article-title><source>SIAM Review</source><volume>45</volume><fpage>167</fpage><lpage>256</lpage><pub-id pub-id-type="doi">10.1137/S003614450342480</pub-id></element-citation></ref><ref id="bib28"><element-citation 
publication-type="book"><person-group person-group-type="author"><name><surname>Newman</surname><given-names>MEJ</given-names></name></person-group><year>2010</year><source>Networks: an introduction</source><publisher-loc>New York</publisher-loc><publisher-name>Oxford University Press</publisher-name></element-citation></ref><ref id="bib29"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Olver</surname><given-names>FWJ</given-names></name></person-group><year>2010</year><article-title>Chapter 9: Airy and related functions</article-title><person-group person-group-type="editor"><name><surname>Olver</surname><given-names>FWJ</given-names></name><name><surname>Lozier</surname><given-names>DW</given-names></name><name><surname>Boisvert</surname><given-names>RF</given-names></name><name><surname>Clark</surname><given-names>CW</given-names></name></person-group><source>NIST handbook of mathematical functions</source><publisher-loc>New York</publisher-loc><publisher-name>Cambridge University Press</publisher-name><fpage>193</fpage><lpage>214</lpage></element-citation></ref><ref id="bib30"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Perin</surname><given-names>R</given-names></name><name><surname>Berger</surname><given-names>TK</given-names></name><name><surname>Markram</surname><given-names>H</given-names></name></person-group><year>2011</year><article-title>A synaptic organizing principle for cortical neuronal groups</article-title><source>Proceedings of the National Academy of Sciences of the United States of America</source><volume>108</volume><fpage>5419</fpage><lpage>5424</lpage><pub-id pub-id-type="doi">10.1073/pnas.1016051108</pub-id></element-citation></ref><ref id="bib31"><element-citation publication-type="journal"><person-group
person-group-type="author"><name><surname>Rajan</surname><given-names>K</given-names></name><name><surname>Abbott</surname><given-names>LF</given-names></name></person-group><year>2006</year><article-title>Eigenvalue spectra of random matrices for neural networks</article-title><source>Physical Review Letters</source><volume>97</volume><fpage>188104</fpage><pub-id pub-id-type="doi">10.1103/PhysRevLett.97.188104</pub-id></element-citation></ref><ref id="bib32"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Raser</surname><given-names>JM</given-names></name><name><surname>O’Shea</surname><given-names>EK</given-names></name></person-group><year>2005</year><article-title>Noise in gene expression: origins, consequences, and control</article-title><source>Science</source><volume>309</volume><fpage>2010</fpage><lpage>2013</lpage><pub-id pub-id-type="doi">10.1126/science.1105891</pub-id></element-citation></ref><ref id="bib33"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Romo</surname><given-names>R</given-names></name><name><surname>Brody</surname><given-names>CD</given-names></name><name><surname>Hernandez</surname><given-names>A</given-names></name><name><surname>Lemus</surname><given-names>L</given-names></name></person-group><year>1999</year><article-title>Neuronal correlates of parametric working memory in the prefrontal cortex</article-title><source>Nature</source><volume>399</volume><fpage>470</fpage><lpage>473</lpage><pub-id pub-id-type="doi">10.1038/20939</pub-id></element-citation></ref><ref id="bib34"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Rugh</surname><given-names>WJ</given-names></name></person-group><year>1995</year><source>Linear system theory</source><publisher-loc>New Jersey</publisher-loc><publisher-name>Prentice Hall</publisher-name><edition>2nd 
edition</edition></element-citation></ref><ref id="bib35"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shriki</surname><given-names>O</given-names></name><name><surname>Hansel</surname><given-names>D</given-names></name><name><surname>Sompolinsky</surname><given-names>H</given-names></name></person-group><year>2003</year><article-title>Rate models for conductance-based cortical neuronal networks</article-title><source>Neural Computation</source><volume>15</volume><fpage>1809</fpage><lpage>1841</lpage><pub-id pub-id-type="doi">10.1162/08997660360675053</pub-id></element-citation></ref><ref id="bib36"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Smith</surname><given-names>MA</given-names></name><name><surname>Kohn</surname><given-names>A</given-names></name></person-group><year>2008</year><article-title>Spatial and temporal scales of neuronal correlation in primary visual cortex</article-title><source>Journal of Neuroscience</source><volume>28</volume><fpage>12591</fpage><lpage>12603</lpage><pub-id pub-id-type="doi">10.1523/JNEUROSCI.2929-08.2008</pub-id></element-citation></ref><ref id="bib37"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sporns</surname><given-names>O</given-names></name></person-group><year>2011</year><article-title>The non-random brain: efficiency, economy, and complex dynamics</article-title><source>Frontiers in Computational Neuroscience</source><volume>5</volume><fpage>5</fpage><pub-id pub-id-type="doi">10.3389/fncom.2011.00005</pub-id></element-citation></ref><ref id="bib38"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Strogatz</surname><given-names>SH</given-names></name></person-group><year>1994</year><source>Nonlinear dynamics and chaos</source><publisher-loc>New York</publisher-loc><publisher-name>Perseus Books 
Publishing</publisher-name></element-citation></ref><ref id="bib39"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Strogatz</surname><given-names>SH</given-names></name></person-group><year>2001</year><article-title>Exploring complex networks</article-title><source>Nature</source><volume>410</volume><fpage>268</fpage><lpage>276</lpage><pub-id pub-id-type="doi">10.1038/35065725</pub-id></element-citation></ref><ref id="bib40"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tononi</surname><given-names>G</given-names></name><name><surname>Edelman</surname><given-names>GM</given-names></name></person-group><year>1998</year><article-title>Consciousness and complexity</article-title><source>Science</source><volume>282</volume><fpage>1846</fpage><lpage>1851</lpage><pub-id pub-id-type="doi">10.1126/science.282.5395.1846</pub-id></element-citation></ref><ref id="bib42"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Trefethen</surname><given-names>LN</given-names></name><name><surname>Chapman</surname><given-names>SJ</given-names></name></person-group><year>2004</year><article-title>Wave packet pseudomodes of twisted Toeplitz matrices</article-title><source>Communications on Pure and Applied Mathematics</source><volume>57</volume><fpage>1233</fpage><lpage>1264</lpage><pub-id pub-id-type="doi">10.1002/cpa.20034</pub-id></element-citation></ref><ref id="bib41"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Trefethen</surname><given-names>LN</given-names></name><name><surname>Embree</surname><given-names>M</given-names></name></person-group><year>2005</year><source>Spectra and Pseudospectra: the behavior of Nonnormal matrices and Operators</source><publisher-loc>Princeton</publisher-loc><publisher-name>Princeton University 
Press</publisher-name></element-citation></ref><ref id="bib43"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Vogels</surname><given-names>TP</given-names></name><name><surname>Rajan</surname><given-names>K</given-names></name><name><surname>Abbott</surname><given-names>LF</given-names></name></person-group><year>2005</year><article-title>Neural network dynamics</article-title><source>Annual Review of Neuroscience</source><volume>28</volume><fpage>357</fpage><lpage>376</lpage><pub-id pub-id-type="doi">10.1146/annurev.neuro.28.061604.135637</pub-id></element-citation></ref><ref id="bib44"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>X-J</given-names></name></person-group><year>1998</year><article-title>Calcium coding and adaptive temporal computation in cortical pyramidal neurons</article-title><source>Journal of Neurophysiology</source><volume>79</volume><fpage>1549</fpage><lpage>1566</lpage></element-citation></ref><ref id="bib45"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>X-J</given-names></name></person-group><year>2001</year><article-title>Synaptic reverberation underlying mnemonic persistent activity</article-title><source>Trends in Neurosciences</source><volume>24</volume><fpage>455</fpage><lpage>463</lpage><pub-id pub-id-type="doi">10.1016/S0166-2236(00)01868-3</pub-id></element-citation></ref><ref id="bib46"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>X-J</given-names></name></person-group><year>2008</year><article-title>Decision making in recurrent neuronal circuits</article-title><source>Neuron</source><volume>60</volume><fpage>215</fpage><lpage>234</lpage><pub-id pub-id-type="doi">10.1016/j.neuron.2008.09.034</pub-id></element-citation></ref><ref 
id="bib47"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>X-J</given-names></name></person-group><year>2010</year><article-title>Prefrontal cortex</article-title><person-group person-group-type="editor"><name><surname>Shepherd</surname><given-names>GM</given-names></name><name><surname>Grillner</surname><given-names>S</given-names></name></person-group><source>Handbook of brain microcircuits</source><publisher-loc>New York</publisher-loc><publisher-name>Oxford University Press</publisher-name><fpage>46</fpage><lpage>56</lpage></element-citation></ref><ref id="bib48"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Watts</surname><given-names>DJ</given-names></name><name><surname>Strogatz</surname><given-names>SH</given-names></name></person-group><year>1998</year><article-title>Collective dynamics of ‘small-world’ networks</article-title><source>Nature</source><volume>393</volume><fpage>440</fpage><lpage>442</lpage><pub-id pub-id-type="doi">10.1038/30918</pub-id></element-citation></ref><ref id="bib49"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wolpert</surname><given-names>L</given-names></name></person-group><year>2011</year><article-title>Positional information and patterning revisited</article-title><source>Journal of Theoretical Biology</source><volume>269</volume><fpage>359</fpage><lpage>365</lpage><pub-id pub-id-type="doi">10.1016/j.jtbi.2010.10.034</pub-id></element-citation></ref></ref-list></back><sub-article article-type="article-commentary" id="SA1"><front-stub><article-id pub-id-type="doi">10.7554/eLife.01239.012</article-id><title-group><article-title>Decision letter</article-title></title-group><contrib-group content-type="section"><contrib contrib-type="editor"><name><surname>Tsodyks</surname><given-names>Misha</given-names></name><role>Reviewing 
editor</role><aff><institution>Weizmann Institute of Science</institution>, <country>Israel</country></aff></contrib></contrib-group></front-stub><body><boxed-text><p>eLife posts the editorial decision letter and author response on a selection of the published articles (subject to the approval of the authors). An edited version of the letter sent to the authors after peer review is shown, indicating the substantive concerns or comments; minor concerns are not usually shown. Reviewers have the opportunity to discuss the decision before the letter is sent (see <ext-link ext-link-type="uri" xlink:href="http://elife.elifesciences.org/review-process">review process</ext-link>). Similarly, the author response typically shows only responses to the major concerns raised by the reviewers.</p></boxed-text><p>Thank you for sending your work entitled “A diversity of localized timescales in network activity” for consideration at <italic>eLife</italic>. Your article has been favorably evaluated by a Senior editor and 2 reviewers, one of whom is a member of our Board of Reviewing Editors.</p><p>The Reviewing editor and the other reviewers discussed their comments before we reached this decision, and the Reviewing editor has assembled the following comments to help you prepare a revised submission.</p><p>This is a very interesting article from a theoretical perspective. It shows how a combination of non-Hermiticity and broken translation invariance can lead generically to surprisingly localized eigenfunctions. Biological implications of this result are that neuronal networks in the brain could have localized modes of activation characterized by different time scales.</p><p>The main issue that should be addressed in the revision is that, while on one hand, the analytical method for estimation of eigenvectors is the major contribution, the method is presented in an incomprehensible manner. 
As a result, one cannot appreciate why it is advantageous over simply diagonalizing the connectivity matrix numerically (this issue is not discussed by the authors). Here is the list of points that have to be clarified.</p><p>1) <xref ref-type="disp-formula" rid="equ8">Equation (7)</xref> - it has to be solved for j<sub>0</sub> and w to find the shape of the corresponding eigenvector. How do we know the eigenvalues of the matrix? This is never explained. Moreover, in the first example considered, of <xref ref-type="disp-formula" rid="equ14">Equation (13)</xref>, the authors simply say that they ‘match the eigenvalues to j<sub>0</sub> and w, to find that w=pi’. Is there an analytical expression for the eigenvalues of the matrix of <xref ref-type="disp-formula" rid="equ14">Eq. (13)</xref>? If yes, the authors should provide it. It would make this example very special though. What if there is no such expression, would they have to diagonalize the matrix numerically? This would also provide the eigenvectors, so the whole procedure would seem to be redundant.</p><p>2) In the next example, of <xref ref-type="disp-formula" rid="equ21">Equation (18)</xref>, apparently there is no analytical expression for the eigenvalues, and the final solution for the width of the eigenvector, <xref ref-type="disp-formula" rid="equ25">Equation (21)</xref> still depends on j<sub>0</sub> and w. The authors don’t explain how they find those.</p><p>3) In the presentation of the second-order expansion approach, the authors seem to ignore that both first-order and second-order corrections to eigenvalues have to vanish. They only consider the second order.
Why does it make sense to ignore the first-order correction?</p></body></sub-article><sub-article article-type="reply" id="SA2"><front-stub><article-id pub-id-type="doi">10.7554/eLife.01239.013</article-id><title-group><article-title>Author response</article-title></title-group></front-stub><body><p><italic>1)</italic> <xref ref-type="disp-formula" rid="equ8"><italic>Equation (7)</italic></xref> <italic>- it has to be solved for j<sub>0</sub> and w to find the shape of the corresponding eigenvector. How do we know the eigenvalues of the matrix? This is never explained. Moreover, in the first example considered, of</italic> <xref ref-type="disp-formula" rid="equ14"><italic>Equation (13)</italic></xref><italic>, the authors simply say that they ‘match the eigenvalues to j<sub>0</sub> and w, to find that w=pi’. Is there an analytical expression for the eigenvalues of the matrix of</italic> <xref ref-type="disp-formula" rid="equ14"><italic>Equation (13)</italic></xref><italic>? If yes, the authors should provide it. It would make this example very special though. What if there is no such expression, would they have to diagonalize the matrix numerically? This would also provide the eigenvectors, so the whole procedure would seem to be redundant.</italic> And</p><p><italic>2) In the next example, of</italic> <xref ref-type="disp-formula" rid="equ21"><italic>Equation (18)</italic></xref><italic>, apparently there is no analytical expression for the eigenvalues, and the final solution for the width of the eigenvector,</italic> <xref ref-type="disp-formula" rid="equ25"><italic>Equation (21)</italic></xref> <italic>still depends on j<sub>0</sub> and w. The authors don’t explain how they find those</italic>.</p><p>We thank the reviewers for this point: the discussion of the relationship to the eigenvalues was unclear and has now been clarified. We start by stressing that the benefit of our approach is not primarily computational.
The reviewers are right that, in general, our method doesn't provide a way to analytically compute the eigenvalues and, given that we consider a large class of matrices, it would be surprising if we could. Instead, our approach yields theoretical insight into the conditions that allow for eigenvector localization and into how the shape of localized eigenvectors depends on network parameters.</p><p>In Supplementary file 1 (mathematical appendix), we have added an extensive discussion of what the theory yields and why it is useful. We have also clarified this in the main text and, to avoid any confusion, highlighted that our theory does not in general predict the eigenvalues of an arbitrary matrix with local connectivity. Finally, we have added a figure (<xref ref-type="fig" rid="fig3s1">Figure 3–figure supplement 1</xref>) that demonstrates how the theory picks out a region of the complex plane within which localized eigenvectors lie. We summarize these points below.</p><p>Given a network specification (i.e., connectivity profile), our analysis reveals the functional form of the eigenvector and, to first order, localized eigenvectors are Gaussians. However, the parameters of this functional form (in this case the center and width) depend on the particular connectivity profile (known) and on the eigenvalues, which are unknown. In general, these eigenvalues must be separately calculated. Given a particular eigenvalue, the theory tells us whether the corresponding eigenvector is localized. In this case it also yields the shape along with an analytic formula for the dependence of the shape on network parameters; this formula can be used to understand how changing network parameters promotes or hinders localization.</p><p>Our theory also identifies a region of the complex plane within which the eigenvalues lie, and tells us which of these putative timescales will correspond to localized eigenvectors.
It tells us which nodes can host localized eigenvectors and how changing the parameters of the network changes the region of localized timescales. It also provides qualitative insight into factors (like translation-dependence and asymmetry) that promote localization.</p><p>In certain cases we can draw general conclusions about the shapes of all localized eigenvectors without computing the eigenvalues. For example, in the network of <xref ref-type="fig" rid="fig3">Figure 3</xref>, the eigenvector width is the same for all localized eigenvectors. This is a special example but not an unnatural one; a gradient of local properties is among the simplest deterministic ways to break translation-invariance. We also note that our theory allows us to translate constraints on the eigenvalue spectrum of the network (for example, low-rank or sharply-decaying connectivity, real eigenvalues, etc.) into constraints on the shape of eigenvectors.</p><p>We elaborate on these points in Section 2 of the mathematical appendix.</p><p><italic>3) In the presentation of the second-order expansion approach, the authors seem to ignore that both first-order and second-order corrections to eigenvalues have to vanish. They only consider the second order. Why does it make sense to ignore the first-order correction</italic>?</p><p>In all of our expansions, we require that the sum of the higher-order terms vanishes. For the second-order expansion, this means that the sum of the first- and second-order terms should vanish. Note that this does not necessarily mean that the terms vanish separately.</p></body></sub-article></article>