Networks of Izhikevich Neurons: Synchronization and Network Inference

Raul de Palma Aristides

IFT-T.001/2024

Aristides, Raul de Palma
A715n Networks of Izhikevich neurons: synchronization and network inference / Raul de Palma Aristides. – São Paulo, 2024
105 f.: il.
Tese (doutorado) – Universidade Estadual Paulista (Unesp), Instituto de Física Teórica (IFT), São Paulo
Orientador: Hilda Alicia Cerdeira
1. Sistemas não lineares. 2. Sincronização. 3. Análise de séries temporais. I. Título
Sistema de geração automática de fichas catalográficas da Unesp. Biblioteca do Instituto de Física Teórica (IFT), São Paulo. Dados fornecidos pelo autor(a).

NETWORKS OF IZHIKEVICH NEURONS: SYNCHRONIZATION AND NETWORK INFERENCE

Tese de Doutorado apresentada ao Instituto de Física Teórica do Câmpus de São Paulo, da Universidade Estadual Paulista “Júlio de Mesquita Filho”, como parte dos requisitos para obtenção do título de Doutor em Física, Especialidade Física Teórica.

Comissão Examinadora:
Profa. Dra. HILDA ALICIA CERDEIRA (Orientadora), Instituto de Física Teórica/UNESP
Profa. Dra. ANA AMADOR, Universidad de Buenos Aires & CONICET
Prof. Dr. ANTONIO MARCOS BATISTA, UEPG/Universidade Estadual de Ponta Grossa
Prof. Dr. ANTONIO CARLOS ROQUE, FFCLRP/Universidade de São Paulo
Prof. Dr. NICOLAS RUBIDO, School of Natural and Computing Sciences/University of Aberdeen

Conceito: Aprovado
São Paulo, 30 de janeiro de 2024.

This thesis has received financial support from CAPES (Brasil).

THESIS submitted to the Institute of Theoretical Physics, São Paulo State University, Brazil, in partial fulfillment of the requirements for the degree of Doctor of Science.

January 30th, 2024

Raul de Palma Aristides
Hilda Alicia Cerdeira (supervisor)

Examination Committee

Titulars:
Prof. Dr. Hilda Alicia Cerdeira (Supervisor), Instituto de Física Teórica, Universidade Estadual Paulista
Prof. Dr. Ana Amador, Departamento de Física, Universidad de Buenos Aires & CONICET
Prof. Dr. Antonio Marcos Batista, Departamento de Física, Universidade Estadual de Ponta Grossa
Prof. Dr. Antonio Carlos Roque, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo
Prof. Dr. Nicolas Rubido, School of Natural and Computing Sciences, University of Aberdeen

Substitutes:
Prof. Dr. Antonio Pons, Departament de Física, Universidad Politécnica de Cataluña · Barcelona Tech
Prof. Dr. Johann H. Martínez, Departamento de Análisis Matemático y Matemática Aplicada, Universidad Complutense de Madrid
Prof. Dr. Roberto Kraenkel, Instituto de Física Teórica, Universidade Estadual Paulista

Instituto de Física Teórica - UNESP
R. Dr. Bento Teobaldo Ferraz 271, Bloco II
01140-070 São Paulo, Brasil

“What doubt can there be that the world we know is the result of the reflection, in our brain, of the part of the cosmos within our sensory horizon? This reflection, joined with and contrasted against the images reflected in the brains of the other men who have lived and who live, is our knowledge of the world; it is our world.”
Pio Baroja

Acknowledgments

The first person to thank is my mother, Rosa Maria, for always supporting me in my academic career and setting a great example for me. Secondly, my Ph.D. advisor, Hilda Cerdeira, for accepting me as a student and always being kind and encouraging.
Although I feel like I did not answer the majority of Hilda’s questions, I am sure I have learned a lot by pursuing the answers. This thesis is the result of some of these attempts. I would also like to express my gratitude to Cristina Masoller, who hosted me during my secondment at the Universitat Politècnica de Catalunya in Terrassa and has always been very kind and helpful. Trying to keep up with Cristina’s sharp mind was not easy, but I have grown a lot as a researcher in the process. Another person who must be remembered is Giulio Tirabassi, whose patience and kindness made my learning journey in time series analysis much easier. I cannot count how many times I entered his office, eager to discuss a new result or a doubt, and he was always willing to help. Likewise, I am thankful to Antonio Pons, who has an endless enthusiasm for research. A special thanks to all my colleagues at IFT, UPC, and the friends that I have made during my Ph.D.: Antonio, Cassiano, Dennis, Eggon, Enzo, Isabela, Lucas, Luis, Maria, Madhu, Pol, Rafael, and Sophie. You guys made me feel at home even when I was far from it. I also extend my appreciation to my dear friends Aron, Bárbara, Enrique, Edson, and Vitor. Altogether, their company made this journey much easier. Similarly, my gratitude goes to my brothers Giancarllo and Vinícius. Last but not least, I am very thankful to my partner, Mônica, for her endless support and love. Finally, I would like to express my gratitude to the examination committee, especially Prof. Nicolas Rubido. I would also like to thank IFT for providing great physical infrastructure and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) for financing this work under Finance Code 001.

Abstract

In this thesis, we delve into two interconnected realms: the stability of synchronized states in networks of Izhikevich neurons and the challenge of network inference within complex systems. The motivation stems from the ubiquity of complex networks in nature, where synchronization often occurs, which presents an added intricacy when the underlying network structure is unknown. The analytical foundation of our investigation lies in the application of the Master Stability Function (MSF) formalism to study the stability of synchronized states in networks of Izhikevich neurons. Despite the Izhikevich model’s widespread use, this study represents the first application of the MSF to it, made possible by the use of the saltation matrix approach. Through this analytical framework, we explore totally synchronized states and cluster synchronized states in networks with electrical and chemical couplings. Notably, the study reveals the nuanced behavior of synchronization, with the presence of a riddled basin of attraction near synchronization thresholds. The MSF, while a valuable tool, is shown to be susceptible to discrepancies, emphasizing the need for cautious interpretation. Transitioning to the second part of the thesis, we address the problem of network inference using the Unscented Kalman filter (UKF). The UKF is first tested on an isolated Izhikevich neuron, demonstrating remarkable accuracy in inferring parameters even under the influence of varying input currents. Extending the application to networks, the UKF successfully infers both electrical and electrical-chemical coupling structures. In a novel approach, we employ the UKF for network inference based on event timing data.
Associating a phase with timing events, we assimilate this information into the Kuramoto model, revealing, in some cases, superior performance compared to mutual information and cross-correlation on synthetic data. However, computational costs make the approach impractical for large networks.

Keywords: Synchronization; Izhikevich Neuron; Master Stability Function; Kalman Filter; Complex Networks; Network Inference.
Knowledge Area: Physics; Non-linear Dynamics; Complex Networks; Time Series Analysis.

Resumo

Nesta tese, exploramos dois domínios interconectados: a estabilidade de estados sincronizados em redes de neurônios Izhikevich e o desafio da inferência de redes em sistemas complexos. A motivação advém da ubiquidade de redes complexas na natureza, onde a sincronização frequentemente ocorre, apresentando uma complexidade quando a estrutura subjacente da rede é desconhecida. A base analítica de nossa investigação reside na aplicação do formalismo da Função de Estabilidade Mestra (MSF) para estudar a estabilidade de estados sincronizados em redes de neurônios Izhikevich. Apesar do uso generalizado do modelo Izhikevich, este estudo representa a primeira aplicação da MSF ao modelo Izhikevich, tornada possível pelo uso da abordagem de matrizes de saltos. Através deste arcabouço analítico, exploramos estados totalmente sincronizados e estados sincronizados em aglomerados em redes com acoplamentos elétricos e químicos. Notavelmente, o estudo revela o comportamento sutil da sincronização, com a presença de uma bacia de atração riddled próxima aos limiares de sincronização. A MSF, embora uma ferramenta valiosa, mostra-se suscetível a discrepâncias, enfatizando a necessidade de uma interpretação cuidadosa. Transitando para a segunda parte da tese, abordamos o problema da inferência de redes usando o filtro de Kalman Unscented (UKF). O UKF é primeiro testado em um neurônio Izhikevich isolado, demonstrando uma precisão notável na inferência de parâmetros mesmo sob a influência de correntes de entrada variáveis. Estendendo a aplicação para redes, o UKF infere com sucesso as estruturas de acoplamento elétrico e elétrico-químico. Em uma abordagem inovadora, empregamos o UKF para inferência de redes com base em dados de temporização de eventos. Associando uma fase a eventos de temporização, assimilamos essa informação no modelo de Kuramoto, revelando, em alguns casos, desempenho superior em comparação com informação mútua e correlação cruzada em dados sintéticos. No entanto, os custos computacionais tornam a abordagem impraticável para grandes redes.

Palavras-chave: Sincronização; Neurônios de Izhikevich; Função de Estabilidade Mestra; Filtro de Kalman; Redes Complexas; Inferência de redes.
Áreas do Conhecimento: Física; Dinâmica Não-Linear; Redes Complexas; Análise de Séries Temporais.

Contents

Part I: Introduction
1 Introduction
  1.1 Izhikevich Model
  1.2 Synchronization
  1.3 Network inference
  1.4 Objectives
  1.5 Outline

Part II: Synchronization
2 Synchronization of Chaotic Networks
  2.1 Stability and Synchronization
  2.2 Master Stability Function
  2.3 MSF applied to Cluster Synchronization
    2.3.1 External Equitable Partitions
    2.3.2 Simultaneous Block Diagonalization
3 Master Stability Functions for Networks of Izhikevich neurons
  3.1 Introduction
  3.2 MSF applied to Izhikevich Model
    3.2.1 Electrical Coupling
    3.2.2 Chemical Coupling
    3.2.3 Chemical and Electrical Coupling
  3.3 Izhikevich Model: Partial Synchronization
    3.3.1 Electrical Coupling
    3.3.2 Chemical Coupling
    3.3.3 Electrical and Chemical Coupling
  3.4 Remarks

Part III: Network Inference
4 Kalman filters
  4.1 From Tracking to Network Inference
  4.2 The g-h Filter
  4.3 The Kalman Filter
  4.4 The Unscented Kalman Filter
  4.5 Estimating Parameters with the Kalman Filter
5 Parameter and Network Inference with the Unscented Kalman Filter
  5.1 Introduction
  5.2 Methods
    5.2.1 Model
    5.2.2 The Unscented Kalman Filter
    5.2.3 Implementation
  5.3 Results
    5.3.1 Estimation of the parameters of a single neuron
    5.3.2 Estimation of network connectivity
    5.3.3 Estimation of network connectivity in temporal networks
  5.4 Remarks
6 Inferring the Connectivity from Event Timing Analysis
  6.1 Introduction
  6.2 Models, Datasets and Methods
    6.2.1 Izhikevich model
    6.2.2 Chaotic electronic circuits
    6.2.3 Phase description
    6.2.4 UKF assimilation to the Kuramoto model
    6.2.5 Network inference
    6.2.6 Performance quantification
  6.3 Results
    6.3.1 Izhikevich neurons
    6.3.2 Chaotic electronic circuits
  6.4 Remarks

Part IV: Conclusion
7 Conclusion and final remarks

Part V: Supplementary Materials
A Supplementary Material to Chapter 3
  A.1 Derivation of MSF for the chemical coupling
B Supplementary Material to Chapter 4
  B.1 Selection and properties of the sigma points
C Supplementary Material to Chapter 5
  C.1 Topologies and synchronization quantification
D Supplementary Material to Chapter 6
  D.1 Estimation of the average frequency ω̄
  D.2 Influence of the network size and link density
  D.3 Influence of the length of the time series
  D.4 Computational costs

Bibliography
Research Activities

List of Figures

1.1 Some of the neuronal activity types that the IM (1.1) is capable of displaying for different values of (a, b, c, d) in response to a step of current I(t). Adapted from [3].
1.2 Framework for network inference using information (dynamics) obtained through measurements. Adapted from [29].
2.1 Maximum Lyapunov exponent transverse to the synchronization manifold Λmax, as a function of the coupling strength σ and the corresponding eigenvalue γ for N = 2 Rössler oscillators with diffusive coupling.
2.2 Example of simultaneous block diagonalization for a network of N = 10 elements. In (a), each color represents a solution, so the cluster state has m = 6 clusters: a cluster with five nodes (magenta) and five other clusters with only one node. As a result, in (b), the block Q^m_s that represents the perturbations parallel to the synchronized manifold is 6 × 6. Adapted from [84].
3.1 Maximum Lyapunov exponent transversal to the synchronization manifold for electrically coupled Izhikevich neurons as a function of gγ.
3.2 Undirected network with N = 4 nodes, which is used to exemplify the stability of complete synchronization for identical Izhikevich neurons.
3.3 Synchronization error for a network of N = 4 Izhikevich neurons (3.2) with electrical coupling. It goes to zero at σ ≈ 0.133, in conformity with the MSF. The solid line represents the average of 100 realizations, the shaded area represents the first and third quartiles, and the grey dashes represent the maximum and minimum values.
3.4 Maximum Lyapunov exponent transversal to the synchronization manifold for N = 4 Izhikevich neurons with chemical coupling.
3.5 Synchronization error for a network of N = 4 Izhikevich neurons (3.2) with chemical coupling. As expected from the MSF analysis, the system does not synchronize. The solid line represents the average of 100 realizations, the shaded area represents the first and third quartiles, and the grey dashes represent the maximum and minimum values.
3.6 Maximum Lyapunov exponent transversal to the synchronization manifold for N = 4 Izhikevich neurons with electrical and chemical coupling.
3.7 Synchronization error for N = 4 with electrical and chemical coupling. The solid line represents the average of 200 realizations, the shaded area represents the first and third quartiles, and the grey dashes represent the maximum and minimum values.
3.8 Riddled basin of attraction in the (x1, x3) plane for g = 0.155, calculated with 10.24 × 10⁴ initial conditions in (a) x1, x2 ∈ [−1, 1] and (b) x1, x2 ∈ [−0.01, 0.01]. Solutions that reach a final state with a synchronization error Err > 0.05 are marked as white dots; otherwise, they are marked as black.
3.9 The fraction of pairs of initial conditions that converge to different basins of attraction, f, as a function of the distance between initial conditions, ε, Eq. (3.20). For g = 0.1555, the slope of the line that best fits the points yields an uncertainty exponent α = 0.005.
3.10 Network studied in this section, where the clusters are indicated with colors.
3.11 Maximum Lyapunov exponent transversal to the synchronization manifold for N = 8 Izhikevich neurons with electrical coupling under partial synchronization.
3.12 Synchronization error for the Izhikevich model with electrical coupling, partial synchronization.
3.13 Maximum Lyapunov exponent transversal to the synchronization manifold for N = 8 Izhikevich neurons with chemical coupling, partial synchronization.
3.14 Synchronization error for the Izhikevich model with chemical coupling, partial synchronization (N = 8).
3.15 Maximum Lyapunov exponent transversal to the synchronization manifold for N = 8 Izhikevich neurons with chemical and electric coupling, partial synchronization.
3.16 Synchronization error as a function of g for the Izhikevich model with electrical and chemical coupling. The CSS considered is the one of Fig. 3.10. The average over 150 runs, with nearly identical initial conditions, is depicted with its bounds.
3.17 Riddled basin of attraction in the (x5, x6) plane for g = 0.155. (a) is calculated with 10.24 × 10⁴ initial conditions in x5, x6 ∈ [−1, 1]. (b) same as (a) for g = 0.195.
3.18 Time series of the x-variable from nodes 5 (solid red) and 6 (dashed black) for different initial conditions, both with g = 0.155. (a) Incomplete synchronization Err =, (b).
3.19 The fraction of pairs of initial conditions that converge to different basins of attraction, f, as a function of the distance between initial conditions, ε, Eq. (3.20). For g = 0.1555, the slope of the line that best fits the points yields an uncertainty exponent α = 0.007.
4.1 Diagram of the g-h filter. The second subscript tells us the time of the measurement used in the estimation.
4.2 (a) Distribution of five thousand points according to (4.35). (b) Result of applying the nonlinear system (4.37) to the distribution. The real mean is represented by the blue square. The mean calculated from the linearized version (4.39) is shown in red.
4.3 (a) One thousand points drawn from the distribution (4.35) plotted in greys and five sigma points plotted in magenta. (b) The transformed distribution and the transformed sigma points. The mean of the new distribution is plotted in green.
5.1 (a) Time evolution of the variables x(t) (upper curve) and y(t) (lower curve) of an isolated Izhikevich neuron, simulated with Eq. (5.1) with parameters given in Table 5.1. (b) Response to an external modulated current I(t) = Is + α sin(ωt). In each panel, the current input is represented by the dashed line. The values of the parameters are given in Table 5.1.
5.2 Parameter estimation for a single neuron as a function of the simulation time. The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles, and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table 5.1.
5.3 Parameter estimation for a single neuron with a time-dependent external current, which is modeled as a constant input. The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles, and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table 5.1.
5.4 As Fig. 5.3, but explicitly modeling the input current as I = Is + α sin(ωt). The colored thick lines represent the median of the estimations computed from 100 runs. The shaded regions represent the first and third quartiles. The dashed lines mark the true values of the parameters. The inset in each subplot shows the distribution of the final estimations. The orange line is the median, the box marks the first and third quartiles, and the upper and lower whiskers of the bars represent the maximum and minimum values. The parameter values are given in Table 5.1.
5.5 Evolution of the distance D, which represents the Euclidean distance between the real coupling matrix Ge and the estimated one G^KF_e. N1, N2, N3, N4, and N5 represent different topologies, as shown in Fig. C.1. For all topologies D decreases sharply at first, then it saturates below 10⁻². The coupling strength is ge = 0.05 and the dynamics displayed by the networks are shown in Fig. C.1.
5.6 Evolution of the distance D for (a) electrical coupling and (b) chemical coupling, with coupling strengths ge = 0.1 and gc = 0.05. The distance between the original and the estimated adjacency matrices, D(G, G^KF), decreases sharply with simulation time, saturating below 10⁻² for all network topologies. The dynamics displayed by the networks are shown in Fig. C.2.
5.7 Evolution of the distance D between the adjacency matrix and the estimated one in the case of time-dependent coupling. D is depicted as a function of time, for different coupling strengths. The vertical dashed line represents the instant in which the coupling is turned on and the horizontal one marks the zero.
6.1 Spiking activity of a random network of 6 Izhikevich neurons and 8 links when the neurons are (a) uncoupled (K = 0), (b) weakly coupled (K = 0.01), (c), (d) strongly coupled (K = 0.04 and 0.1, respectively).
6.2 Network R7 of electronic circuits [143], which has N = 28 circuits (nodes) and 42 undirected links.
6.3 Oscillatory activity of 28 electronic circuits (in color code, experimentally recorded x variable) when the circuits are (a) uncoupled (K = 0), (b) weakly coupled (K = 1) and (c, d) strongly coupled (K = 10 and K = 50, respectively).
6.4 Example of a phase time series, ϕ(t), obtained from a voltage time series, x(t). (a) x(t). (b) Spikes occur when x reaches the reset condition (x > 30 mV). (c) For each spike time, tn, we define the phase as ϕ(tn) = 2nπ and interpolate linearly between consecutive spike times.
6.5 (a) Spike times of a random network of N = 6 Izhikevich neurons with 8 links and coupling K = 0.04. (b) Inferred coupling coefficients using the UKF. The blue lines represent the coefficients of existing links and the red lines, those of non-existing links.
6.6 Performance of the inference methods based on UKF, MI, and CC as a function of the coupling strength, K, for a network of Izhikevich neurons. Box-plot statistics are obtained by inferring 15 different network topologies of 6 nodes and 8 links. Black circles represent the average score for each coupling coefficient.
6.7 As Fig. 6.6 but the network is inferred from ⟨mij⟩s, ⟨MIij⟩s and ⟨CCij⟩s averaged over s = 36 simulations (a, b, c) or s = 99 simulations (d, e, f).
6.8 Examples of good (b) and poor (c) UKF reconstruction of the network shown in panel (a). In (b) and (c) the dashed lines indicate links that were not inferred (false negatives). The coupling strength is K = 0.08.
6.9 Performance of UKF, MI, and CC inference methods for the network of 28 chaotic electronic circuits. Box-plot statistics are obtained from 18 data segments and the black circles indicate the average score. The solid lines represent the AUC when the network is inferred from ⟨mij⟩s, ⟨MIij⟩s and ⟨CCij⟩s, where the average is over s = 18 non-overlapping segments. The shaded area in panel (a) represents the region where the oscillators have nearly identical frequencies and the best UKF performance is obtained by selecting ω̄ = 0.
C.1 The five network topologies used when considering only electrical coupling (Eqs. (3.1) and (6.3)). The links between nodes are represented by black lines. For each case, we show the network's dynamics with ge = 0.05. The insets show the Kuramoto order parameter R and the synchronization error Err.
C.2 The five network topologies used when considering both electrical and chemical coupling (Eqs. (3.1), (6.3), and (5.4)); the undirected links are electrical, and the directed links are chemical. The left column displays the network's dynamics with ge = 0.10 and gc = 0.05. The insets show the Kuramoto order parameter R and the synchronization error Err.
D.1 Comparison of the UKF performance with three methods for estimating the average frequency ω̄, in colored lines. The grey lines display the MI and CC performances. (a) For the electronic circuits, AUC is used to quantify performance. (b) For the Izhikevich neurons, F1 is used to quantify performance.
D.2 Distribution of average inter-event intervals of the electronic circuits (a) and average inter-spike intervals of a network of N = 6 Izhikevich neurons with 8 links (b) as a function of the coupling strength K. The horizontal dashed line marks the average value at zero coupling.
D.3 Performance of UKF, MI, and CC for Izhikevich neurons, considering different network sizes. The values of p and K are such that, for each network size N, the average Kuramoto order parameter is R̄ = 0.85.
D.4 Performance of UKF, MI, and CC algorithms for two link densities as a function of the coupling strength. Each curve represents the F1 score averaged over the different networks and different trials. (a) Results for link density p = 0.25 and 7 networks, (b) for p = 0.75 and 9 networks.
D.5 Performance of UKF, MI, and CC as a function of the coupling strength K for networks of N = 6 Izhikevich neurons, when the time series length is (a) L = 10000, (b) L = 20000.
D.6 Comparison of the computational time of UKF, MI and CC algorithms for three network sizes, N. For each N, 20 networks with link density p = 0.5 were used. Each boxplot represents the CPU time required to infer mij, MIij, and CCij.

List of Tables

5.1 Dimensionless parameters [132] used in the simulations of the IM.
5.2 Initial guesses of the parameters used by the UKF.
6.1 Dimensionless parameters used to simulate the Izhikevich model [88, 132].
6.2 Parameters used in the UKF algorithm [112, 137] for assimilating the phase time series of the neurons (of the chaotic circuits) to the Kuramoto model.

Part I
Introduction

1 Introduction

1.1 Izhikevich Model

Neurons are the building blocks of the brain. These structures can display a rich variety of dynamics depending on their type and the inputs that they receive from other neurons [1]. Although many features of neurons remain unclear, models are capable of reproducing their observed dynamics. The seminal work of Hodgkin and Huxley (HH) introduced a neuron model based on the giant axon of the squid [2]. As it is biophysically accurate, this model has become a standard for ordinary differential equation (ODE)-based neuron models.
Naturally, the collective dynamics of neurons are also of interest, and unfortunately, networks of complex models like the HH can be difficult to deal with, both analytically and computationally. With this in mind, E. M. Izhikevich proposed a neuron model that combines biological features of complex models, like the HH, with computational efficiency [3]. The equations of the model are the following:

\[
\begin{bmatrix} \dot{x}(t) \\ \dot{y}(t) \end{bmatrix}
=
\begin{bmatrix} 0.04x^2 + 5x + 140 - y + I(t) \\ a(bx - y) \end{bmatrix}
\tag{1.1}
\]

with the after-spike reset given by

\[
\text{if } x \geq 30\text{, then }
\begin{cases}
x \to c \\
y \to y + d
\end{cases}
\tag{1.2}
\]

where the variables x and y represent the membrane potential and a membrane recovery variable, respectively. The parameter a is associated with the time scale of y, while b describes the sensitivity of y to the subthreshold oscillations of x. The parameters c and d are related to the after-spike dynamics of the variables x and y. The parameter I takes into account synaptic currents or injected dc currents [3]. The different types of neuronal activity that the Izhikevich model (IM) can emulate are shown in Fig. 1.1.

Figure 1.1: Some of the neuronal activity types that the IM (1.1) is capable of displaying for different values of (a, b, c, d) in response to a step of current I(t). Adapted from [3].

Due to these characteristics, the IM has been implemented in the study of networks of spiking neurons, including large-scale models [4, 5]. Such models can be used to deepen our understanding of how the interplay between synaptic and neuronal processes produces collective behaviors. Of great interest is the emergence of rhythms and synchronization of neural activity, as it suggests that the high-dimensional dynamics of neuronal networks can collapse into low-dimensional oscillatory modes [6].

1.2 Synchronization

Synchronization occurs when a set of oscillators, interacting with each other (coupled) in some way, adjust their rhythms due to this interaction. As an example, we can mention the story of the Dutch researcher Christiaan Huygens, who observed and described the phenomenon of synchronization in the 17th century. What fascinated Huygens was the fact that the pendulum swings of two clocks fixed on the same wooden beam reached a state of complete agreement, so much so that their sounds were heard simultaneously. This state was disrupted by interference, but after a short period, it was restored. As he pointed out, the explanation for this phenomenon is that the movement of one clock's pendulum transmitted vibrations through the beam to which it was attached, and consequently to the case of the other clock, affecting the other pendulum's motion [7]. However, synchronization goes far beyond the clocks observed by Huygens, probably being the first nonlinear effect studied scientifically [8]. It occurs daily in our bodies and in nature. This is because biochemical and biophysical rhythms are omnipresent features in living beings, from rapid oscillations in cell membranes to the slow cycles of ovulation in mammals and circadian cycles [9]. Since Huygens' studies on synchronization, many other researchers have tackled this problem, and nowadays synchronization is an established field of research in the nonlinear sciences. On the analytical side, we highlight the work of Winfree, who saw synchronization as the result of a cooperative phenomenon. Drawing an analogy between synchronization and order from statistical physics, he described the transition to synchronization as a phase transition [10, 11].
Likewise, we highlight the work of Kuramoto, who is responsible for the first mathematically tractable model of synchronization [12, 13]. Finally, Pecora and Carroll's work summarizes the efforts to study the stability of synchronized states [14]. While the latter is commonly employed in studying synchronized states, in this thesis we present the first attempt to apply it to networks of Izhikevich neurons. This holds significance as synchronization plays a crucial role in neuroscience, with studies suggesting its importance for communication between cortical areas and information processing [15–18]. However, synchronization is also associated with various brain pathologies such as Parkinson's disease, Alzheimer's, schizophrenia, and epilepsy [19, 20]. The case of epilepsy may be related to the presence of domains of synchronized activity coexisting with domains of asynchronous brain activity, known as chimeras [21, 22]. Since the observed patterns of synchronization in nature are far from being fully understood, their study is of great relevance. But, to bridge the gap between synchronization in models and in nature, we unavoidably have to delve into the data from experiments and measurements, as time series analysis allows us to learn many characteristics of a system. We can identify, for example, whether the system in question is periodic, stochastic, or chaotic, and also the interaction and synchronization between elements [23–25]. More importantly, as we will study in this thesis, the analysis can also help us to better understand the underlying structure of these systems, i.e., the networks that encode the interactions between the elements of the system under study.

1.3 Network inference

Unraveling the structure of a network from observed data is a crucial open challenge in complexity science and, mathematically, an underdetermined problem [26], implying that a unique solution does not exist [27]. The significance of this problem is evident, given that many natural systems can be perceived as complex networks; consequently, the interactions dictated by these networks impact the data obtained from such systems [28]. Among the many applications of network inference [29], like epidemiology [30], ecology [31], and social networks [32], one of the most famous is computational biology, which elucidates interactions between genes and cellular processes [33]. In this framework, networks help describe processes, which are thought of as links of the networks, between proteins, enzymes, and genes, the nodes of the resulting networks. Climate networks, in turn, encode relationships within environmental subsystems with complex dynamics, like ecosystems and the atmosphere [34, 35]. The study of such networks can be helpful in understanding and mitigating the effects of climate change. In the field of neuroscience, one can explore networks that span various scales, from expansive cortical areas to intricate individual neurons [36, 37]. While significant advancements have been achieved in exploring the connection between brain topology and dynamics, researchers have yet to attain a comprehensive understanding [38, 39]. Thus, the study of such networks is important to better understand the functioning of the brain and its disorders.

Figure 1.2: Framework for network inference using information (dynamics) obtained through measurements. Adapted from [29].

As represented in Fig. 1.2, the real network is usually not observed.
This means the links are hidden variables: we can only perform measurements on the units of the system (in the best-case scenario, one can measure all the units), which give us some feature (variable) of the system. With this in hand, we try to perform the network inference. One of the challenges of network inference is that, when using time series data, usually only one or a few variables can be observed (often not simultaneously), and the observed variables can be recorded only during a limited time interval, with limited temporal resolution, and with often unavoidable observational noise [27, 29]. On top of that, we often rely solely on the data, without having a model for our system. Thus, deciding on a model that best encodes the network relationships of the observed variables becomes rather non-trivial. In such cases, researchers may end up using ad-hoc tuning of network model parameters [27, 28]. Naturally, it comes as no surprise that the list of methods proposed for network inference is as long as the list of its applications. A popular approach is to perform some analysis between the time series of each pair of nodes, like measuring the correlation between them; the result one obtains is a matrix representing the measurement between all pairs of nodes. Then, a matrix representing the connections can be obtained by thresholding this similarity matrix. This approach can be used with other types of similarity measures. Moreover, we also find dedicated methods based on machine learning algorithms [40–42], Bayesian inference [29, 43, 44], and model-based [45–47] or model-independent approaches [48–54], among other types [55–59]. Here, we will build upon the work presented in [60], which utilized the Kalman filter [61] as a data assimilation tool to infer the connectivity of a network of Rössler oscillators [62]. In this continuation, our emphasis will shift to the study of networks of Izhikevich neurons.

1.4 Objectives

As we have seen, synchronization is present in many systems in nature that can be modeled as networks. On top of that, information about the underlying network is often missing, which can limit the analysis of such systems, and inferring the network from data is a challenging task. With this in mind, and drawing inspiration from neuroscience, the goal of this thesis is to explore these two themes using the IM as our framework. Part of the motivation stems from the fact that the examination of synchronization in networks of IM has yet to incorporate the application of the Master Stability Function. Additionally, our objective extends to inferring the network structure when the only available data is the timing of events, such as inter-spike intervals of neurons.

1.5 Outline

This thesis is divided into five parts, each comprising specific chapters. The first part serves as an introduction to the subjects and the motivation behind the thesis, paving the way for the subsequent discussions. Transitioning from the introduction, Part II begins the core of the thesis, exploring synchronization in networks of Izhikevich neurons. Chapter 2 is dedicated to introducing the Master Stability Function (MSF), the tool we use for analyzing both totally synchronized and cluster-synchronized states in networks of Izhikevich neurons. In the third chapter, building upon the MSF reviewed in Chapter 2, we present the results of applying the MSF formalism to networks of Izhikevich neurons. Transitioning from the study of synchronization in Part II, Part III is dedicated to the problem of network inference and the utilization of Kalman filters to address this challenge. In Chapter 4, we shift our focus to network inference, introducing the concept of Bayesian filters and, specifically, the Kalman filter. Given our objective of studying nonlinear problems, we explore the Unscented Kalman filter (a version designed to handle nonlinearities) and discuss how these tools can be employed to infer the parameters of a given model. In Chapter 5, we present the results of applying the Unscented Kalman filter (UKF) to infer the parameters of a given model, using the time series data of an Izhikevich neuron. Additionally, we extend our exploration to network inference, investigating small networks of Izhikevich neurons with both electrical and chemical couplings. Closing Part III, Chapter 6 addresses the challenging problem of network inference when only event timing data, such as the spikes of a neuron, is available. In this chapter, we introduce a novel approach by combining the Kalman filter with the Kuramoto model to effectively fit the data.
We specifically focus on studying the stability of total and cluster synchronized states in small networks of electrically and chemically coupled neurons. Transitioning from the study of synchronization in Part II, Part III is dedicated to the problem of network inference and the utilization of Kalman filters to address this challenge. In Chapter 4, we shift our focus to network inference, introducing the concept of Bayesian filters and, specifically, the Kalman filter. Given our objective of studying nonlinear problems, we explore the Unscented Kalman filter —a version designed to handle nonlinearities— and discuss how these tools can be employed to infer the parameters of a given model. In Chapter 5, we present the results of applying the Unscented Kalman filter (UKF) to infer the parameters of a given model, using the time series data of an Izhikevich neuron. Additionally, we extend our exploration to network inference, investigating small networks of Izhikevich neurons with both electrical and chemical couplings. Closing Part III, Chapter 6 addresses the challenging problem of network inference when only event timing data, such as the spikes of a neuron, is available. In this chapter, we introduce a novel approach by combining the Kalman filter with the Kuramoto model to 1.5. Outline 7 effectively fit the data. The results of this method are presented using interspike interval data from Izhikevich neurons and experimental data from an electronic circuit. Part IV is dedicated to summarizing the conclusions and discussions arising from the results presented in the thesis. Finally, the thesis concludes with some perspectives, presenting future avenues of research. Following the conclusions chapter, four appendices are included. Appendix A complements the theoretical framework used in Chapter 3, while Appendix B covers the mathematical details of Chapter 4. Additionally, Appendices C and D contain supplementary material related to the findings in Chapters 5 and 6. These appendices were placed at the end to ensure the fluidity of the main text and the core chapters of the thesis. Part II Synchronization 8 2 Synchronization of Chaotic Networks 2.1 Stability and Synchronization As posed by Heagy et al., in the context of coupled dynamical systems, the classic synchronization problem can be written as “will my system ever synchronize and, if so, under what conditions?” [63]. During the last decades of the twentieth century, chaotic oscillators were at the center of such a problem. All this interest resulted in many works, two of them being especially important due to the use of variational equations to study the stability of synchronized solutions. Although the idea of analyzing how perturbations grow (or decay) with time was not new to nonlinear systems, Fujisaka and Yamada were the first to use Lyapunov exponents [64, 65] to tackle the problem of the stability of synchronized motion of coupled oscillators [66]. They considered coupling schemes such as global coupling and only nearest neighbor with and without periodic boundary conditions. Starting from a system with N nodes the form ẋi = F(xi) + σ ∑ j Aij(x j − xi) . (2.1) Where xi is them-dimensional vector of dynamical variables of the ith node, and assuming that the isolated dynamics of each node are described by the vector field F : R → Rm. Additionally, σ is the coupling strength and Aij is the adjacency matrix that encodes the network structure, which is defined as Aij = { 1, if there is an edge between nodes i and j 0, otherwise . 
(2.2) Using that the synchronized solution satisfies ∑ j Aij(x j − xi) = 0, which is now known as global (or total) synchronization, they derived a stability parameter λ′ such that for λ′ < 0 the synchronized state is stable, and unstable against infinitesimal perturbations if λ′ > 0. As we will see in the next sections, the stability of the synchronized solution will depend on the dynamics of the isolated node, the coupling strength, and the network architecture. 9 10 Chapter 2. Synchronization of Chaotic Networks Moreover, the stability parameter turns out to be the Maximum (or Largest) Lyapunov Exponent (MLE) of the dynamical system transverse to the synchronized solution. A similar approach was used by Shnöl a few years later, in the context of periodic oscillators [67]. Beyond the global synchronized state, one can also observe solutions where the system splits itself into groups displaying the same dynamics. The stability of this so-called cluster synchronization was first studied by Kaneko in the context of coupled maps [68] and later by Terry et al., analytically and experimentally with an array of lasers [69]. All these works were essential to the development of formalism known as the Master Stability Function (MSF), derived by Pecora and Carrol in [14]. The MSF generalizes the results of previous works, especially [66], and allows one to calculate the threshold for total synchronization, and the linear stability of such a state. Throughout the years, the MSF was extended to networks of non-identical oscillators, directed networks, networks with noise, and more recently to cluster synchronized states and multilayer networks [70–73]. 2.2 Master Stability Function In this section, we derive the MSF formalism following Pecora et al. [14], using the Rössler oscillator [62] in the hope that it serves as a foundational step. Given that the Rössler oscillator is a classical example of chaotic systems, we can focus on the intricacies of the MSF, leaving the application details to the Izhikevich neuron for the next Chapter. The MSF offers a systematic framework for analyzing synchronized states in networks of coupled oscillators. The revised formalism presented here serves as the foundation for this particular part of the thesis. Let there be N nodes, with xi being them-dimensional vector of dynamical variables of the ith node. The isolated dynamic of each node is described by the vector field F : R → Rm, i.e. ẋi = F(xi). H : Rm → Rm is a function of each node’s variables that is used together with the structure of the network to define the coupling. Now, let x = [x1,x2, ...,xN ]T , F(x) = [F(x1),F(x2), ...,F(xN )]T and H(x) = [H(x1),H(x2), ...,H(xN )]T . Then, in the case of diffusive coupling, the equations of motion are ẋ = F(x)− σL⊗H(x) (2.3) where σ is the coupling strength, ⊗ represents the Kronecker product1 [74], and L is the Laplacian matrix, which is given by L = {Lij}, Lij = (δij ∑ j Aij)−Aij (2.4) and δij is the Kronecker delta. The second term, δij ∑ j Aij is known as the degree matrix, a diagonal matrix with the number of links of each node in the main diagonal. 1The Kronecker product, denoted A ⊗ B, combines matrices A and B to create a larger matrix, where each element of A is multiplied by the entire matrix B. If A is am× n matrix and B is p× q, the result is a pm× qn matrix. 2.2. Master Stability Function 11 As an example, consider a network of N = 2 Rössler oscillators. 
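As a concrete illustration of Eqs. (2.2)-(2.4), the following minimal Python sketch (an editorial illustration, not part of the original computations; the 4-node ring used here is an arbitrary example network) builds the Laplacian from an adjacency matrix and verifies the zero row-sum property used in Eq. (2.8) below, as well as the non-negative spectrum exploited in the rest of this section:

```python
import numpy as np

# Adjacency matrix (2.2) of an arbitrary 4-node ring: 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# Laplacian (2.4): degree matrix minus adjacency matrix.
L = np.diag(A.sum(axis=1)) - A

# Zero row sums guarantee that the synchronized state is a solution.
assert np.allclose(L.sum(axis=1), 0)

# For undirected networks the spectrum is real and non-negative,
# with gamma_1 = 0 associated with the synchronization manifold.
print(np.sort(np.linalg.eigvalsh(L)))  # [0., 2., 2., 4.]
```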
As an example, consider a network of N = 2 Rössler oscillators. If they are coupled only by the first variable of each node, like in [14], we have

\[
\mathbf{F}(\mathbf{x}^i) =
\begin{bmatrix}
-y_i - z_i\\
x_i + a y_i\\
b + z_i(x_i - c)
\end{bmatrix},
\qquad
L =
\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix},
\qquad
H =
\begin{bmatrix}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{bmatrix},
\tag{2.5}
\]

which, using Eq. (2.3), leads to

\[
\begin{bmatrix}
\dot{x}_1\\ \dot{y}_1\\ \dot{z}_1\\ \dot{x}_2\\ \dot{y}_2\\ \dot{z}_2
\end{bmatrix}
=
\begin{bmatrix}
-y_1 - z_1\\
x_1 + a y_1\\
b + z_1(x_1 - c)\\
-y_2 - z_2\\
x_2 + a y_2\\
b + z_2(x_2 - c)
\end{bmatrix}
-
\sigma
\begin{bmatrix}
x_1 - x_2\\ 0\\ 0\\ x_2 - x_1\\ 0\\ 0
\end{bmatrix}.
\tag{2.6}
\]

The synchronization manifold xs is defined by the N − 1 constraints

\[
\mathbf{x}^1 = \mathbf{x}^2 = ... = \mathbf{x}^N = \mathbf{x}_s\,,
\tag{2.7}
\]

and as a consequence of the Laplacian matrix being a zero row-sum matrix, we have that

\[
L \otimes \mathbf{H}(\mathbf{x}_s) = 0\,,
\tag{2.8}
\]

which is the second term of Eq. (2.6) when x¹ = x². This yields the synchronized solution ẋs = F(xs). In order to study the stability of the synchronized solution, we can analyze how small perturbations to it evolve, i.e. δxⁱ = xⁱ − xs, where xs = [xs, ys, zs]ᵀ. Using linear stability analysis, we have for each node

\[
\dot{\mathbf{x}}^i \approx \mathbf{F}(\mathbf{x}_s) + D\mathbf{F}(\mathbf{x})\big|_{\mathbf{x}_s}\,\delta\mathbf{x}^i - \sigma \sum_j L_{ij}\, D\mathbf{H}(\mathbf{x})\big|_{\mathbf{x}_s}\,\delta\mathbf{x}^j\,,
\tag{2.9}
\]

hence

\[
\delta\dot{\mathbf{x}}^i = \dot{\mathbf{x}}^i - \dot{\mathbf{x}}_s = D\mathbf{F}(\mathbf{x})\big|_{\mathbf{x}_s}\,\delta\mathbf{x}^i - \sigma \sum_j L_{ij}\, D\mathbf{H}(\mathbf{x})\big|_{\mathbf{x}_s}\,\delta\mathbf{x}^j\,,
\tag{2.10}
\]

where D denotes the Jacobian matrix. Using the N × N identity matrix I_N, we can write an equation for δẋ = [δẋ¹, δẋ², ..., δẋᴺ]ᵀ:

\[
\delta\dot{\mathbf{x}} = [I_N \otimes D\mathbf{F}(\mathbf{x}_s) - \sigma L \otimes D\mathbf{H}(\mathbf{x}_s)]\,\delta\mathbf{x}\,.
\tag{2.11}
\]

Note that the coupling term in Eq. (2.11) becomes problematic, as different modes of perturbation, δx, end up being coupled. Therefore, we cannot distinguish perturbations parallel to the synchronized manifold from those orthogonal to it. Ideally, we want to access information about the evolution of perturbation modes in each possible direction separately. To achieve this, let us first turn our attention to the Laplacian matrix. In the general case of symmetric and undirected coupling, which we will consider in this thesis, L has important properties:

• a real, non-negative set of eigenvalues (γᵢ ≥ 0);
• an associated set of eigenvectors constituting an orthonormal basis of Rᴺ;
• a smallest eigenvalue γ₁ = 0, associated with the eigenvector

\[
V_1 = \pm \frac{1}{\sqrt{N}}\,(1, 1, 1, ..., 1)^T\,.
\tag{2.12}
\]

Importantly, V₁ is aligned with the synchronization manifold S, and all the other eigenvalues have associated eigenvectors spanning the phase space transverse to S [75]. Due to these properties, L has the diagonalized form

\[
V^{-1} L V = \Gamma = \mathrm{diag}(\gamma_1, \gamma_2, ..., \gamma_N)\,,
\tag{2.13}
\]

where V = [V₁, V₂, ..., V_N] is an orthonormal matrix whose columns are eigenvectors of L, and Γ is a diagonal matrix whose diagonal elements are the associated eigenvalues ordered by magnitude. We define a new set of variables η = (V⁻¹ ⊗ I_m)δx, so that Eq. (2.11) becomes

\[
(V \otimes I_m)\,\dot{\boldsymbol{\eta}} = [I_N \otimes D\mathbf{F}(\mathbf{x}_s) - \sigma L \otimes D\mathbf{H}(\mathbf{x}_s)]\,(V \otimes I_m)\,\boldsymbol{\eta}\,.
\tag{2.14}
\]

Solving for η,

\[
\dot{\boldsymbol{\eta}} = (V^{-1} \otimes I_m)[I_N \otimes D\mathbf{F}(\mathbf{x}_s) - \sigma L \otimes D\mathbf{H}(\mathbf{x}_s)](V \otimes I_m)\,\boldsymbol{\eta}\,,
\tag{2.15}
\]

which gives us

\[
\dot{\boldsymbol{\eta}} = [I_N \otimes D\mathbf{F}(\mathbf{x}_s) - \sigma \Gamma \otimes D\mathbf{H}(\mathbf{x}_s)]\,\boldsymbol{\eta}\,.
\tag{2.16}
\]

Equation (2.16) is the celebrated Master Stability Function, a block-diagonalized variational equation where each block has the form

\[
\dot{\boldsymbol{\eta}}_k = [D\mathbf{F}(\mathbf{x}_s) - \sigma \gamma_k\, D\mathbf{H}(\mathbf{x}_s)]\,\boldsymbol{\eta}_k\,.
\tag{2.17}
\]

Therefore, we have successfully decoupled the variational equation (2.11), with η₁ accounting for perturbations parallel to the motion along the synchronous manifold, and all other variables ηᵢ (for i > 1) representing transverse modes [14].

Let us return to our simple example of two Rössler oscillators. For DF and DH we have

\[
D\mathbf{F}(\mathbf{x})\big|_{\mathbf{x}_s} =
\begin{bmatrix}
0 & -1 & -1\\
1 & a & 0\\
z_s & 0 & x_s - c
\end{bmatrix},
\qquad
D\mathbf{H}(\mathbf{x})\big|_{\mathbf{x}_s} =
\begin{bmatrix}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{bmatrix}.
\tag{2.18}
\]

The Laplacian matrix L has eigenvalues γ₁ = 0 and γ₂ = 2, so that

\[
V = V^{-1} = \frac{1}{\sqrt{2}}
\begin{bmatrix}
1 & 1\\
1 & -1
\end{bmatrix},
\]

which gives us

\[
\boldsymbol{\eta} = \left(\frac{1}{\sqrt{2}}
\begin{bmatrix}
1 & 1\\
1 & -1
\end{bmatrix}
\otimes
I_3\right)\delta\mathbf{x}\,.
\tag{2.19}
\]

Thus, using that δx = [x − xs]ᵀ = [x¹ − xs, x² − xs]ᵀ,

\[
\boldsymbol{\eta} = \frac{1}{\sqrt{2}}
\begin{bmatrix}
x_1 + x_2 - 2x_s\\
y_1 + y_2 - 2y_s\\
z_1 + z_2 - 2z_s\\
x_1 - x_2\\
y_1 - y_2\\
z_1 - z_2
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{\eta}_1\\
\boldsymbol{\eta}_2
\end{bmatrix},
\tag{2.20}
\]

where η₁ and η₂ are three-dimensional vectors. Plugging the latter into Eq. (2.16), one obtains

\[
\begin{bmatrix}
\dot{\boldsymbol{\eta}}_1\\
\dot{\boldsymbol{\eta}}_2
\end{bmatrix}
=
\begin{bmatrix}
D\mathbf{F} & O\\
O & D\mathbf{F} - 2\sigma D\mathbf{H}
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{\eta}_1\\
\boldsymbol{\eta}_2
\end{bmatrix},
\tag{2.21}
\]

where O is an m × m null matrix. This yields

\[
\dot{\boldsymbol{\eta}}_1 =
\begin{bmatrix}
-\eta_{1y} - \eta_{1z}\\
\eta_{1x} + a\,\eta_{1y}\\
z_s\,\eta_{1x} + (x_s - c)\,\eta_{1z}
\end{bmatrix}
\tag{2.22}
\]

and

\[
\dot{\boldsymbol{\eta}}_2 =
\begin{bmatrix}
-2\sigma\eta_{2x} - \eta_{2y} - \eta_{2z}\\
\eta_{2x} + a\,\eta_{2y}\\
z_s\,\eta_{2x} + (x_s - c)\,\eta_{2z}
\end{bmatrix}.
\tag{2.23}
\]

Putting everything together, we can express η̇₁ and η̇₂ in a more compact form:

\[
\dot{\boldsymbol{\eta}}_1 = (D\mathbf{F})\,\boldsymbol{\eta}_1\,, \qquad
\dot{\boldsymbol{\eta}}_2 = (D\mathbf{F} - 2\sigma D\mathbf{H})\,\boldsymbol{\eta}_2\,.
\tag{2.24}
\]

Notice that the first equation gives us the evolution of perturbations to the uncoupled system, ẋs = F(xs); therefore, it allows us to obtain the Lyapunov exponents of the original system. Moreover, if σ = 0, the second equation is equivalent to the first one. By calculating the maximum Lyapunov exponent Λ of the second equation we can verify the stability of the synchronized solution. The result is shown in Fig. 2.1, where the maximum Lyapunov exponent was calculated using the method proposed in [76], with a = 0.2, b = 0.2, and c = 7.0 in the Rössler equations. In general, the synchronous solution is linearly stable if the set of (N − 1)m Lyapunov exponents that correspond to the phase space transverse to S are all negative. This reads

\[
\Lambda(\sigma\gamma_i) < 0\,.
\tag{2.25}
\]

So, for the Rössler oscillators considered here, this condition is satisfied for 0.14 < γ₂σ < 4.47, and since γ₂ = 2,

\[
0.07 < \sigma < 2.235\,.
\tag{2.26}
\]

According to Checco et al. [77], if we denote the synchronization region by Ω, and assuming that there are only three eigenvalues, there are three possible forms that it can take:

• Ω₁ = ∅
• Ω₂ = (σγ₂, +∞)
• Ω₃ = ⋃(σγ₂, σγ₃)

Figure 2.1: Maximum Lyapunov exponent transverse to the synchronization manifold Λmax, as a function of the coupling strength σ and the corresponding eigenvalue γ for N = 2 Rössler oscillators with diffusive coupling.

This means that the Lyapunov exponent transverse to the synchronized manifold can be positive for all values of the coupling, in which case the synchronized solution is never stable (the Ω₁ case). It can be positive up to a threshold and always negative beyond it, yielding a stable synchronized solution (the Ω₂ case). Lastly, it can be negative only within a range of values delimited by γ₂ and γ₃, turning positive outside this domain (the Ω₃ case). The latter is illustrated in Fig. 2.1.
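As a numerical companion to Fig. 2.1, the following Python sketch estimates the largest transverse Lyapunov exponent Λ(ν), with ν = σγ, by integrating the synchronized Rössler trajectory together with the transverse block of Eq. (2.17) and periodically renormalizing the perturbation, in the spirit of [76]. The explicit Euler scheme, the step sizes, and the integration times are simplifications chosen for brevity, not the settings used to produce Fig. 2.1:

```python
import numpy as np

a, b, c = 0.2, 0.2, 7.0  # Rössler parameters of Fig. 2.1

def F(s):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def DF(s):
    x, _, z = s
    return np.array([[0.0, -1.0, -1.0],
                     [1.0,    a,  0.0],
                     [  z,  0.0, x - c]])

DH = np.diag([1.0, 0.0, 0.0])  # coupling through the first variable only

def transverse_mle(nu, dt=1e-3, steps=3_000_000, renorm=100):
    """Estimate Lambda(nu) for the block eta' = (DF - nu*DH) eta, Eq. (2.17)."""
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(500_000):                       # discard the transient of x_s(t)
        s = s + dt * F(s)
    eta = np.array([1.0, 0.0, 0.0])
    acc = 0.0
    for n in range(steps):
        s = s + dt * F(s)                          # synchronized trajectory
        eta = eta + dt * (DF(s) - nu * DH) @ eta   # transverse perturbation
        if (n + 1) % renorm == 0:
            norm = np.linalg.norm(eta)
            acc += np.log(norm)                    # accumulate log-growth
            eta /= norm
    return acc / (steps * dt)

# Below, inside, and above the stable window 0.14 < nu < 4.47 of Eq. (2.26).
for nu in (0.05, 0.5, 5.0):
    print(f"nu = {nu}: Lambda = {transverse_mle(nu):+.3f}")
```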
As important as the phenomenon of total synchronization is that of cluster synchronization, where the network is divided into clusters with the same evolution. Fortunately, the MSF formalism has been extended to study the stability of such states, as we will see in the next section.

2.3 MSF applied to Cluster Synchronization

In this section, we will cover the application of the MSF to cluster-synchronized states. To do this, let us once again consider a network with N nodes, where xⁱ is the m-dimensional vector of dynamical variables of the ith node. The local dynamics of each node are described by the vector field F : Rᵐ → Rᵐ. H : Rᵐ → Rᵐ is a function of each node's variables that is used together with the structure of the network to define the coupling. Hence, the equations of motion are the same as Eq. (2.3):

\[
\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}) - \sigma L \otimes \mathbf{H}(\mathbf{x})\,.
\tag{2.27}
\]

Consider now a cluster synchronized state with M clusters, with 1 ≤ M ≤ N, where C_m is the set of nodes in the m-th cluster with synchronous motion s_m(t):

\[
\mathbf{x}^i - \mathbf{x}^j = 0 \ \text{ if } (i, j) \in C_m \ \rightarrow\ \mathbf{x}^i = \mathbf{x}^j = \mathbf{s}_m\,.
\tag{2.28}
\]

Thus, we have to look at small perturbations around the cluster synchronized state, i.e. δx = x − s_m. This leads to the following variational equations:

\[
\delta\dot{\mathbf{x}} = \Big[\sum_m E^{(m)} \otimes D\mathbf{F}(\mathbf{s}_m) - \sigma \sum_m E^{(m)} L \otimes D\mathbf{H}(\mathbf{s}_m)\Big]\,\delta\mathbf{x}\,,
\tag{2.29}
\]

where E⁽ᵐ⁾ is an N-dimensional diagonal matrix such that

\[
E^{(m)}_{ii} =
\begin{cases}
1, & \text{if } i \in C_m\\
0, & \text{otherwise.}
\end{cases}
\tag{2.30}
\]

Note that the case of global synchronization can be seen as a one-cluster state with E¹ = I_N. Our goal is to decouple the modes associated with the cluster synchronized manifold from the ones transverse to it, just as in the case of total synchronization. Naturally, one may proceed to diagonalize Eq. (2.29). The problem is that now we have 2m matrices² to be diagonalized, instead of only one matrix as in Eq. (2.11). As we know, a set of matrices cannot be simultaneously diagonalized unless they commute with each other; fortunately, we can find a transformation that leaves the set of 2m matrices simultaneously block-diagonalized. In the next subsections, we will discuss some of these methods.

²Since we end up with m E⁽ᵐ⁾ matrices plus m E⁽ᵐ⁾L matrices.

2.3.1 External Equitable Partitions

The first method we will cover comes from a concept of graph theory known as Equitable Partitions (EP) [78–80]. We start with an extension of this concept, known as External Equitable Partitions (EEP). A key component in this method is the definition of a quotient graph. Given a graph encoded by a Laplacian matrix L, we can partition the network into M clusters. Such a partition is encoded by an indicator matrix Z, such that Z_ij = 1 if node i is in cluster M_j and Z_ij = 0 otherwise [81]. Then, we can associate with the partition a graph whose number of nodes equals the number of clusters of the original graph. This new graph, known as the quotient graph, is encoded by a Laplacian matrix Lπ whose elements represent the number of connections between and within clusters. An example of this procedure (adapted from Fig. 1 of [81]) is the pair

\[
L =
\begin{bmatrix}
2 & -1 & 0 & -1 & 0 & 0 & 0 & 0\\
-1 & 2 & 0 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0\\
-1 & -1 & -1 & 5 & -1 & -1 & 0 & 0\\
0 & 0 & 0 & -1 & 3 & 0 & -1 & -1\\
0 & 0 & 0 & -1 & 0 & 3 & -1 & -1\\
0 & 0 & 0 & 0 & -1 & -1 & 3 & -1\\
0 & 0 & 0 & 0 & -1 & -1 & -1 & 3
\end{bmatrix}
\quad\text{and}\quad
L^\pi =
\begin{bmatrix}
1 & -1 & 0 & 0\\
-3 & 5 & -2 & 0\\
0 & -1 & 3 & -2\\
0 & 0 & -2 & 2
\end{bmatrix}.
\tag{2.31}
\]

The partition is considered an EEP only if the original and the quotient Laplacian matrices obey the following condition:

\[
L Z = Z L^\pi\,.
\tag{2.32}
\]

This constraint ensures that each node in cluster M_j has the same number of connections with nodes in cluster M_i, for all i, j with i ≠ j [82, 83]. Equation (2.32) yields powerful spectral properties: thanks to it, the Laplacian matrices of the original and quotient graphs share eigenvectors and eigenvalues. In fact, the properties of the EEP ensure that a subset of eigenvectors of the original Laplacian L, denoted as V_s, is related to the eigenvectors of Lπ by [81, 82]

\[
V_s = H V^\pi \in \mathbb{R}^{N \times M}\,,
\tag{2.33}
\]

where Vπ are the M eigenvectors of the quotient Laplacian. Thus, the eigenvectors (2.33) define the cluster synchronized manifold and the cluster synchronized state (CSS). The eigenvectors orthogonal to the CSS are denoted V⊥ ∈ R^{N×(N−M)}. These are the eigenmodes that can drive the system out of the CSS.
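The EEP machinery is easy to verify numerically. Reading the clusters of the example (2.31) as {1, 2, 3}, {4}, {5, 6}, {7, 8} (our interpretation of the quotient matrix, stated here as an assumption), the following sketch builds L and the indicator matrix Z, computes the quotient Laplacian as Lπ = (ZᵀZ)⁻¹ZᵀLZ, and checks the EEP condition (2.32) together with the shared eigenvalues:

```python
import numpy as np

# Undirected edges of the 8-node graph behind Eq. (2.31), nodes 0..7.
edges = [(0, 1), (0, 3), (1, 3), (2, 3), (3, 4), (3, 5),
         (4, 6), (4, 7), (5, 6), (5, 7), (6, 7)]
A = np.zeros((8, 8))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Indicator matrix Z of the partition {1,2,3}, {4}, {5,6}, {7,8}.
clusters = [[0, 1, 2], [3], [4, 5], [6, 7]]
Z = np.zeros((8, 4))
for m, nodes in enumerate(clusters):
    Z[nodes, m] = 1

# Quotient Laplacian: cluster-averaged restriction of L.
L_pi = np.linalg.inv(Z.T @ Z) @ Z.T @ L @ Z
print(L_pi)                           # recovers the 4x4 matrix of Eq. (2.31)
assert np.allclose(L @ Z, Z @ L_pi)   # the EEP condition (2.32)

# Every eigenvalue of the quotient also appears in the spectrum of L.
print(np.sort(np.linalg.eigvalsh(L)))
print(np.sort(np.linalg.eigvals(L_pi).real))  # real here, though L_pi is not symmetric
```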
One can obtain an orthogonal matrix of eigenvectors $V$ of $L$ such that

\[
V^T L V = \Gamma , \quad \text{where } \Gamma = \mathrm{diag}(\gamma_i) , \tag{2.34}
\]

and

\[
V = [V_s, V_\perp] = [H V^\pi, V_\perp] , \qquad V_\perp^T V_s = 0 . \tag{2.35}
\]

As in the case of global synchronization, we define a coordinate transformation $\boldsymbol{\eta} = (V^T \otimes I_m)\,\delta\mathbf{x}$, which leads to

\[
\dot{\boldsymbol{\eta}} = \left[ \sum_m V^T E^{(m)} V \otimes \mathrm{DF}(\mathbf{s}_m) - \sigma \sum_m V^T E^{(m)} V\, V^T L V \otimes \mathrm{DH}(\mathbf{s}_m) \right] \boldsymbol{\eta} . \tag{2.36}
\]

Defining the set of matrices

\[
V^T E^{(m)} V = Q^{(m)} , \tag{2.37}
\]

we can write $\dot{\boldsymbol{\eta}}$ as

\[
\dot{\boldsymbol{\eta}} = \left[ \sum_m Q^{(m)} \otimes \mathrm{DF}(\mathbf{s}_m) - \sigma \sum_m \Gamma Q^{(m)} \otimes \mathrm{DH}(\mathbf{s}_m) \right] \boldsymbol{\eta} . \tag{2.38}
\]

The structure of the matrices $Q^{(m)}$ is given by

\[
Q^{(m)} = V^T E^{(m)} V =
\begin{bmatrix}
Q^{(m)}_s & 0_{M \times (N-M)} \\
0_{(N-M) \times M} & Q^{(m)}_\perp
\end{bmatrix} . \tag{2.39}
\]

Hence, we have succeeded in decoupling the variational equations for the CS manifold from those transverse to it.

2.3.2 Simultaneous Block Diagonalization

The simultaneous block diagonalization (SBD) of a set of matrices is a standard problem in group theory and has applications in field theory and synchronization theory. Because of this, many algorithms have been developed to deal with it; here, we focus on the algorithm developed by Zhang et al. [84]. In contrast to the EEP formalism, their SBD framework does not require any information about the possible partitions of the network. Recall that we want to find an orthogonal transformation $\Theta$ such that the set of symmetric matrices $\mathcal{B} = \{B^{(1)}, \dots, B^{(M)}\}$ is simultaneously block-diagonalized by $\Theta$. The algorithm consists of the following steps (see the sketch after the figure below):

• Form $B = \sum_{m=1}^{M} \xi_m B^{(m)}$, where the $\xi_m$ are independent random coefficients, which can be drawn from a Gaussian distribution, and find its eigenvectors $\mathbf{d}_1, \dots, \mathbf{d}_N$. Set $D = [\mathbf{d}_1, \dots, \mathbf{d}_N]$.

• Generate $B' = \sum_{m=1}^{M} \xi'_m B^{(m)}$ for a new realization of the coefficients and compute $\tilde{B} = D^T B' D$. Mark the indices $i$ and $j$ as belonging to the same block if $\tilde{B}_{ij} \neq 0$.

• Set $\Theta = [\mathbf{d}_{c(1)}, \dots, \mathbf{d}_{c(N)}]$, where $c$ is a permutation of $1, \dots, N$ such that indices in the same block are sorted consecutively.

This algorithm is based on the work of Maehara and Murota [85], and it yields the finest SBD provided there are no degeneracies in the set of eigenvalues. In this context, "finest" means having the maximum number of common blocks, which is equivalent to the blocks' sizes being minimal [84].

Figure 2.2: Example of simultaneous block diagonalization for a network of $N = 10$ elements. In (a), each color represents a cluster, so the cluster state has $M = 6$ clusters: one cluster with five nodes (magenta) and five other clusters with a single node each. As a result, in (b), the block $Q^{(m)}_s$ that represents the perturbations parallel to the synchronized manifold is $6 \times 6$. Adapted from [84].
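A minimal sketch of the three steps above, under my own implementation choices (it is not the reference code of [84]), is given below; the common block structure is recovered as the connected components of the nonzero pattern of $\tilde{B}$.

```python
import numpy as np

def sbd(mats, tol=1e-10, seed=0):
    """Orthogonal Theta that simultaneously block-diagonalizes the
    symmetric matrices in `mats` (no eigenvalue degeneracies assumed)."""
    rng = np.random.default_rng(seed)
    n = mats[0].shape[0]
    # step 1: eigenvectors of a random linear combination
    B = sum(rng.standard_normal() * M for M in mats)
    _, D = np.linalg.eigh(B)
    # step 2: a second random combination reveals the common block structure
    Bt = D.T @ sum(rng.standard_normal() * M for M in mats) @ D
    adj = np.abs(Bt) > tol                # i, j in the same block if nonzero
    labels = -np.ones(n, dtype=int)
    for k in range(n):                    # connected components of the pattern
        if labels[k] < 0:
            stack, labels[k] = [k], k
            while stack:
                i = stack.pop()
                for j in np.flatnonzero(adj[i]):
                    if labels[j] < 0:
                        labels[j] = labels[k]
                        stack.append(j)
    # step 3: sort columns so that same-block indices are consecutive
    return D[:, np.argsort(labels, kind="stable")]
```

Applying `Theta.T @ M @ Theta` to every matrix in `mats` then exposes the common blocks, up to the numerical tolerance.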
As we saw in this chapter, the ceaseless interest in synchronization led to the development of the MSF formalism, which can be used to study the linear stability of synchronized states. In the following chapter, we will use this framework to study total and cluster synchronized states in networks of Izhikevich neurons.

3 Master Stability Functions for Networks of Izhikevich Neurons

This chapter presents the results of research conducted in collaboration with Hilda A. Cerdeira¹. The results presented here were summarized in a paper that is currently under review.

¹Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo, Brasil

3.1 Introduction

As discussed in Chapter 2, the mathematical description of synchronization is well established in nonlinear dynamics, and it can be applied to many natural phenomena, including neural activity. With this in mind, in this chapter we apply the MSF formalism to study the linear stability of synchronized states in networks of Izhikevich neurons, considering both global and cluster synchronization patterns. To calculate the associated Lyapunov exponents, we use the saltation matrix formalism, which takes into account the discontinuities of the Izhikevich model. First, we consider networks with electrical coupling, followed by the case with chemical coupling, and then with both coupling schemes. For each case, we study how the stability of the synchronized state depends on the coupling strength. We verify that the MSF formalism is effective for probing the synchronization threshold, except in the last case, where the outcome of the system depends on its initial conditions. This can happen when the system under study has a riddled basin of attraction [86]: the system may evolve towards a non-synchronized solution even from initial conditions in the neighborhood of the synchronized state [87].

3.2 MSF applied to Izhikevich Model

The Izhikevich model (IM) is a two-dimensional neuronal model, celebrated for its biophysical accuracy and its capacity to display many spiking patterns without losing computational efficiency. Letting $\mathbf{x} = [x, y]^T$, the model consists of the system of differential equations [3]

\[
\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}) =
\begin{bmatrix}
0.04x^2 + 5x + 140 - y + I \\
a(bx - y)
\end{bmatrix} \tag{3.1}
\]

with the after-spike reset given by

\[
\text{if } x \geq 30, \text{ then }
\begin{cases}
x \to c \\
y \to y + d .
\end{cases} \tag{3.2}
\]

For all calculations we used $a = 0.2$, $b = 2$, $c = -56$, $d = -16$ and $I = -99$; with this choice of parameters the IM displays chaotic dynamics, as discussed in Nobukawa et al. [88]. As in [3], the variables $x$ and $y$ and all the parameters are dimensionless.

The Izhikevich model can be seen as a hybrid model: a continuous-time evolution process interfaced with a logical process, in this case the resetting rule applied when $x$ crosses the threshold. These discontinuities interfere with the usual straightforward methods [76, 89, 90] for computing the Lyapunov exponents of such a system. Following Bizzarri et al. [91], we resort to saltation matrices, which allow us to calculate the Lyapunov exponents associated with the master stability functions of networks of Izhikevich neurons. In this framework, the saltation matrices act as correction factors at the instants of the discontinuities caused by the resetting function, yielding the correct Lyapunov exponents. Between spikes the dynamics of the Izhikevich model is smooth, and for a small perturbation $\delta\mathbf{x}$ we have

\[
\delta\dot{\mathbf{x}} = \mathrm{DF}\big|_{\mathbf{x}}\, \delta\mathbf{x} , \tag{3.3}
\]

where $\mathrm{DF}\big|_{\mathbf{x}}$ is the Jacobian matrix of $\mathbf{F}$ evaluated at $\mathbf{x}$. If a spike occurs, the after-spike reset (3.2) induces a discontinuity in the flow such that $\mathrm{DF}\big|_{\mathbf{x}^-} \neq \mathrm{DF}\big|_{\mathbf{x}^+}$, where $-$ and $+$ indicate before and after the reset. The perturbation after the reset event is then given by

\[
\delta\mathbf{x}^+ = S\, \delta\mathbf{x}^- , \tag{3.4}
\]

which allows us to continue the variational evolution as

\[
\delta\dot{\mathbf{x}}^+ = \mathrm{DF}\big|_{\mathbf{x}^+}\, \delta\mathbf{x}^+ , \tag{3.5}
\]

where $S$ is the saltation matrix, which for Izhikevich neurons is given by

\[
S =
\begin{bmatrix}
\dfrac{\dot{x}^+}{\dot{x}^-} & 0 \\[6pt]
\dfrac{\dot{y}^+ - \dot{y}^-}{\dot{x}^-} & 1
\end{bmatrix} . \tag{3.6}
\]

The complete derivation of this procedure can be found in [91, 92].
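As an illustration of how the saltation matrix enters the computation, the sketch below (my own, with illustrative initial conditions and horizon; not the thesis code) estimates the MLE of a single chaotic Izhikevich neuron by integrating the state together with a tangent vector and applying Eq. (3.6) at every reset.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d, I = 0.2, 2.0, -56.0, -16.0, -99.0   # chaotic regime of [88]

def F(x, y):
    return np.array([0.04 * x * x + 5 * x + 140 - y + I, a * (b * x - y)])

def rhs(t, u):                       # state (x, y) plus tangent vector
    x, y, ex, ey = u
    fx = F(x, y)
    return [fx[0], fx[1], (0.08 * x + 5) * ex - ey, a * b * ex - a * ey]

def spike(t, u):                     # reset event: x crosses 30 upwards
    return u[0] - 30.0
spike.terminal, spike.direction = True, 1

u, t, T, log_growth = np.array([-60.0, -120.0, 1e-7, 0.0]), 0.0, 200.0, 0.0
while t < T:
    sol = solve_ivp(rhs, (t, T), u, method="BDF", max_step=1e-3, events=spike)
    u, t = sol.y[:, -1], sol.t[-1]
    if sol.status == 1:              # spike: apply reset (3.2) and Eq. (3.6)
        fm = F(u[0], u[1])           # flow just before the reset
        u[0], u[1] = c, u[1] + d     # after-spike reset
        fp = F(u[0], u[1])           # flow just after the reset
        S = np.array([[fp[0] / fm[0], 0.0],
                      [(fp[1] - fm[1]) / fm[0], 1.0]])
        u[2:] = S @ u[2:]
    norm = np.linalg.norm(u[2:])     # Benettin renormalization per segment
    log_growth += np.log(norm / 1e-7)
    u[2:] *= 1e-7 / norm
print("MLE estimate:", log_growth / t)
```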
Throughout this study, we numerically solve Eq. (3.1) with a backward differentiation formula, which yields a variable integration step, with a maximal step of $dt = 0.001$. The initial conditions for the simulations, unless stated otherwise, were drawn from a normal distribution with standard deviation 1, centered at $(-56.25, -112.5)$, the fixed point of Eq. (3.1) in the absence of coupling.

3.2.1 Electrical Coupling

First, we study the effect of electrical coupling, which represents electrical synapses between nodes [93]. Consider a network with $N$ nodes, where the isolated dynamics of each node is given by Eq. (3.1). An adjacency matrix $A$ indicates whether node $i$ is diffusively connected, through its first variable $x$, to node $j$. In this case, the dynamics of each node is modified by

\[
W_i = -g_e \sum_{j=1}^{N} A_{ij}\,(x^j - x^i) , \tag{3.7}
\]

where $g_e$ is the coupling conductance (strength) and $A_{ij}$ is an element of the adjacency matrix. Note that in this case the coupling can be written in terms of the Laplacian matrix of the network, $L = D - A$, where $D$ is the degree matrix defined by $D_{ii} = \sum_j A_{ij}$; then $W_i = g_e \sum_{j=1}^{N} L_{ij}\, x^j$. If we define $\mathbf{x} = [\mathbf{x}^1, \dots, \mathbf{x}^N]^T$, where $\mathbf{x}^i = [x^i, y^i]^T$, and $\mathbf{F}(\mathbf{x}) = [\mathbf{F}(\mathbf{x}^1), \dots, \mathbf{F}(\mathbf{x}^N)]^T$, we can write the dynamical system of the network as

\[
\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}) - g_e (L \otimes G)\,\mathbf{x} , \tag{3.8}
\]

where $\otimes$ is the Kronecker product and

\[
G = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \tag{3.9}
\]

is the matrix encoding the coupling scheme. The synchronization manifold $\mathbf{x}_s$ is defined by the $N - 1$ constraints $\mathbf{x}^1 = \mathbf{x}^2 = \dots = \mathbf{x}^N$, and since the Laplacian matrix is a zero row-sum matrix, $(L \otimes G)\,\mathbf{x}_s = 0$, we have $\dot{\mathbf{x}}_s = \mathbf{F}(\mathbf{x}_s)$. The linear stability of the synchronized solution can be investigated with the MSF of Eq. (3.8),

\[
\dot{\boldsymbol{\eta}}_k = \left[ \mathrm{DF}\big|_{\mathbf{x}_s} - g_e \gamma_k\, \mathrm{DG}\big|_{\mathbf{x}_s} \right] \boldsymbol{\eta}_k , \tag{3.10}
\]

where $\gamma_k$ is the $k$-th eigenvalue of $L$. Thus, the analysis is decomposed into eigenmodes, one parallel to the synchronization manifold (the one associated with $\gamma_1 = 0$), the others transverse to it. We are interested in the Lyapunov exponents of the transverse modes, which tell us whether or not perturbations to the synchronized solution decay. We highlight that at the event of a spike the synchronized solution $\mathbf{x}_s$ encounters a discontinuity, and therefore at this point we need to apply the saltation matrix, Eq. (3.6).

Figure 3.1: Maximum Lyapunov exponent transverse to the synchronization manifold for electrically coupled Izhikevich neurons, as a function of $g_e\gamma$.

The maximum Lyapunov exponent (MLE) of Eq. (3.10) is presented in Fig. 3.1. The expanded inset shows that $\Lambda_{\max}$ becomes negative for $g_e\gamma \gtrsim 0.2670$. To check our results, we simulate a network of $N = 4$ nodes, as depicted in Fig. 3.2. In this case, the Laplacian matrix is

\[
L = \begin{bmatrix}
2 & -1 & 0 & -1 \\
-1 & 2 & -1 & 0 \\
0 & -1 & 2 & -1 \\
-1 & 0 & -1 & 2
\end{bmatrix} \tag{3.11}
\]

with eigenvalues 0, 2, 2 and 4. The smallest eigenvalue, $\gamma_1 = 0$, is associated with the synchronized manifold; the others are associated with transverse modes. Putting together that the MLE is negative for $g_e\gamma \gtrsim 0.2670$ and that 2 is the smallest eigenvalue associated with a transverse mode, we conclude that for $g_e \gtrsim 0.133$ the synchronized solution is stable.

Figure 3.2: Undirected network with $N = 4$ nodes, used to exemplify the stability of complete synchronization for identical Izhikevich neurons.

Fig. 3.3 depicts the synchronization error, computed as $\mathrm{Err} = \langle \sum_{j}^{N} |\bar{x} - x^j| \rangle_t$, where $\bar{x}$ is the average state of the nodes in the network and $\langle\cdot\rangle_t$ represents the time average. We notice good agreement with the result given by the MSF analysis.

Figure 3.3: Synchronization error for a network of $N = 4$ Izhikevich neurons (Fig. 3.2) with electrical coupling. It goes to zero at $g_e \approx 0.133$, in conformity with the MSF. The solid line represents the average of 100 realizations, the shaded area represents the first and third quartiles, and the grey dashes represent the maximum and minimum values.
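The threshold can be probed directly by simulation. The following sketch (a crude fixed-step Euler scheme rather than the BDF integrator used in the thesis; parameters and horizon are illustrative) estimates the synchronization error of the $N = 4$ network of Fig. 3.2 for values of $g_e$ on either side of the predicted threshold $g_e \approx 0.133$.

```python
import numpy as np

a, b, c, d, I = 0.2, 2.0, -56.0, -16.0, -99.0
N, dt = 4, 1e-3
L = np.array([[2, -1, 0, -1], [-1, 2, -1, 0],
              [0, -1, 2, -1], [-1, 0, -1, 2]], dtype=float)

def sync_error(ge, T=500.0, seed=0):
    rng = np.random.default_rng(seed)
    x = -56.25 + rng.normal(0.0, 1.0, N)      # initial conditions as in the text
    y = -112.5 + rng.normal(0.0, 1.0, N)
    err, n_steps = 0.0, int(T / dt)
    for k in range(n_steps):
        dx = 0.04 * x**2 + 5 * x + 140 - y + I - ge * (L @ x)   # Eq. (3.8)
        dy = a * (b * x - y)
        x, y = x + dt * dx, y + dt * dy
        fired = x >= 30.0                     # after-spike reset, Eq. (3.2)
        x[fired] = c
        y[fired] += d
        if k >= n_steps // 2:                 # average over the second half
            err += np.sum(np.abs(x - x.mean())) / (n_steps - n_steps // 2)
    return err

for ge in (0.10, 0.15):   # on either side of the predicted threshold
    print(ge, sync_error(ge))
```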
3.2.2 Chemical Coupling

We now study the synchronization of an Izhikevich network with chemical synaptic coupling between nodes. The chemical synapses that node $i$ receives from every other node $j$ are modeled by the sigmoidal function $B_i$ [94],

\[
B_i = g_c\,(x^i - v_s) \sum_{j=1}^{N} A_{ij}\, \zeta(x^j) , \tag{3.12}
\]

where $v_s$ is the reversal potential, $\zeta(x) = [1 + \exp(-\epsilon(x - \theta))]^{-1}$, $\epsilon$ defines the slope of the sigmoidal function, and $\theta$ is the synaptic firing threshold. Throughout this study we set $v_s = 0$, $\epsilon = 7$ and $\theta = 0$. This set of parameters allows the chemical synapses to be both excitatory and inhibitory, since $v_s$ lies inside the range of oscillation of the action potential $x^i$. This is the case for some classes of neurons, as discussed in [95, 96]. Note that this coupling term cannot be written in terms of the Laplacian matrix; instead, we write it in terms of the adjacency matrix. To obtain an equation analogous to (3.8), we introduce a few objects, defined in Appendix A.1 together with the complete derivation of the equation for the dynamics of the network with chemical coupling, which is given by

\[
\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}) - g_c\, \mathbf{T}(\mathbf{x})\left[(A \otimes \mathbf{C})(\mathbf{x})\right] . \tag{3.13}
\]

The synchronized solution, $\mathbf{x}^1 = \mathbf{x}^2 = \dots = \mathbf{x}^N$, derived in Appendix A.1 from Eqs. (A.15) to (A.19), takes the form

\[
\dot{\mathbf{x}}_s = \mathbf{F}(\mathbf{x}_s) - g_c k_n\, \mathbf{T}(\mathbf{x}_s)\,\mathbf{C}(\mathbf{x}_s) , \tag{3.14}
\]

and it exists only if the number of links $k_n$ is the same for all nodes [97]. Following the Pecora-Carroll analysis, we obtain the master stability function for this case; after diagonalizing $A$ we have

\[
\dot{\boldsymbol{\eta}} = \left\{ I_N \otimes \left( \mathrm{DF}\big|_{\mathbf{x}_s} - g_c k_n\, \mathrm{DH}\big|_{\mathbf{x}_s}\mathbf{K}(\mathbf{x}_s) \right) - g_c\, \Gamma^A \otimes \mathbf{T}(\mathbf{x}_s)\,\mathrm{DC}\big|_{\mathbf{x}_s} \right\} \boldsymbol{\eta} , \tag{3.15}
\]

where $\Gamma^A$ is a diagonal matrix whose entries are the eigenvalues of $A$. The complete derivation of this equation, which is analogous to Eq. (3.10), is given in Appendix A.1; it was first derived by Checco et al. [97].

As an example, we consider the network of $N = 4$ nodes depicted in Fig. 3.2, with adjacency matrix

\[
A = \begin{bmatrix}
0 & 1 & 0 & 1 \\
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
1 & 0 & 1 & 0
\end{bmatrix} . \tag{3.16}
\]

The adjacency matrix (3.16) has an eigenvalue $\gamma_1 = 2$ associated with the synchronized solution, so we need to calculate the MLE for the remaining eigenvalues $\{0, 0, -2\}$. Figure 3.4 shows the MLE of Eq. (3.15), which is positive over the entire range studied. Our calculation of the synchronization error, shown in Fig. 3.5, agrees with the MLE: the network does not synchronize with chemical coupling.

Figure 3.4: Maximum Lyapunov exponent transverse to the synchronization manifold for $N = 4$ Izhikevich neurons with chemical coupling.

Figure 3.5: Synchronization error for a network of $N = 4$ Izhikevich neurons (Fig. 3.2) with chemical coupling. As expected from the MSF analysis, the system does not synchronize. The solid line represents the average of 100 realizations, the shaded area represents the first and third quartiles, and the grey dashes represent the maximum and minimum values.
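For concreteness, the chemical input of Eq. (3.12) can be evaluated as in the sketch below; the value of $g_c$ and the example membrane potentials are my own illustrative assumptions.

```python
import numpy as np
from scipy.special import expit

gc, vs, eps, theta = 0.1, 0.0, 7.0, 0.0       # gc is an illustrative value
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)

def zeta(x):
    """Sigmoidal activation of Eq. (3.12); expit is a stable logistic."""
    return expit(eps * (x - theta))

def B(x):
    """Chemical input B_i of Eq. (3.12), vectorized over the nodes."""
    return gc * (x - vs) * (A @ zeta(x))

print(B(np.array([-56.0, -55.0, 29.0, -40.0])))   # example potentials
```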
3.2.3 Chemical and Electrical Coupling

Finally, we study the effect of both electrical and chemical coupling on synchronization. In this case, the evolution of the network is given by

\[
\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}) - g_e (L \otimes G)\,\mathbf{x} - g_c\, \mathbf{T}(\mathbf{x})(A \otimes \mathbf{C})\,\mathbf{x} . \tag{3.17}
\]

For the sake of simplicity, we consider that both interactions depend on the same diagonalizable adjacency matrix, i.e. $L = D - A$, and that every node has the same number of neighbors $k_n$. Combining the results of the previous sections, we write the variational equation for perturbations to the synchronized state as

\[
\delta\dot{\mathbf{x}} = \left\{ I_N \otimes \left( \mathrm{DF}\big|_{\mathbf{x}_s} - g_c k_n\, \mathrm{DH}\big|_{\mathbf{x}_s}\mathbf{K}(\mathbf{x}_s) \right) - g_e\, L \otimes \mathrm{DG}\big|_{\mathbf{x}_s} - g_c\, A \otimes \mathbf{T}(\mathbf{x}_s)\,\mathrm{DC}\big|_{\mathbf{x}_s} \right\} \delta\mathbf{x} . \tag{3.18}
\]

Since $L$ and $A$ commute, we can diagonalize both simultaneously, yielding

\[
\dot{\boldsymbol{\eta}} = \left\{ I_N \otimes \left( \mathrm{DF}\big|_{\mathbf{x}_s} - g_c k_n\, \mathrm{DH}\big|_{\mathbf{x}_s}\mathbf{K}(\mathbf{x}_s) \right) - g_e\, \Gamma^L \otimes \mathrm{DG}\big|_{\mathbf{x}_s} - g_c\, \Gamma^A \otimes \mathbf{T}(\mathbf{x}_s)\,\mathrm{DC}\big|_{\mathbf{x}_s} \right\} \boldsymbol{\eta} , \tag{3.19}
\]

where $\Gamma^L$ and $\Gamma^A$ are diagonal matrices with the eigenvalues of $L$ and $A$, respectively, as entries.

Once again, we take a network of $N = 4$ neurons, with electrical and chemical coupling strengths $g = g_e = g_c$, represented by the adjacency matrix (3.16). Thus, the eigenvalues associated with transverse modes are $\{\gamma^A\} = \{0, 0, -2\}$ from the adjacency matrix and $\{\gamma^L\} = \{2, 2, 4\}$ from the Laplacian matrix. We calculate the MLE of Eq. (3.19) as a function of $g$; as shown in Fig. 3.6, the global synchronization pattern becomes linearly stable for $g \approx 0.13$. To check this result we calculate the synchronization error over 200 simulations, shown in Fig. 3.7, and find that it does not go to zero when the MLE becomes negative. We notice that the synchronization error only vanishes around $g \approx 0.16$; for values below that, the system can either synchronize or not.

To better understand these results, we calculate the synchronization error for different initial conditions at $g = 0.155$. While the $y$-variables are set to $-101$, we partition the network into two distinct clusters according to their $x$-variables. The range of $x$-variables spans from 1 to $-1$, resulting in initial conditions that are either orthogonal to the synchronized manifold, $\pm[1, -1, 1, -1]$, or parallel to it, $\pm[1, 1, 1, 1]$, within the $y = -101$ plane. The value $y = -101$ corresponds to an approximate value of $y$ in the range of $x$ we selected.

Figure 3.6: Maximum Lyapunov exponent transverse to the synchronization manifold for $N = 4$ Izhikevich neurons with electrical and chemical coupling.

Figure 3.7: Synchronization error for $N = 4$ neurons with electrical and chemical coupling. The solid line represents the average of 200 realizations, the shaded area represents the first and third quartiles, and the grey dashes represent the maximum and minimum values.

As we see in Fig. 3.8(a) for $g = 0.155$, depending on the initial conditions the system can end up in different solutions, a characteristic of multistable systems [98]. For simplicity, we differentiate non-synchronized from synchronized solutions by setting a threshold at $\mathrm{Err} = 0.05$; they are represented by white and black dots, respectively. Moreover, the different solutions are intertwined in a complex way. This type of basin of attraction is often called a riddled basin, and it can be the cause of the divergence between the MLE and the synchronization error results [87]. Figure 3.8(b) shows an inset of Fig. 3.8(a), where we see that synchronized solutions are intertwined with non-synchronized ones in a similar way as in Fig. 3.8(a), even though the domain is reduced.
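The riddled structure of Fig. 3.8 is obtained by scanning a grid of initial conditions and classifying each run by its final synchronization error. A schematic version is sketched below; `simulate_error` stands for a hypothetical integrator of the coupled network, in the spirit of the earlier electrical-coupling sketch extended with the chemical term of Eq. (3.12).

```python
import numpy as np

def basin_scan(simulate_error, g=0.155, n_grid=64):
    """Classify a grid of initial conditions as synchronized or not.
    `simulate_error(g, x0, y0)` is a hypothetical integrator returning
    the asymptotic synchronization error of the 4-node network."""
    xs = np.linspace(-1.0, 1.0, n_grid)
    basin = np.zeros((n_grid, n_grid), dtype=bool)
    for i, x1 in enumerate(xs):
        for j, x2 in enumerate(xs):
            x0 = np.array([x1, x2, x1, x2])   # two clusters in the x-variables
            y0 = np.full(4, -101.0)           # y fixed, as in the text
            basin[i, j] = simulate_error(g, x0, y0) <= 0.05
    return basin                              # True = synchronized (black dot)
```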
Besides that, it is usually assumed that the presence of riddled basins of attraction is related to the MLE approaching zero from below, as discussed in [63, 99]. A similar result, in which the system fails to exhibit synchronization when the MLE approaches zero from above, was reported in [87], but the presence of a riddled basin was not investigated there.

Figure 3.8: Riddled basin of attraction in the $(x^1, x^2)$ plane for $g = 0.155$, calculated with $10.24 \times 10^4$ initial conditions in (a) $x^1, x^2 \in [-1, 1]$ and (b) $x^1, x^2 \in [-0.01, 0.01]$. Solutions that reach a final state with a synchronization error $\mathrm{Err} > 0.05$ are marked as white dots; otherwise, they are marked as black.

To better understand the basin of attraction, we calculate the uncertainty exponent $\alpha$, which, as defined in [100, 101], measures how small perturbations to initial conditions affect the final state of the system. To do so, we pick $M = 1000$ pairs of initial conditions with $g = 0.155$, iterating each to its final state. Each pair is separated by a distance $\varepsilon$, i.e. $|\mathbf{x} - \mathbf{x}'| = \varepsilon$. We then compare the final states of each pair to verify whether the two initial conditions end up in different states and are therefore uncertain. The fraction of the phase space that is uncertain obeys

\[
f(\varepsilon) = \varepsilon^{\alpha} , \tag{3.20}
\]

with $\alpha < 1$. We recall that the uncertainty exponent is related to the dimension $D$ of the phase space by $\alpha = D - \delta$, where $\delta$ is the (capacity) dimension of the basin boundary [101]. We found an uncertainty exponent $\alpha \approx 0.005$. As a result, even for extraordinarily small perturbations to the synchronized state, a fraction of them can grow and desynchronize the system. Moreover, using $\alpha = D - \delta$, we find that the dimension of the basin boundary, $\delta \approx 7.995$, is close to the dimension of the state space. In conclusion, the obtained uncertainty exponent of $\alpha \approx 0.005$ suggests the presence of a riddled basin of attraction [98, 102].

Figure 3.9: The fraction of pairs of initial conditions that converge to different basins of attraction, $f$, as a function of the distance $\varepsilon$ between initial conditions, Eq. (3.20). For $g = 0.155$, the slope of the line that best fits the points yields an uncertainty exponent $\alpha = 0.005$.
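The estimate of $\alpha$ amounts to measuring the uncertain fraction $f(\varepsilon)$ and fitting a line in log-log scale, as sketched below; `is_synchronized` is a hypothetical helper that integrates the network from a given initial condition and classifies its final state.

```python
import numpy as np

def uncertain_fraction(eps, is_synchronized, n_pairs=1000, dim=8, seed=0):
    """Fraction of pairs at distance eps that reach different final states.
    `is_synchronized(x)` is a hypothetical classifier of the final state."""
    rng = np.random.default_rng(seed)
    uncertain = 0
    for _ in range(n_pairs):
        x = rng.uniform(-1.0, 1.0, dim)
        dx = rng.normal(size=dim)
        x2 = x + eps * dx / np.linalg.norm(dx)   # partner at distance eps
        uncertain += is_synchronized(x) != is_synchronized(x2)
    return uncertain / n_pairs

def fit_alpha(eps_values, f_values):
    """Slope of log f versus log eps gives alpha, cf. Eq. (3.20), Fig. 3.9."""
    alpha, _ = np.polyfit(np.log(eps_values), np.log(f_values), 1)
    return alpha
```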
3.3 Izhikevich Model: Partial Synchronization

3.3.1 Electrical Coupling

As mentioned in the first section, the electrical coupling can be written in terms of the Laplacian matrix. In this case, the variational equation governing the linear stability around the cluster synchronized state (CSS) is given by [72]

\[
\delta\dot{\mathbf{x}} = \left[ \sum_{m=1}^{M} \left( E^{(m)} \otimes \mathrm{DF}\big|_{\mathbf{s}_m} - g_e\, L E^{(m)} \otimes \mathrm{DG}\big|_{\mathbf{s}_m} \right) \right] \delta\mathbf{x} , \tag{3.21}
\]

where $M$ is the number of clusters and $E^{(m)}$ is an $N \times N$ diagonal indicator matrix for each cluster, such that $E^{(m)}_{ii} = 1$ if $i$ belongs to cluster $m$, and zero otherwise. If we consider a CSS with $M$ clusters, the dynamical evolution of such a state is given by [81, 103]

\[
\dot{\mathbf{x}} = (Z \otimes I_q)\,\dot{\mathbf{s}} , \quad \text{where} \tag{3.22}
\]
\[
\dot{\mathbf{s}} = \mathbf{F}(\mathbf{s}) - g_e (L^\pi \otimes I_q)\,\mathbf{G}(\mathbf{s}) , \tag{3.23}
\]

and $Z$ is the $N \times M$ indicator matrix that encodes the cluster configuration: $Z_{ij} = 1$ if node $i$ is part of cluster $C_j$ and $Z_{ij} = 0$ otherwise. $I_q$ is a $q \times q$ identity matrix, where $q$ is the dimension of the isolated dynamical system ($q = 2$ in our case). The state of each cluster is encoded in the $M \times q$-dimensional vector $\mathbf{s}$. If the quotient network Laplacian $L^\pi$ satisfies the condition [82]

\[
L Z = Z L^\pi , \tag{3.24}
\]

where $L^\pi = (Z^T Z)^{-1} Z^T L Z = Z^+ L Z$ and $Z^+$ is the left Moore-Penrose pseudoinverse of $Z$ [104, 105], then the partition encoded by $Z$ is an External Equitable Partition (EEP) [82]. In this case, the network is divided into $M$ clusters in such a way that the number of connections from a node in cluster $C_i$ to nodes in cluster $C_j$ depends only on $i, j$ with $i \neq j$. The quotient network dynamics is a coarse-grained version of the original network, where each cluster becomes a node and the weights between these nodes are the out-degrees between clusters in the original network [81, 83].

Again, we start our example with a network of Izhikevich neurons coupled through electrical synapses. Consider a network with the following Laplacian matrix:

\[
L = \begin{bmatrix}
2 & -1 & 0 & -1 & 0 & 0 & 0 & 0 \\
-1 & 2 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
-1 & -1 & -1 & 5 & -1 & -1 & 0 & 0 \\
0 & 0 & 0 & -1 & 3 & 0 & -1 & -1 \\
0 & 0 & 0 & -1 & 0 & 3 & -1 & -1 \\
0 & 0 & 0 & 0 & -1 & -1 & 3 & -1 \\
0 & 0 & 0 & 0 & -1 & -1 & -1 & 3
\end{bmatrix} \tag{3.25}
\]

and a CSS with indicator matrix $Z$ encoding $M = 5$ clusters, three with two nodes and two with one:

\[
Z = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix} . \tag{3.26}
\]

One can verify that this choice of $L$ and $Z$ satisfies (3.24), and therefore this cluster configuration is an EEP. A representation of this network is given in Fig. 3.10, where nodes of the same color belong to the same cluster. Putting this together, Eq. (3.22) reads

\[
[\dot{\mathbf{x}}]^T = [\dot{\mathbf{s}}_1, \dot{\mathbf{s}}_1, \dot{\mathbf{s}}_3, \dot{\mathbf{s}}_4, \dot{\mathbf{s}}_5, \dot{\mathbf{s}}_5, \dot{\mathbf{s}}_7, \dot{\mathbf{s}}_7]^T , \tag{3.27}
\]

where each cluster is labeled by its first node.

Figure 3.10: Network studied in this section, where the clusters are indicated by colors.

After block-diagonalizing the variational equation (3.21), we investigate the linear stability of the CSS with the following equation:

\[
\dot{\boldsymbol{\eta}} = \left[ \sum_m Q^{(m)} \otimes \mathrm{DF}\big|_{\mathbf{s}_m} - g_e\, \Gamma^L Q^{(m)} \otimes \mathrm{DG}\big|_{\mathbf{s}_m} \right] \boldsymbol{\eta} , \tag{3.28}
\]

where $Q^{(m)} = V^{-1}E^{(m)}V$ is a block-diagonal matrix and $\Gamma^L = V^{-1}LV$ is a diagonal matrix filled with the eigenvalues of $L$. The orthogonal matrix $V$ that block-diagonalizes both the $E^{(m)}$ and $L$ can be found using the group theory [103], External Equitable Partition [81], or Simultaneous Block Diagonalization [84] approaches, resulting in the decoupling of the modes within the cluster synchronized manifold from those transverse to it. To check whether all transverse modes are damped, we calculate the maximum Lyapunov exponent of Eq. (3.28), which is shown in Fig. 3.11 as a function of $g_e$.

Figure 3.11: Maximum Lyapunov exponent transverse to the synchronization manifold for $N = 8$ Izhikevich neurons with electrical coupling under partial synchronization.

Fig. 3.12 shows the synchronization error of the cluster synchronized state as a function of $g_e$. The error is computed as $\langle |x^i(t) - \bar{x}^i_m(t)| \rangle$, where the average is taken both over time and over the neurons $i$ in each cluster $m$, and $\bar{x}^i_m$ is the average state of the neurons in the cluster to which node $i$ belongs. Figs. 3.11 and 3.12 show excellent agreement between the stability analysis and the synchronization error: the MLE associated with the transverse modes becomes negative around $g_e \approx 0.073$, and the synchronization error vanishes around the same value.

Figure 3.12: Synchronization error for the Izhikevich model with electrical coupling, partial synchronization.
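For reference, the cluster synchronization error used in Fig. 3.12 can be computed along the lines of the sketch below (my own formulation; `traj` and `labels` are assumed inputs).

```python
import numpy as np

def cluster_sync_error(traj, labels):
    """Cluster synchronization error: mean deviation of each node from the
    average of its own cluster. `traj` is an (n_times, N) array of
    x-variables; `labels` assigns nodes to clusters, e.g. [0,0,1,2,3,3,4,4]
    for the partition of Eq. (3.26)."""
    labels = np.asarray(labels)
    clusters = np.unique(labels)
    err = 0.0
    for m in clusters:
        members = traj[:, labels == m]
        err += np.mean(np.abs(members - members.mean(axis=1, keepdims=True)))
    return err / clusters.size
```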
3.3.2 Chemical Coupling

We now extend the linear stability study of cluster synchronized states to networks with chemical coupling, which is mediated by the adjacency matrix of the network. Consider a network with $N$ nodes and adjacency matrix $A$. Given a CSS with $M$ clusters encoded by an indicator matrix $Z$, the coarse-grained dynamics can be written as

\[
\dot{\mathbf{x}} = (Z \otimes I_q)\,\dot{\mathbf{s}} , \quad \text{where} \tag{3.29}
\]
\[
\dot{\mathbf{s}} = \mathbf{F}(\mathbf{s}) - g_c\, \mathbf{T}(\mathbf{s})(A^\pi \otimes \mathbf{C})(\mathbf{s}) . \tag{3.30}
\]

The adjacency matrix of the quotient network is given by $A^\pi = Z^+ A Z$; its diagonal entries correspond to self-loops in the quotient network whenever there are connections between nodes in the same cluster, and the partition encoded by $Z$ is an External Partition (EP). For EPs, the number of connections from a node in cluster $C_i$ to nodes in cluster $C_j$ depends only on $i, j$ [81]. The variational equation of Eq. (3.29) is

\[
\delta\dot{\mathbf{x}} = \left\{ \sum_{m=1}^{M} E^{(m)} \otimes \left( \mathrm{DF}\big|_{\mathbf{s}_m} - g_c\, G R^{(m)} \right) - g_c \left[ \sum_{m=1}^{M} E^{(m)}A \otimes \mathbf{T}(\mathbf{s}_m) \right] \left[ \sum_{m=1}^{M} E^{(m)} \otimes \mathrm{DK}\big|_{\mathbf{s}_m} \right] \right\} \delta\mathbf{x} , \tag{3.31}
\]

and after block-diagonalizing Eq. (3.31), we obtain

\[
\dot{\boldsymbol{\eta}} = \left\{ \sum_{m=1}^{M} Q^{(m)} \otimes \left[ \mathrm{DF}\big|_{\mathbf{s}_m} - g_c\, G R^{(m)} \right] - g_c \left[ \sum_{m=1}^{M} Q^{(m)}\Gamma^A \otimes \mathbf{T}(\mathbf{s}_m) \right] \left[ \sum_{m=1}^{M} Q^{(m)} \otimes \mathrm{DK}\big|_{\mathbf{s}_m} \right] \right\} \boldsymbol{\eta} , \tag{3.32}
\]

where we define

\[
G = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}
\quad \text{and} \quad
R^{(m)} = \sum_{n=1}^{M} A^\pi_{mn}\, \zeta(\mathbf{x}_n) , \tag{3.33}
\]

with $\zeta$ the nonlinear operator defined in Appendix A.1.

With these equations in hand, we consider the network of the previous subsection, depicted in Fig. 3.10, with the same indicator matrix $Z$ (Eq. (3.26)) and chemical coupling between the neurons. In this case, the adjacency matrix is

\[
A = \begin{bmatrix}
0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0
\end{bmatrix} . \tag{3.34}
\]

Unlike the case with electrical coupling, here the CSS is not stable, as seen in Fig. 3.13, where the maximum Lyapunov exponent is positive over the range of $g_c$ considered. This analysis is supported by the synchronization error for this case (Fig. 3.14), which remains greater than zero over the range of $g_c$ studied.

Figure 3.13: Maximum Lyapunov exponent transverse to the synchronization manifold for $N = 8$ Izhikevich neurons with chemical coupling, partial synchronization.

Figure 3.14: Synchronization error for the Izhikevich model with chemical coupling, partial synchronization ($N = 8$).

3.3.3 Electrical and Chemical Coupling

In the case where the network admits both electrical and chemical coupling, the dynamics of the CSS is governed by

\[
\dot{\mathbf{x}} = (Z \otimes I_q)\,\dot{\mathbf{s}} , \quad \text{where} \tag{3.35}
\]
\[
\dot{\mathbf{s}} = \mathbf{F}(\mathbf{s}) - g_c\, \mathbf{T}(\mathbf{s})(A^\pi \otimes \mathbf{C})(\mathbf{s}) - g_e (L^\pi \otimes I_q)\,\mathbf{G}(\mathbf{s}) , \tag{3.36}
\]

where $Z$ is the indicator matrix of the CSS, $A^\pi$ and $L^\pi$ are the quotient adjacency and Laplacian matrices, and for the sake of simplicity we take $g_c = g_e = g$. Putting together the results of the previous subsections, the master stability function of a given CSS can be written as

\[
\dot{\boldsymbol{\eta}} = \left\{ \sum_{m=1}^{M} Q^{(m)} \otimes \left[ \mathrm{DF}\big|_{\mathbf{s}_m} - g_c\, G R^{(m)} \right] - g_c \left[ \sum_{m=1}^{M} Q^{(m)}\Gamma^A \otimes \mathbf{T}(\mathbf{s}_m) \right] \left[ \sum_{m=1}^{M} Q^{(m)} \otimes \mathrm{DK}\big|_{\mathbf{s}_m} \right] - g_e \sum_{m=1}^{M} \Gamma^L Q^{(m)} \otimes \mathrm{DG}\big|_{\mathbf{s}_m} \right\} \boldsymbol{\eta} . \tag{3.37}
\]

We proceed with the analysis of the network in Fig. 3.10, with indicator matrix (3.26). The MLE of Eq. (3.37) is shown in Fig. 3.15, where we notice that the introduction of electrical coupling into the network seems to induce the CSS, since the MLE becomes negative for $g > 0.102$.

Figure 3.15: Maximum Lyapunov exponent transverse to the synchronization manifold for $N = 8$ Izhikevich neurons with chemical and electrical coupling, partial synchronization.
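The quotient adjacency matrix introduced above is straightforward to compute. The sketch below (an illustration, not thesis code) builds $A^\pi = Z^+ A Z$ for the network of Eq. (3.34) and the partition of Eq. (3.26), and checks the EP condition $AZ = ZA^\pi$.

```python
import numpy as np

A = np.array([
    [0, 1, 0, 1, 0, 0, 0, 0], [1, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0], [1, 1, 1, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0, 0, 1, 1], [0, 0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1, 0, 1], [0, 0, 0, 0, 1, 1, 1, 0]], dtype=float)

Z = np.zeros((8, 5))                     # partition of Eq. (3.26)
for node, cluster in enumerate([0, 0, 1, 2, 3, 3, 4, 4]):
    Z[node, cluster] = 1.0

A_pi = np.linalg.pinv(Z) @ A @ Z         # quotient adjacency matrix
print(A_pi)                              # diagonal = within-cluster links
print("EP condition:", np.allclose(A @ Z, Z @ A_pi))
```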
The synchronization error is shown in Fig. 3.16; the dotted line represents the median over 100 simulations, the shaded area is the region between the first and third quartiles, and the grey lines represent the upper and lower limits of the errors. Although the lower bound of the synchronization error goes to zero around $g \approx 0.125$, we see that on average the system does not reach the synchronized solution. While the initial conditions are similar to the ones used in Fig. 3.12, the introduction of chemical coupling results in a distinct outcome concerning the agreement between the MSF and the synchronization error. This sensitivity to perturbations is characteristic of multistable systems [98], and, as discussed in [87], a riddled basin of attraction can be the cause of such behavior, which in turn may be a consequence of the discontinuity of the differential equation. A thorough understanding of this phenomenon requires intense research that is outside the scope of this work.

Figure 3.16: Synchronization error as a function of $g$ for the Izhikevich model with electrical and chemical coupling. The CSS considered is the one of Fig. 3.10. The average over 150 runs, with nearly identical initial conditions, is depicted with its bounds.

To verify this, we consider adding a disturbance in only one direction transverse to the synchronized solution, which means perturbing only one cluster. For example, if we take the solution

\[
\mathbf{x} =
\begin{bmatrix}
\mathbf{s}_1 \\ \mathbf{s}_1 \\ \mathbf{s}_3 \\ \mathbf{s}_4 \\ \mathbf{s}_5 \\ \mathbf{s}_5 \\ \mathbf{s}_7 \\ \mathbf{s}_7
\end{bmatrix}
\quad \text{and add} \quad
\mathbf{p}_\perp = \alpha
\begin{bmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 1 \\ -1 \\ 0 \\ 0
\end{bmatrix}
\otimes
\begin{bmatrix} 1 \\ 1 \end{bmatrix} , \tag{3.38}
\]

where $\alpha$ is a constant used to control the magnitude of the perturbation $\mathbf{p}_\perp$, we are pulling the cluster formed by nodes 5 and 6 out of synchrony.

Figure 3.17(a) shows how the synchronization error is sensitive to initial conditions for $g = 0.155$. We used the same method as in Fig. 3.8. To calculate Err, we consider initial conditions slightly perturbed as in Eq. (3.38). The initial conditions for $\mathbf{x}^{(5,6)} = [x^{(5,6)}, y^{(5,6)}]^T$ vary from $[1, 1]$ to $[-1, -1]$, which is equivalent to $\alpha$ varying from 1 to $-1$ in Eq. (3.38). Along the diagonal $x^5 = x^6$ the system is always synchronized. In the off-diagonal region, we have both synchronized and non-synchronized solutions. As in Fig. 3.8, we see that the basin is riddled with these two kinds of solutions. The same calculation is depicted in Fig. 3.17(b), illustrating the riddled basin for $g = 0.195$. As anticipated from Fig. 3.16, the increased number of black dots signifies a higher proportion of points ending up in the synchronized basin.

Figure 3.17: Riddled basin of attraction in the $(x^5, x^6)$ plane. (a) is calculated with $10.24 \times 10^4$ initial conditions in $x^5, x^6 \in [-1, 1]$ for $g = 0.155$; (b) same as (a) for $g = 0.195$.

The solutions found in Fig. 3.17 are shown in Fig. 3.18. The solution in panel (a) yields a synchronization error greater than the one in panel (b). We notice that in both cases there are finite windows of time in which the system goes out of synchronization; however, the solution in panel (a) displays more and longer such windows. This type of intermittency is similar to the one discussed in [106], and it is found only in the perturbed cluster formed by nodes 5 and 6; the remaining clusters stay synchronized throughout the simulation.

Figure 3.18: Time series of the $x$-variable of nodes 5 (solid red) and 6 (dashed black) for two different initial conditions, both with $g = 0.155$. The synchronization error in (a) is larger than in (b).
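The perturbation of Eq. (3.38) is a simple Kronecker construction, e.g.:

```python
import numpy as np

def transverse_perturbation(alpha):
    """Perturbation p_perp of Eq. (3.38): only nodes 5 and 6 are displaced."""
    mask = np.array([0, 0, 0, 0, 1, -1, 0, 0], dtype=float)
    return alpha * np.kron(mask, np.ones(2))   # [x, y] components per node

x = np.zeros(16)                    # a synchronized state, schematically
x_perturbed = x + transverse_perturbation(0.1)
```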
To confirm the basin riddling, we compute the uncertainty exponent $\alpha$. Again, we take pairs of initial conditions $\mathbf{x}, \mathbf{x}'$ such that the distance between the two is $\varepsilon = |\mathbf{x} - \mathbf{x}'|$ and verify whether each pair is uncertain. For each distance $\varepsilon$, we simulate 1000 pairs of initial conditions and record the uncertain fraction $f(\varepsilon)$. The result is plotted in Fig. 3.19, where the fitted line indicates an uncertainty exponent $\alpha \approx 0.007$, and thus a basin boundary dimension $\delta \approx 15.993$.

Figure 3.19: The fraction of pairs of initial conditions that converge to different basins of attraction, $f$, as a function of the distance $\varepsilon$ between initial conditions, Eq. (3.20). For $g = 0.155$, the slope of the line that best fits the points yields an uncertainty exponent $\alpha = 0.007$.

3.4 Remarks

We have calculated the MSF for networks of Izhikevich neurons, considering global and cluster synchronized states with electrical and chemical coupling schemes. To do so, we combined the MSF formalism with the use of saltation matrices, which allow the calculation of Lyapunov exponents of systems with discontinuities, such as the Izhikevich model. For the networks studied, we found that the synchronization error agrees well with the MSF analysis when only one type of coupling is considered. Moreover, the addition of electrical coupling to a network with only chemical coupling induces synchronization for both global and cluster-synchronized states. However, the presence of both coupling schemes yielded a riddled basin of attraction. Because of this, even in the range where the MSF is negative and with initial conditions nearly identical to the CSS, the system can end up in a non-synchronized state. This mismatch between the results predicted by the MSF and the actual output of the network reinforces that a negative MLE is a necessary condition for a stable synchronized state, but not a sufficient one. Moreover, we highlight that the MSF relies on knowledge of the network structure, i.e., the adjacency matrix $A$, and such information is usually not available in real-world scenarios. Putting this together, the application of the MSF is not straightforward. Motivated by these challenges, especially the lack of information regarding network structure, in the next part of this thesis we shift our focus toward the problem of network inference. This shift allows us to attempt to bridge the gap between theoretical models, such as the MSF, and the complexities of applying synchronization analysis to real-world networks.

Part III

Network Inference

4 Kalman filters

Before diving into the problem of network inference per se, in this chapter we introduce the mathematical foundation of the Kalman filter, which will be a central method for dealing with the inference problem.

4.1 From Tracking to Network Inference

Since its introduction in 1960 by Rudolf E. Kálmán, the Kalman filter has been used in applications such as trajectory tracking in the Apollo space program, the stock market, and global positioning system (GPS) receivers. But what is a filter? And why would someone need to use a filter in the first place?
To answer these questions, let us first consider a ubiquitous characteristic of the sensors we use in our day-to-day lives: they are all noisy. Assume that you are weighing fruit at the grocery store with two scales, and that one scale reads 200 g while the other reads 220 g. If you assume the scales are equally accurate, you would estimate the weight to be 210 g. If, on the contrary, you knew that one scale was more accurate than the other, you would still combine both measurements, but you would want the final estimate to be closer to the more accurate one. This is the idea behind filtering: combining pieces of information about a system to gain more knowledge than we would get from the isolated pieces.

We want to use such filters because our sensors (scales, radars, multimeters, etc.) carry uncertainty with their measurements. Hence, to get a better idea of the system we are measuring, we want to combine the measurements with further pieces of information, which can be other measurements or prior information about the system, such as a mathematical model that describes it. This idea is closely related to Bayesian probability, which determines what is likely to be true given prior information. Put simply, the Kalman filter is a Bayesian filter: an algorithm that combines measurements coming from a noisy sensor with predictions from a mathematical model of the system under study. As pointed out by Zarchan, while the mathematical formulation of the Kalman filter can be challenging to grasp, it has been applied even without a profound understanding of its mathematical derivation [107]. In this chapter, we aim to stay in the middle: we do not intend to provide a complete review and derivation of the Kalman filter, but rather to describe its main features, with a focus on implementation. With this in mind, we will first discuss the g-h filter, a type of Bayesian filter, and introduce the main concepts of filtering through a tracking example. Next, we will move to the Kalman filter, which closely resembles the g-h filter. We will then comment on the challenges that arise when applying the Kalman filter to nonlinear systems, which will lead us to the Unscented Kalman filter, designed specifically to address such issues. Finally, we will comment on how we can use all of this knowledge to address the problem of network inference.

4.2 The g-h Filter

The g-h filter can be seen as a simplified version of more robust filters like the Kalman filter. Derived in Shumway and Stoffer [108], it shares many features with these filters; here we use it to introduce the key concepts of filtering. First, let us introduce some terminology. We refer to the dynamical system that we want to estimate simply as the system. The actual value of the system is said to be hidden, as we cannot observe it directly; we can only obtain a measurement of it. As commented before, measurements carry some uncertainty with them. To mathematically model the system, we use a process model. If our system of interest is a moving object, the process model can be as simple as distance = velocity × time. We use the process model to make predictions of the system's state from its current state. To our process model, and therefore to its predictions, we associate a process error, which reflects the errors in the model.
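As a minimal illustration of a process model (the constant-velocity example above, with hypothetical numbers), a prediction step can be written as:

```python
def predict(position, velocity, dt):
    """One prediction step of the constant-velocity process model."""
    return position + velocity * dt, velocity

pos, vel = predict(0.0, 1.5, dt=1.0)   # hypothetical target at 1.5 m/s
print(pos)                             # 1.5
```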
Assume that we are tracking a target which is moving radially away from or towards the radar responsible for tracking it. For simplicity, we consider this problem to be one-dimensional and the target's velocity to be constant. Hence, we can propose the following process model for our system:

\[
\mathbf{x}_{k+1} =
\begin{bmatrix}
\text{position}_{k+1} \\
\text{velocity}_{k+1}
\end{bmatrix}
=
\begin{bmatrix}
x_k + T\dot{x}_k \\
\dot{x}_k
\end{bmatrix} , \tag{4.1}
\]

where $x$ denotes the position of the target, $\dot{x}$ its velocity at time $k$, and $T$ is the time between measurements of the radar. So, given a measurement at time $k$, we can make a prediction for time $k+1$ using our process model, Eq. (4.1). Our goal is to combine our prediction with the measurement at time $k+1$, which will yield a better estimate of the system state. The measurement and the prediction are the pieces of information we have; as in the scale example, our estimate should lie between the two. Ideally, we would like to know whether our predictions are more or less precise than our measurements. Heuristically, we want our equation to be

\[
\text{estimate} = \text{prediction} + g\,(\text{measurement} - \text{prediction}) , \tag{4.2}
\]

where the difference between the measurement and the prediction is known as the residual, and $g$ serves as the scaling factor in our estimation process, commonly referred to as the gain. The point of including it in our equation is to regulate the influence of predictions and measurements on our final estimate. Setting $g = 0$ results in the exclusion of