Master's Dissertation IFT–D.006/24

KRYLOV COMPLEXITY IN A TOY MODEL FOR AN ADS BLACK HOLE AND ITS RADIATION

Eric Lobato Graef
Advisor: Prof. Horatiu Stefan Nastase
May 2024

Cataloguing data: Graef, Eric Lobato. Krylov complexity in a toy model for an AdS black hole and its radiation / Eric Lobato Graef. São Paulo, 2024. 59 pages, color illustrations. Dissertation (Master's), Universidade Estadual Paulista (Unesp), Instituto de Física Teórica (IFT), São Paulo. Advisor: Horatiu Stefan Nastase. 1. Quantum theory. 2. Black holes (Astronomy). 3. Holography.

Master's Dissertation presented to the Instituto de Física Teórica of the São Paulo campus of Universidade Estadual Paulista "Júlio de Mesquita Filho", in partial fulfillment of the requirements for the degree of Master in Physics, Specialty Theoretical Physics.

Examination Committee:
Prof. Dr. Horatiu Stefan Nastase (Advisor), Instituto de Física Teórica/UNESP
Prof. Dr. Jeff Murugan, University of Cape Town
Prof. Dr. Dario Rosa, Instituto de Física Teórica/UNESP
Result: Approved. São Paulo, May 28, 2024.

To my sister

Acknowledgements (Agradecimentos)

I thank my family for all their affection. My parents, for supporting me in so many different ways and for always being present even from afar. My sister, for being an example to me and for encouraging and helping me so much when I decided to study physics.

I thank my advisor, Horatiu Nastase, for the confidence he shows in me, for being very understanding, and for his patience in explaining so many things to me. I also thank him for including me in such an interesting research project. I thank Professor Gastão Krein for everything he taught me about quantum mechanics and for all the support he has given me since I joined the IFT.
To the friends I made at the IFT, who welcomed me so well: thank you for always being willing to help me with my studies but, above all, thank you for the friendship and the great company both inside and outside the institute. To my long-time friends, thank you for all the support and companionship, and for staying close even after long periods apart.

Acknowledgements

I'd like to thank our collaborators, Jeff Murugan, Jaco van Zyl and Cameron Beetar, for their confidence, for being so friendly and for all of our discussions.

"Matter and energy had ended and with it space and time. (...) All other questions had been answered, and until this last question was answered also, AC might not release his consciousness. All collected data had come to a final end. Nothing was left to be collected. But all collected data had yet to be completely correlated and put together in all possible relationships. A timeless interval was spent in doing that. And it came to pass that AC learned how to reverse the direction of entropy. But there was now no man to whom AC might give the answer of the last question. No matter. The answer – by demonstration – would take care of that, too. For another timeless interval, AC thought how best to do this. Carefully, AC organized the program. The consciousness of AC encompassed all of what had once been a Universe and brooded over what was now Chaos. Step by step, it must be done. And AC said, 'LET THERE BE LIGHT'! And there was light –"

Isaac Asimov, The Last Question

Resumo

In this dissertation, we briefly review essential aspects of quantum computational complexity and of Krylov complexity, and we introduce a simplified quantum model for an AdS black hole interacting with its Hawking radiation. With the goal of studying Krylov complexity in this system, we compute Dyson-Schwinger equations for the model in imaginary time and lay the groundwork for the perturbative computation of a two-point function from which the Lanczos coefficients and the Krylov complexity can be extracted.

Keywords: Quantum Computational Complexity; Krylov Complexity; Holography; Black Holes; Dyson-Schwinger Equations.
Knowledge areas: Quantum Mechanics; Quantum Information; Holography.

Abstract

In this dissertation, we briefly review essential aspects of quantum computational complexity and Krylov complexity and introduce a quantum mechanical toy model for an AdS black hole interacting with its Hawking radiation. With the goal of studying Krylov complexity in this system, we calculate Dyson-Schwinger equations for the toy model in imaginary time and lay the groundwork for the perturbative computation of a two-point function from which the Lanczos coefficients and Krylov complexity can be extracted.

Keywords: Quantum Computational Complexity; Krylov Complexity; Holography; Black Holes; Dyson-Schwinger Equations.
Knowledge Areas: Quantum Mechanics; Quantum Information; Holography.

Contents

1 Introduction
2 Quantum Computational Complexity
  2.1 Basic Concepts
  2.2 Circuit Complexity
  2.3 Geometric Complexity
3 Complexity in Time Evolution
  3.1 Thermalization in Quantum Mechanics
  3.2 Scrambling and Operator Growth
  3.3 Complexity and Thermalization in k-Local Fast Scramblers
  3.4 Chaos and Complexity
4 Krylov Complexity
  4.1 Spread Complexity
  4.2 Krylov Complexity in the Space of Operators
  4.3 Finite Temperature
  4.4 The Moment Method
  4.5 A Bound on Operator Growth
5 The Toy Model
6 Dyson-Schwinger Equations for the Toy Model
  6.1 From Action to Complexity
  6.2 Algebraic Derivation of a Dyson-Schwinger Equation
  6.3 Diagrammatics
7 Conclusion and Perspectives
Appendices
  A Conventions
  B Shannon Entropy
  C Extracting a Two-Point Correlator From Its Time-Ordered Counterpart
  D Functional Derivative of the Euclidean Action With Respect to X
    D.1 Functional Derivative of the Free Part
    D.2 Functional Derivative of the Interacting Part
Bibliography

Chapter 1
Introduction

In recent years, a great deal of research has been done in an area that brings together quantum information and holography, offering the prospect of a deeper understanding of spacetime through the study of quantum phenomena under the lens of quantum information theory.

The advent of holographic dualities, starting with Maldacena's 1998 paper [1], was a great advancement in theoretical physics, allowing for the study of correspondences between mathematical objects in quantum gauge theories in d dimensions and their dual counterparts, gravitational theories in d + 1 dimensions. Not long after that, studies began to establish the relevance of quantum information in this context, more specifically of entanglement entropy.
Ideas arose about an intimate connection between entanglement and spacetime geometry [2, 3, 4, 5, 6, 7], even evoking the possibility of viewing classical spacetime as emergent from entanglement between quantum degrees of freedom (DOF's). One can mention, for instance, the Ryu-Takayanagi formula [3], found in 2006, which relates quantum entanglement entropy to the area of minimal surfaces on the gravitational side of holographic dualities, an area law analogous to the Hawking-Bekenstein black hole entropy [8]. Another notable result is the "ER = EPR" conjecture [6], which draws an equivalence between Einstein-Rosen bridges connecting pairs of black holes and EPR-like entanglement between the internal DOF's of the black holes.

Eventually, however, it became apparent that another concept in quantum information was a more adequate candidate for this type of correspondence, namely, quantum computational complexity. This idea was initially proposed by Susskind and generated a series of studies attempting to better understand the details of this relation [9, 10, 11, 12, 13, 14, 15, 16, 17]. This was a very important development, which also promoted the study of complexity in many-body quantum mechanics, allowing for a better understanding of chaos and thermalization in terms of it. Complexity was previously regarded as a notion belonging to computer science, but it has become very relevant in theoretical physics, not only in what concerns holography, black holes and quantum gravity, but for many-body dynamics as well.

From the beginning, black holes have been central to the study of quantum information in holography. Evidence suggests that they are the most chaotic systems in nature (or, equivalently, the fastest scramblers) [18, 13, 19]. This property is believed to be associated with quantum interactions between their internal DOF's, manifesting in their classical gravitational description via the holographic spacetime-complexity dualities.
Such a paradigm includes the idea of conservation of information (or unitary evolution): all the information that enters a black hole is preserved and scrambled among its DOF's, but it is also shared with the Hawking radiation the black hole emits.

In Anti-de Sitter (AdS) space with reflective boundary conditions, black holes are known to reach thermal equilibrium with their Hawking radiation [20]. Although the behavior of information in the internal DOF's of AdS black holes has been studied previously, the regime of thermal equilibrium with the radiation is largely unexplored. The general objective of this project is to make progress in that direction. Therefore, the initial goals we set out to achieve are to find a quantum model which can adequately represent the system and then to analyze the growth of Krylov complexity in it. The way we approach this problem is heavily inspired by the work presented in [21], where Krylov complexity growth was analyzed in the IP matrix model.

In [22], Maldacena reviews the simplest quantum model giving rise to a holographic dual spacetime governed by Einstein's equations. This emergent gravitational description contains a black hole in an AdS "box universe". The model we are studying is a further simplification of that one, but with an additional modification introduced with the intention of including Hawking radiation in the quantum picture.

This text is divided into two main parts. Chapters 2 to 4 provide a brief review focused on computational complexity and related concepts, while the remaining chapters concern our research project. Appendix A displays some of the conventions employed throughout the text, so that they can be readily consulted whenever needed.

Chapter 2
Quantum Computational Complexity

The notion of computational complexity was first defined in computer science as a measure of the minimum number of operations necessary to solve a computational problem, or of the number of operations executed by an algorithm [23].
In very general terms, it is a measure of information processing. When adapted to the formalism of quantum mechanics, the most direct application of this concept is in the study of algorithms in quantum computation. However, it went on to become relevant for research in more fundamental areas of theoretical physics, having been adopted by researchers working on quantum gravity and holography, as well as by those concerned with chaos and thermalization in many-body quantum mechanics.

The general idea of complexity can take many forms depending on the context. Although it can be related to them, Krylov complexity (chapter 4) is a different notion from the other two kinds presented in chapter 2: circuit (or discrete) complexity and geometric (or continuous, or Nielsen's) complexity. The last two are the ones typically employed in studies in quantum computation, and the latter is a generalization of the former, so the general definitions given in section 2.1 are valid for both cases.

General information about quantum computation can be found in Nielsen's book [23]. A good review of circuit complexity, geometric complexity and applications involving holography is given in [24]. Two instructive studies of the geometric complexity of systems with one and two qubits are given in [25, 26].

2.1 Basic Concepts

The idea of computations, or of logic operations, is translated into quantum mechanics in the form of the action of unitary operators on quantum states. A given initial state |ψ0⟩ contains information which is processed by the action of a unitary Û representing some algorithm, and the result of this operation is the final state |ψt⟩:

|ψt⟩ = Û |ψ0⟩ . (2.1)

Usually, when talking about complexity, the initial state is called the reference state and the final state is called the target state. Complexity can be defined for operators or for states.
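As a minimal illustration of eq. 2.1 (our own example, not part of the dissertation), the sketch below applies a single Hadamard gate, a standard 2x2 unitary, to the reference state |0⟩ and prints the resulting target state:

```python
# A minimal sketch of eq. (2.1): information processing as a unitary acting
# on a state. Here the "algorithm" U is a single Hadamard gate taking the
# reference state |0> to the target state (|0>+|1>)/sqrt(2). The choice of
# gate and states is ours, purely for illustration.

H = [[2**-0.5, 2**-0.5],
     [2**-0.5, -2**-0.5]]   # Hadamard gate, a 2x2 unitary matrix

psi0 = [1.0, 0.0]           # reference state |0>

# matrix-vector product: |psi_t> = U |psi_0>
psit = [sum(H[i][j] * psi0[j] for j in range(2)) for i in range(2)]
print(psit)                 # approximately [0.707, 0.707]
```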
The complexity of an operator is conceptually analogous to the complexity of an algorithm in classical computation, and the complexity of a state (meaning the complexity of a target state relative to the chosen reference state) is analogous to the complexity of a computational problem. State complexity is a more involved concept than operator complexity. It is defined as the complexity of the least complex operator that can generate the state from the reference state. So not only is it a relative notion (relative to the chosen reference state), but it also involves a process of minimization (finding the operator with least complexity):

C(|ψt⟩) ≡ min_{Û∈Ω} C(Û), Ω ≡ {Û ∈ U(N) : Û |ψ0⟩ = |ψt⟩}; (2.2)

where C denotes complexity and U(N) denotes the group of unitary operators in N dimensions, N being the dimension of the Hilbert space¹. In order to define the complexity of an operator Û, one must pick a universal basis and a cost function F. What exactly this basis is depends on whether one is referring to circuit complexity or to geometric complexity, but either way the basis contains mathematical objects which are in some way used to build Û. The cost function F, in turn, determines how each part of the decomposition of Û in the selected basis contributes to the complexity.

¹In the case of N-qubit systems, the operators are further restricted to belong to SU(2^N).

2.2 Circuit Complexity

Independently of how quantum computers are realized experimentally, quantum algorithms are studied theoretically in terms of systems constituted by sets of qubits, which are vectors in a two-dimensional Hilbert space. Computations are constituted by the action of discrete unitary operators on these qubits. A sequence of operations on a set of qubits constitutes a quantum circuit, and the unitary operators employed are often called gates.

It's important to realize that the Hilbert space contains an infinite number of possible states, independently of the number of qubits. Of course, if one is allowed to use any unitary operator, it is no problem at all to reach any desired target state starting from any reference state. However, in this context, one studies circuits constructed by the successive action of discrete operations contained in a finite predefined set. These building blocks are called elementary gates, and the circuits must contain a finite number of them. In the language of group theory, a circuit is built from a finite number of transformations belonging to a group determined by the elementary gates.

Such circuits cannot be used to reproduce with infinite precision the action of all possible unitary operators acting on the Hilbert space. However, it has been proven that any unitary (acting on any number of qubits) can be built, up to arbitrary precision, from a small set of elementary gates acting on only one or two qubits at a time [24, 23]. A set like this is called a universal set, and "up to arbitrary precision" means that a certain tolerance δ > 0 must be admitted.

With all of this out of the way, it's possible to define complexity. As mentioned in section 2.1, a universal basis must be picked, as well as a cost function. In the discrete case, the universal basis {Ûk} corresponds to the universal set of gates chosen to build the circuit. Then, in its simplest form, the complexity of Û is just the number of elementary gates employed in its construction. This is equivalent to choosing a cost function which establishes that all elementary gates contribute equally, each contributing 1 every time it is employed:

C(Û) = Σ_k λ_k n(Û_k), λ_k = 1 ∀ k; (2.3)

where n(Û_k) denotes the number of times the elementary gate Û_k is used in the construction of Û.
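The definitions in eqs. 2.2 and 2.3 can be made concrete with a small brute-force search (our own sketch, using the toy gate set {X, H} rather than a genuine universal set): the complexity of a target state is the length of the shortest gate sequence reaching it from |0⟩, with unit cost per gate:

```python
from itertools import product

# Sketch (ours) of eq. (2.2) with the unit-cost function of eq. (2.3):
# exhaustively search for the least number of elementary gates taking the
# reference state |0> to a given target state. The gate set {X, H} is a toy
# choice for illustration, not a true universal set.

X = [[0, 1], [1, 0]]                           # bit flip
H = [[2**-0.5, 2**-0.5], [2**-0.5, -2**-0.5]]  # Hadamard
GATES = {"X": X, "H": H}

def apply(gate, state):
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

def complexity(target, max_depth=4, tol=1e-9):
    """Length (and one witness) of the shortest circuit reaching `target`."""
    for depth in range(max_depth + 1):
        for word in product(GATES, repeat=depth):
            state = [1.0, 0.0]                 # reference state |0>
            for name in word:                  # gates applied left to right
                state = apply(GATES[name], state)
            if all(abs(s - t) < tol for s, t in zip(state, target)):
                return depth, list(word)
    return None

print(complexity([0.0, 1.0]))                  # |1> needs a single X
print(complexity([2**-0.5, -2**-0.5]))         # (|0>-|1>)/sqrt(2) needs two gates
```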
Other cost functions can be picked, for instance simply by changing the λ's, but also in other ways. Each choice corresponds to a different definition of circuit complexity. One must not fail to notice that the complexity of quantum circuits depends not only on the choice of the universal set of elementary gates and of the cost function, but also on the tolerance δ admitted. This isn't necessarily a problem, since complexity in computation is usually applied in practice to measure how many computational resources are necessary for a given objective; the definitions can therefore be adapted to each situation. Also, comparisons useful on a more theoretical level can be made if one considers trends, for instance, how much the complexity of a problem grows as the number of input variables increases. This is common both in classical and in quantum computation.

2.3 Geometric Complexity

If one knows two circuits that take a reference state to the same target state, it's easy to compare the complexities of the two unitary operators and find which is smaller. However, it is not a trivial task to find the smallest possible unitary that performs the same task. In order to solve this problem, Nielsen came up with a way of generalizing the previous definition of complexity, based on discrete operations, to one that contemplates the continuous evolution of quantum systems. With the new definitions, he managed to turn this into a differential geometry problem [27]. This is the so-called Nielsen approach. Once one is no longer restricted to a finite number of discrete operations, the tolerance previously required in the comparison of quantum states is no longer necessary, because any state can be accessed exactly. Going from discrete to continuous evolution amounts to adopting a Lie group.
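The basic Lie-group fact at play here can be seen numerically: a product of many near-identity factors converges to the exponential of the generator. In this sketch (ours; the generator σx and the values of α and n are arbitrary choices) we check that (Î + iαÂ/n)^n approaches e^{iαÂ} for a single 2x2 Hermitian generator:

```python
import math

# Sketch (ours): a continuous unitary as a limit of many infinitesimal steps,
# U = lim_{n->inf} (I + i*alpha*A/n)^n = exp(i*alpha*A), for the Hermitian
# generator A = sigma_x. For sigma_x the exact exponential is
# cos(alpha)*I + i*sin(alpha)*sigma_x, which we compare against.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

alpha, n = 0.7, 50_000
sx = [[0, 1], [1, 0]]                          # Pauli sigma_x
ident = [[1, 0], [0, 1]]
step = [[ident[i][j] + 1j * alpha * sx[i][j] / n for j in range(2)]
        for i in range(2)]                     # I + i*alpha*sx/n

U = [[1, 0], [0, 1]]
for _ in range(n):                             # multiply n infinitesimal steps
    U = matmul(U, step)

exact = [[math.cos(alpha), 1j * math.sin(alpha)],
         [1j * math.sin(alpha), math.cos(alpha)]]
err = max(abs(U[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err)                                     # O(1/n), far below 1e-4
```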
Hence, by continuous unitary transformations one means objects constructed by exponentiation of Hermitian generators picked from a universal set of allowed "elementary generators" Âj. In this context, a universal set of elementary generators is a set of generators from which any unitary operator Û can be built in the following way:

Û(α1, α2, ···) = lim_{n→∞} ( Î + (i/n) αj Âj )^n = e^{i αj Âj}, αj ∈ ℝ; (2.4)

where Û is given as a function of the coefficients αj. The set of generators amounts to a basis that spans the Lie algebra of the U(N) group (where N is the dimension of the Hilbert space).

However, in order to be able to compare all the possible unitary operators which perform the same task, one must consider the general case of a sequence of distinct infinitesimal unitary transformations. Therefore, the unitary operator Û(τ), which takes the reference state |ψ0⟩ into the target state |ψt⟩, is considered to depend on a parameter τ and is built in a way that's perfectly analogous to the construction of the time evolution operator in quantum mechanics:

Û |ψ0⟩ = |ψt⟩ , Û(1) ≡ Û, Û(0) ≡ Î, (2.5)

Û(τ) = P exp ( i ∫_0^τ dτ′ Ĥ(τ′) ); (2.6)

where P denotes a path-ordered exponential (which is the same as a time-ordered exponential) and the "Hamiltonian" Ĥ(τ) can be expanded in terms of the elementary generators Âj:

Ĥ(τ) = αj(τ) Âj. (2.7)

Finally, complexity C is defined in terms of a cost function F, which determines exactly how each of the coefficients αj(τ) contributes:

C(Û) = ∫_0^1 dτ F ( Û(τ), dÛ(τ)/dτ ). (2.8)

This equation shows that the complexity of Û is given by a functional integrated along a path parametrized by τ in the space of unitary operators. With these definitions, the task of finding the unitary with the smallest complexity among all those that execute a certain task becomes the task of minimizing the complexity functional by finding the optimal path between two points in the space of unitaries.
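To make the minimization of eq. 2.8 concrete, here is a toy example of ours with a single generator and the cost function F = |α(τ)|. Two coefficient schedules α(τ) reaching the same endpoint (same ∫α dτ, which fixes the target unitary when there is only one generator) are compared; the constant schedule, the geodesic of this trivial geometry, has the lower cost:

```python
import math

# Toy illustration (ours) of eq. (2.8): with a single generator the endpoint
# of the path depends only on the integral of alpha(tau), while the cost
# C = integral of |alpha(tau)| depends on the whole schedule. Two schedules
# with the same endpoint can therefore have different complexities.

def cost(alpha, steps=20_000):
    """Midpoint Riemann sum of C = int_0^1 |alpha(tau)| dtau."""
    d = 1.0 / steps
    return sum(abs(alpha((k + 0.5) * d)) * d for k in range(steps))

def angle(alpha, steps=20_000):
    """Midpoint Riemann sum of int_0^1 alpha(tau) dtau (fixes the endpoint)."""
    d = 1.0 / steps
    return sum(alpha((k + 0.5) * d) * d for k in range(steps))

theta = 0.5
const = lambda t: theta                                      # straight path
wiggly = lambda t: theta + 2.0 * math.sin(2 * math.pi * t)   # detour, same endpoint

print(angle(const), angle(wiggly))   # both 0.5: same target unitary
print(cost(const), cost(wiggly))     # the wiggly schedule costs more
```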
Additionally, Nielsen argued that a reasonable cost function should satisfy the following properties (here U denotes position in the space of unitary operators and v denotes a velocity vector tangent to the trajectory):

• Continuity.
• Positivity: F(U, v) ≥ 0, with F(U, v) = 0 ⟺ v = 0. (2.9)
• Positive homogeneity: F(U, λv) = λ F(U, v); λ ∈ ℝ, λ ≥ 0. (2.10)
• Triangle inequality: F(U, v1 + v2) ≤ F(U, v1) + F(U, v2). (2.11)

If one assumes that F is not only continuous but also smooth, then it has all the properties that characterize the length functional of a Finsler manifold², a generalization of a Riemannian manifold. This is the core idea of Nielsen's approach: one can minimize the complexity of Û by solving a tractable problem in differential geometry, namely finding the path of minimal length in a Finsler manifold whose length functional is defined by the cost function F. It's worth highlighting that, as in the discrete case, geometric complexity depends on the choice of F and on the choice of the basis of universal generators.

²The right side of eq. 2.8 actually corresponds to a length functional, and F to the length of an infinitesimal piece of curve. The choice of F is associated with the choice of a metric.

Chapter 3
Complexity in Time Evolution

3.1 Thermalization in Quantum Mechanics

In order to understand what makes quantum complexity an interesting concept in areas of physics other than quantum computing, it's necessary to consider thermalization. There are different ways to describe this concept, each emphasizing different ideas. The goal here isn't to discuss the intricacies of the different possible definitions, but to give a general notion of the physical meaning of the process under the different lenses that appear in the literature.
The original idea of thermalization is the foundation on which statistical mechanics and thermodynamics rely: the idea that, sometimes, when dealing with a physical system with a large number of DOF's, many physical properties don't depend on the details of the state of the system in terms of each of its DOF's, but only on a statistical description of the system as a whole. Systems behaving like this are said to have reached thermodynamic equilibrium or, equivalently, to have thermalized.

In quantum mechanics, this translates into saying that, in a system in a thermalized pure state |ψ(t)⟩, the expectation values of a set of operators {Ôj} tend not to vary much in time and to remain close to the expectation values of these same operators calculated in some thermal ensemble. In other words, with regard to these operators, the state of the system resembles a mixed state described in terms of a density matrix with an associated temperature:

⟨ψ(t)| Ôj |ψ(t)⟩ ∼ ⟨Ôj⟩_β = Tr( ρ̂_β Ôj ), (3.1)

where ρ̂_β denotes the density matrix with inverse temperature β. More specifically, this description refers to so-called strong thermalization. A review of this definition and several other topics related to thermalization in many-body quantum mechanics is given in [28].

3.2 Scrambling and Operator Growth

One can also think of thermalization in terms of quantum information. A thermal density matrix is characterized by maximal von Neumann entropy, or entanglement entropy. Correspondingly, in a thermalized pure state the DOF's of the system are maximally entangled, i.e., the entanglement entropy of each density matrix describing a small partition of the system is maximized (these reduced density matrices are thermal). Therefore, the process of thermalization involves the spreading of information among the DOF's of the system.
Consider, for instance, a system containing N interacting DOF's with an initial state |ψ0⟩ in which these DOF's are completely unentangled. As time passes and the state thermalizes, information regarding any DOF is spread throughout the whole system due to interactions. Once entanglement is maximized, recovering any information about the initial state is only possible if one examines a number of DOF's of the same order of magnitude as N. This process is called scrambling, and the thermalized system is called scrambled [18]. The speed at which this occurs is determined by the form of the interactions present in the system and, therefore, by its Hamiltonian.

Systems with k-local Hamiltonians, where k can be much smaller than the total number of DOF's, are typically fast scramblers [29, 30]. A k-local system is one which isn't local in the typical sense of spatial locality, since every DOF can interact with any other DOF, but it's still local in the sense that interactions only happen between k DOF's at once; this restricts the spread of information to some degree.

Figure 3.1: Illustration of the spread of information in a 2-local random circuit. The qubits are represented by horizontal lines and interactions by vertical wiggly lines. Evolution occurs from left to right, and the red color highlights the spread of information from one qubit to all others.

The defining characteristic of fast scramblers is the fact that their scrambling time is much smaller than that of systems with Hamiltonians which are spatially local and don't allow for interactions between all DOF's. More specifically, the scrambling time of fast scramblers is logarithmic in the number of DOF's¹ [18, 29, 19]; these systems are particularly relevant due to their relation to chaos and black holes.
On the other hand, systems whose DOF's are arranged periodically in d spatial dimensions and which only have interactions between DOF's close to each other in space have scrambling times with a lower bound that grows linearly with N^{2/d}, where N is again the number of DOF's [18, 19].

Just as it can be useful to think of thermalization in terms of operators instead of states, the same is true for scrambling. What happens to Heisenberg-picture operators during scrambling is that they go from "simple" or "local" to "complex" or "non-local" operators. Once more, the word local isn't being used in a spacetime sense, but refers to the size of the subspace on which the operator acts: operators which initially act on only one or a few DOF's evolve during scrambling into operators which act on a much larger number of DOF's or, equivalently, on a larger subspace of the Hilbert space [31, 32]. This is called operator growth.

3.3 Complexity and Thermalization in k-Local Fast Scramblers

Let's turn our attention to the growth of complexity in the time evolution of k-local fast scramblers and compare it to thermalization. Additionally, some comparisons with the complexity and entropy of classical systems will provide interesting observations. The following analysis is based on similar ones found in the literature and is closely related to a conjecture called "the second law of quantum complexity", which won't be explained in detail [33, 29, 30, 24].

Consider a system of N classical bits. The associated space of states contains 2^N possible states, corresponding to either 0 or 1 for each bit. If one imagines the system in an ensemble with an associated probability distribution, then the maximum possible Shannon entropy² (S_max) occurs when all possible states have the same probability. That gives:

S_max = − Σ_{n=1}^{2^N} p_n log p_n = −2^N (1/2^N) log(1/2^N) = N log 2, (3.2)

¹A simple argument for this is given in section 3.3.
²This concept is explained briefly in appendix B.
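Eq. 3.2 is easy to check numerically. The sketch below (ours) evaluates the Shannon entropy of the uniform distribution over the 2^N bit strings and compares it with N log 2:

```python
import math

# A quick numerical check (ours) of eq. (3.2): the Shannon entropy of the
# uniform distribution over the 2^N states of N classical bits equals N log 2.

def shannon_entropy(probs):
    """S = -sum_n p_n log p_n (natural log), skipping zero-probability states."""
    return -sum(p * math.log(p) for p in probs if p > 0)

N = 10
uniform = [1 / 2**N] * 2**N          # equal probability for all 2^N states
print(shannon_entropy(uniform))      # equals N*log(2) up to rounding
print(N * math.log(2))
```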
where p_n indicates the probability of each state in the ensemble. Now, to calculate the maximum possible complexity (C_max), we must define a basis of elementary gates, a reference state and a cost function. This is a trivial case in which any state can be accessed by consecutive applications of one simple gate which flips one bit. To keep it simple, let's define the complexity as just the total number of gates in the circuit. Then C_max is the same for any reference state, and it's equal to N; the state of maximum complexity is reached by flipping all the bits. Summarizing the results, we have:

S_max = N log 2, C_max = N; (3.3)

both linear in N. In the quantum case, the story is very different. First of all, the Hilbert space is 2^N-dimensional, and the possible states are given by:

Σ_j α_j |j⟩ , α_j ∈ ℂ; (3.4)

where {|j⟩} is a basis for the space, each basis vector representing one of the possible configurations of a system of N classical bits. The first easy observation is that the space of quantum states is far larger than the space of classical states, and its vastness is quite remarkable. This aspect of quantum mechanics was first pointed out by Feynman in [34], and it is very relevant in the study of complexity. Note also that each pure quantum state can be associated with one of the possible probability distributions representing the whole classical ensemble presented earlier.

Now imagine that we're building a model for a physical system. Ultimately, we're interested in the complexity of the time evolution of fast scramblers, so let's consider that the time evolution of this system can be modeled by a 2-local random quantum circuit, and then let's look at the complexity of this circuit. For simplicity, we'll adopt the simple definition of complexity as the total number of gates.
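The 2-local random circuit described in the text can be sketched as follows (our illustration; the two-qubit gate names are hypothetical placeholders). Each step pairs the N qubits at random and applies one randomly chosen gate per pair, so the gate count after τ steps is Nτ/2:

```python
import random

# Sketch (ours) of the 2-local random circuit of the text: at each step the
# N qubits are paired at random and a random elementary gate acts on each
# pair, so each step contributes N/2 gates and C(tau) = N*tau/2.

GATE_SET = ["CNOT", "CZ", "SWAP"]    # hypothetical elementary 2-qubit gates

def random_circuit(N, tau, rng=random.Random(0)):
    """Return a list of steps; each step is a list of (gate, qubit_pair)."""
    steps = []
    for _ in range(tau):
        qubits = list(range(N))
        rng.shuffle(qubits)          # random pairing of the N qubits
        step = [(rng.choice(GATE_SET), (qubits[i], qubits[i + 1]))
                for i in range(0, N, 2)]
        steps.append(step)
    return steps

N, tau = 8, 5
circuit = random_circuit(N, tau)
gate_count = sum(len(step) for step in circuit)
print(gate_count, N * tau // 2)      # both 20: C(tau) = N*tau/2
```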
What is meant here by a 2-local random circuit is a circuit built by application of gates randomly selected from a set of elementary gates which act on random pairs of qubits (figure 3.1). Imagine this circuit divided into steps, each containing $N/2$ commuting gates. At each step, each qubit interacts with one other randomly selected qubit by the action of a randomly selected gate [29]. The number of steps required in the application of a circuit is generally regarded as the depth of the circuit. It follows that, for the circuit in question, the total number of gates (the complexity) as a function of the depth $\tau$ is:
$$C(\tau) = \frac{N\tau}{2}. \tag{3.5}$$
In order to analyze more deeply the behavior of complexity in this system, a counting argument will be employed, which relies on the estimation of the number of operators contained in the $SU(2^N)$ group³. Naturally, a Lie group contains an infinite number of elements; however, as explained in section 2.2, when dealing with discrete circuits, one must allow for a certain tolerance $\delta$ in the comparison of operators. In the present case, this amounts to dividing the $SU(2^N)$ group into many small pieces with size depending on $\delta$ and considering all the operators inside each piece as equivalent. The number of possible operators is then given by the ratio of the volume of the $SU(2^N)$ group to the volume of each piece. This won't be demonstrated here, but the volume of $SU(M)$ is:
$$V_{SU(M)} = \frac{2\,\pi^{(M+2)(M-1)/2}}{2!\,3!\cdots(M-1)!}, \tag{3.6}$$
as given in [33]. For simplicity, consider each of the pieces of the $SU(M)$ group to have the volume of a hyper-sphere of radius $\delta$, living in an $(M^2-1)$ dimensional euclidean space⁴. If we take $(M^2-1)$ to be even, this volume is [33, 24]:
$$V_S = \frac{\pi^{(M^2-1)/2}}{\left((M^2-1)/2\right)!}\,\delta^{M^2-1}. \tag{3.7}$$
The desired ratio is then given by:
$$\frac{V_{SU(M)}}{V_S} = \left(\frac{\pi^{(M-1)/2}}{\delta^{M^2-1}}\right)\frac{2\left((M^2-1)/2\right)!}{2!\,3!\cdots(M-1)!}. \tag{3.8}$$
Using Stirling's formula [36] and keeping only the leading order in $M$, one can show that:
$$\ln\left(\frac{V_{SU(M)}}{V_S}\right) \sim M^2\left(\ln M + \ln\frac{1}{\delta}\right). \tag{3.9}$$
³This is the group of unitary transformations acting on a Hilbert space of $N$ qubits.
⁴$(M^2-1)$ is the number of dimensions of the $SU(M)$ group [35]. Also, it makes sense to consider spheres in euclidean space if we're interested in small $\delta$.
Therefore, with $M = 2^N$, we obtain the estimate for the number of possible operators in the discretized $SU(2^N)$ group at large $N$:
$$\exp\left[4^N\left(N\ln 2 + \ln\frac{1}{\delta}\right)\right] = \left(\frac{2^N}{\delta}\right)^{4^N}. \tag{3.10}$$
Now, when analyzing circuits up to a certain depth, if it's assumed that the circuit generating time evolution is the least complex one leading to the target state, then the complexity of the target state grows linearly with $\tau$. This assumption is quite reasonable if $\tau$ is relatively small. However, as $\tau$ gets bigger, the chance that the random circuit stops being the most efficient one also increases (figure 3.2). Imagine that at a certain point $\tau$ reaches a value so big that the number of possible circuits with this depth is greater than the total number of distinct operators in the group; that necessarily means that some of these possible circuits constitute the same unitary. Therefore, it's safe to assume that complexity grows linearly for some time and that it stops growing after a certain point.
Figure 3.2: As the random circuit increases, there's also an increase in the chance of the corresponding trajectory in the space of operators (black line) becoming more complex than another more efficient trajectory (blue line). The grey circle amplifies part of the line revealing the discreteness of the circuit.
Once it stops growing, complexity reaches a plateau.
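As a rough numerical cross-check (ours, not from the dissertation), the exact logarithm of the ratio in eq. (3.8) can be evaluated with log-gamma functions and compared with the leading-order estimate of eq. (3.9); since the estimate keeps only leading terms, we only expect agreement in order of magnitude:

```python
import math

# Order-of-magnitude check of eq. (3.9): evaluate ln(V_SU(M)/V_S) exactly
# from eqs. (3.6)-(3.7) via lgamma, and compare with M^2 (ln M + ln(1/delta)).
def log_ratio(M, delta):
    log_v_su = math.log(2) + (M + 2) * (M - 1) / 2 * math.log(math.pi) \
        - sum(math.lgamma(k + 1) for k in range(2, M))
    log_v_s = (M**2 - 1) / 2 * math.log(math.pi) \
        - math.lgamma((M**2 - 1) / 2 + 1) + (M**2 - 1) * math.log(delta)
    return log_v_su - log_v_s

M, delta = 16, 0.01
exact = log_ratio(M, delta)
estimate = M**2 * (math.log(M) + math.log(1 / delta))
print(exact, estimate)  # same order of magnitude
```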
We can estimate the value of this plateau and the value of $\tau$ required to reach it by calculating the number of possible unitaries with a certain depth and then equating it to the number of distinct operators in the group. There are $\frac{N!}{(N/2)!}$ ways of pairing the $N$ qubits at each step of the circuit. If the gates are selected from a set of $p$ elementary gates, then the number of distinct possibilities for each step is:
$$\left(\frac{N!}{(N/2)!}\right)p^{N/2} \sim (Np)^{N/2}, \tag{3.11}$$
where the relation is obtained by using Stirling's formula and assuming large $N$. Therefore, with $\tau$ steps, the number of possible circuits is exponential in $N$:
$$(Np)^{N\tau/2}, \tag{3.12}$$
where we recognize in the exponent the expression for complexity as a function of $\tau$ from eq. 3.5. Also, using the properties of the log, one can show that:
$$\ln\left[(Np)^{N\tau/2}\right] = \frac{N\tau}{2}\ln(Np). \tag{3.13}$$
With a simple comparison between eqs. 3.10 and 3.13, we finally obtain estimates for the complexity of the plateau ($C_{\max}$) and for the number of steps required to reach it ($\tau_{\max}$):
$$\frac{N\tau_{\max}}{2}\ln(Np) = 4^N\left(N\ln 2 + \ln\frac{1}{\delta}\right) \;\therefore\; C_{\max} = \frac{N\tau_{\max}}{2} = \frac{4^N\left(N\ln 2 + \ln\frac{1}{\delta}\right)}{\ln(Np)}. \tag{3.14}$$
At large $N$, the exponential factor dominates the behavior of the function, so we can just regard both $C_{\max}$ and $\tau_{\max}$ as exponential in $N$:
$$C_{\max} \sim \tau_{\max} \sim 4^N. \tag{3.15}$$
This result is very different from the one found in the classical case (eq. 3.3), in which the maximum complexity grows linearly with $N$. There is, however, an even more interesting observation to be made regarding $\tau_{\max}$ and the scrambling time ($\tau_*$). In section 3.2 it was stated that the $\tau_*$ of fast scramblers has a lower bound proportional to $\log N$. It's easy to see that the $\tau_*$ of the system considered in the present example fulfills this expectation. Consider that information contained in one qubit in the reference state is shared with other qubits as they interact; since interactions are always between pairs, the number of qubits sharing that information after $\tau$ steps is given by⁵ $2^\tau$.
Therefore, it follows from the definition of scrambling that the system will be scrambled when:
$$2^{\tau} \sim N, \tag{3.16}$$
consequently, the number of steps required for scrambling is logarithmic in $N$:
$$\tau_* \sim \log_2 N. \tag{3.17}$$
⁵We're considering that the same two qubits don't interact twice, which is a reasonable approximation when dealing with large $N$.
Assuming that the physical time associated with these processes is proportional to the number of steps, what we see here is that the growth of quantum complexity is a phenomenon very distinct from scrambling in what concerns its time scale. This shows that, while for some purposes the history of a system may be considered over once it thermalizes, if one cares about complexity, thermalization is merely the beginning. Indeed, the difference in time scales is enormous. For instance, for a mere 100 DOF's, one finds:
$$\tau_* \sim \log_2 100 \approx 6.6, \tag{3.18}$$
$$\tau_{\max} \sim 4^{100} \sim 10^{60}; \tag{3.19}$$
that means that if one step were to be taken as equivalent to one second, while thermalization wouldn't even take ten seconds, the entire age of the universe would pale in comparison to the time necessary for reaching the complexity plateau. This comparison becomes far more dramatic the bigger the value of $N$. Although more evidence was gathered for it later, such as given in [37], the behavior described for the time evolution of complexity was initially conjectured in [29, 30], along with the second law of quantum complexity. In very general terms, the statement of this law, for which some intuition has been given here, is that the growth of quantum complexity in a fast scrambler with $N$ DOF's is a statistical tendency and can be seen as analogous to the growth of entropy in a classical system with a number of DOF's exponential in $N$. Looking back at the given example, if one takes the maximum entropy found for the system of $N$ classical bits (eq. 3.3) and plugs in $2^N$ instead of $N$, then the value becomes $2^N\log 2$, which is proportional to the maximum complexity found for the system of $N$ qubits, a result which illustrates the point. Finally, one may ask why physicists care about what happens to a system after thermalization, if the observables which are typically of physical interest then remain constant in time (setting aside simple curiosity, of course). The answer to this is of utmost importance: complexity can be related via holographic dualities to relevant quantities in gravitational theories, providing helpful insights into the nature of black holes and of gravity itself. In fact, complexity was adopted as an interesting concept in holography in the first place because its behavior mirrors the time evolution of black holes, given the long time scale required to reach the complexity plateau [10, 11, 12].

3.4 Chaos and Complexity

How does one probe, or even define, quantum chaos? This isn't a simple matter, and a lot of work has been done in many-body quantum mechanics trying to deal with aspects of this question [28, 38]. Nevertheless, fundamentally it comes down to the classification of physical systems with respect to their dynamical behavior and, more specifically, to a notion of "predictability". The initial idea comes from classical mechanics, where the study of chaos started and where such a notion could be well matched with the analysis of trajectories in phase space. However, it couldn't be directly translated to quantum mechanics, since trajectories in phase space lose their meaning in this realm. This conception can be formulated in terms of concepts associated with information theory, which have become more common in the literature in recent years. In fact, the concept of complexity seems well suited to deal with the problem of chaos. After all, what does it mean for a system to be more or less predictable than others?
It means that, given the initial conditions and the laws governing time evolution, some systems require less computation than others for the prediction of their behavior. This matches the idea of chaos in the classical case, since predicting the behavior of chaotic systems requires an actual simulation of them, which means performing an amount of computation that grows larger and larger as the system evolves. Furthermore, in the quantum case, complexity is directly related to scrambling and operator growth, important aspects of many-body dynamics. It has been shown in recent years that complexity provides a promising quantitative framework for the classification of the dynamical behavior of quantum systems, and it has been suggested that it may come to supersede the notion of chaos in quantum mechanics [32]. Although geometric complexity itself has been proposed as a measure of chaos [39], Krylov complexity has particularly interesting characteristics. Although they refer to distinct concepts, scrambling, operator growth, thermalization, chaos and complexity growth must not be regarded as independent physical phenomena; instead, it's more accurate to regard them as different aspects or different manifestations of the same process.

Chapter 4
Krylov Complexity

Krylov complexity was initially derived independently from the other notions of quantum computational complexity. It was conceived as a measure of operator growth in the context of many-body quantum mechanics [32]. Nonetheless, it has been shown to be related to geometric complexity [40]. An analogous definition was proposed for the complexity of quantum states, named "spread complexity" [41], although it may occasionally be referred to as Krylov complexity as well, since both rely on the so-called Krylov basis.
Some of the interesting aspects of Krylov complexity are:
• Its inherent relation to time evolution allows it to avoid some of the arbitrariness present in geometric complexity (chapter 2); it also makes it particularly suited for the study of directly related phenomena such as chaos.
• It can be calculated from two-point functions, which can be very convenient and, for instance, makes it readily applicable to studies in QFT.
• It bounds other measures of operator growth or of quantum chaos which were already employed in many-body quantum mechanics previously [32].
This chapter contains only some of the more important definitions necessary to understand this subject, and not necessarily in their most general form. Further information, such as certain demonstrations and related concepts, can be found in the literature. Krylov complexity was defined initially in [32] and spread complexity in [41]. Good reviews of Krylov complexity can be found in [21, 40]. The complexity of operators is more relevant for this project; however, it is instructive to introduce spread complexity first, even though this is opposite to the chronological order in which these concepts were developed. Unless otherwise specified, the Hamiltonians under consideration are time-independent.

4.1 Spread Complexity

The evolution of a quantum state $|\psi(t)\rangle$ is given by the following differential equation:
$$i\frac{d}{dt}|\psi(t)\rangle = \hat{H}|\psi(t)\rangle. \tag{4.1}$$
In the case of time-independent Hamiltonians, the solution is given by a time evolution operator with the form of a simple exponential:
$$|\psi(t)\rangle = e^{-i\hat{H}t}|\psi\rangle, \qquad |\psi\rangle \equiv |\psi(0)\rangle. \tag{4.2}$$
By using the Taylor expansion of the exponential at $t = 0$, one obtains:
$$|\psi(t)\rangle = \sum_{n=0}^{\infty}\frac{(-it)^n}{n!}\hat{H}^n|\psi\rangle. \tag{4.3}$$
Therefore, $|\psi(t)\rangle$ can be expanded in terms of the vectors $\hat{H}^n|\psi\rangle$; however, these are not orthogonal in general.
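As an illustration (our own sketch, not from the dissertation), the orthonormalization of the vectors $\hat{H}^n|\psi\rangle$ can be carried out numerically for a random Hermitian $\hat{H}$; in the resulting Krylov basis, $\hat{H}$ comes out tridiagonal with real entries, a property derived later in this chapter:

```python
import numpy as np

# Sketch of the Lanczos/Gram-Schmidt construction: orthonormalize the
# Krylov vectors H^n|psi> for a random Hermitian H, then check that H is
# tridiagonal (and real) in the resulting basis.
rng = np.random.default_rng(0)
d = 8
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                 # a generic Hermitian "Hamiltonian"
psi = np.zeros(d, dtype=complex)
psi[0] = 1.0                             # reference state |K_0>

basis = [psi]
for _ in range(d - 1):
    v = H @ basis[-1]
    for k in basis:                      # subtract projections onto earlier |K_j>
        v = v - k * (k.conj() @ v)
    basis.append(v / np.linalg.norm(v))  # normalize the new Krylov vector

K = np.array(basis)                      # rows are the Krylov vectors
H_krylov = K.conj() @ H @ K.T            # matrix elements <K_m|H|K_n>
print(np.round(H_krylov.real, 3))        # tridiagonal; imaginary parts ~ 0
```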
The so-called Lanczos algorithm allows one to construct an orthonormal basis starting from $|\psi\rangle$ and proceeding iteratively with the action of $\hat{H}$ followed by Gram-Schmidt orthonormalization. The basis obtained in this manner is the Krylov basis¹ $\{|K_n\rangle\}$. Spread complexity $K(t)$ is then defined in terms of the expansion of $|\psi(t)\rangle$ in this basis:
$$K(t) = \sum_n n\,|\psi_n(t)|^2, \qquad \psi_n(t) \equiv \langle K_n|\psi(t)\rangle. \tag{4.4}$$
Let's go through this process in detail. We start by defining the first vector of the basis:
$$|K_0\rangle \equiv |\psi\rangle. \tag{4.5}$$
By acting with $\hat{H}$ on $|K_0\rangle$, we construct a new vector which is, in general, linearly independent from $|K_0\rangle$. Then, we subtract from $\hat{H}|K_0\rangle$ its projection onto $|K_0\rangle$, obtaining $|A_1\rangle$:
$$|A_1\rangle \equiv \hat{H}|K_0\rangle - \hat{P}_0\left(\hat{H}|K_0\rangle\right), \qquad \hat{P}_j \equiv |K_j\rangle\langle K_j|. \tag{4.6}$$
The next vector of the basis is defined as the normalized version of $|A_1\rangle$:
¹The Krylov basis doesn't necessarily span the entire Hilbert space of states, only the subspace explored by $|\psi(t)\rangle$ from $t = 0$ to infinity.
$$|K_1\rangle \equiv |A_1\rangle\,\langle A_1|A_1\rangle^{-1/2}. \tag{4.7}$$
We obtain the other vectors of the Krylov basis by repeating these steps, always subtracting from $\hat{H}|K_n\rangle$ its projection onto the subspace spanned by all the basis vectors defined up to that point, i.e.:
$$|A_n\rangle \equiv \hat{H}|K_{n-1}\rangle - \sum_{j=0}^{n-1}\hat{P}_j\left(\hat{H}|K_{n-1}\rangle\right), \tag{4.8}$$
$$|K_n\rangle \equiv |A_n\rangle\,\langle A_n|A_n\rangle^{-1/2}. \tag{4.9}$$
We can find further information by analyzing these projections. Consider the following:
$$\hat{P}_m\hat{H}|K_n\rangle = |K_m\rangle\langle K_m|\hat{H}|K_n\rangle, \qquad m \le n. \tag{4.10}$$
By using eqs. 4.8 and 4.9, we obtain:
$$\hat{P}_m\hat{H}|K_n\rangle = |K_m\rangle\left(\langle K_{m+1}|\,\langle A_{m+1}|A_{m+1}\rangle^{1/2} + \sum_{j=0}^{m}\langle K_m|\hat{H}\hat{P}_j\right)|K_n\rangle, \tag{4.11}$$
but, since the Krylov vectors are orthonormal and $m \le n$, this simplifies to:
$$\hat{P}_m\hat{H}|K_n\rangle = \left(\delta_{n,m+1}\langle A_{m+1}|A_{m+1}\rangle^{1/2} + \delta_{nm}\langle K_m|\hat{H}|K_m\rangle\right)|K_m\rangle, \tag{4.12}$$
which also implies, by action of $\langle K_m|$ from the left:
$$\langle K_m|\hat{H}|K_n\rangle = \delta_{n,m+1}\langle A_{m+1}|A_{m+1}\rangle^{1/2} + \delta_{nm}\langle K_m|\hat{H}|K_m\rangle. \tag{4.13}$$
This result clearly shows that the matrix elements $\hat{H}_{mn} \equiv \langle K_m|\hat{H}|K_n\rangle$ are always real and are non-zero if and only if $|m - n| \le 1$.
In other words, the Hamiltonian is tridiagonal and symmetric in the Krylov basis. These matrix elements are called Lanczos coefficients and are denoted by $a_n$ and $b_n$:
$$a_n \equiv \hat{H}_{nn}, \qquad b_n \equiv \hat{H}_{n,n-1} = \langle A_n|A_n\rangle^{1/2}, \tag{4.14}$$
$$\{\hat{H}_{mn}\} \longleftrightarrow \begin{pmatrix} a_0 & b_1 & 0 & 0 & \cdots \\ b_1 & a_1 & b_2 & 0 & \cdots \\ 0 & b_2 & a_2 & b_3 & \cdots \\ 0 & 0 & b_3 & a_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{4.15}$$
Additionally, using eq. 4.12, we can rewrite the expressions for $|A_n\rangle$ and for $|K_n\rangle$ (eqs. 4.8 and 4.9) in terms of the Lanczos coefficients:
$$|A_n\rangle = \hat{H}|K_{n-1}\rangle - a_{n-1}|K_{n-1}\rangle - b_{n-1}|K_{n-2}\rangle, \tag{4.16}$$
$$|K_n\rangle = |A_n\rangle\,b_n^{-1}. \tag{4.17}$$
With all these results in hand, we can find an expression relating the coefficients $\psi_n(t)$ and the Lanczos coefficients. We start by applying the Schrödinger equation (4.1) to the expansion of the state $|\psi(t)\rangle$ in the Krylov basis:
$$|\psi(t)\rangle = \sum_m \psi_m(t)|K_m\rangle, \tag{4.18}$$
$$\sum_m i\frac{d}{dt}\psi_m(t)|K_m\rangle = \sum_m \psi_m(t)\hat{H}|K_m\rangle; \tag{4.19}$$
by acting with $\langle K_n|$ from the left, we get:
$$i\frac{d}{dt}\psi_n(t) = \sum_m \psi_m(t)\hat{H}_{nm} \tag{4.20}$$
$$\therefore\; i\frac{d\psi_n(t)}{dt} = a_n\psi_n(t) + b_{n+1}\psi_{n+1}(t) + b_n\psi_{n-1}(t), \qquad b_0 \equiv 0. \tag{4.21}$$
This result allows one to compute the coefficients $\psi_n(t)$ requiring previous knowledge of the Lanczos coefficients and of $\psi_0(t) = \langle K_0|\psi(t)\rangle = \langle\psi|\psi(t)\rangle$.

4.2 Krylov Complexity in the Space of Operators

Consider an operator $\hat{O}(t)$ in the Heisenberg picture. Krylov complexity is a measure of the spread of this operator in the Hilbert space of operators. This means that the operator is regarded as a vector and denoted by:
$$\hat{O}(t) \equiv |\hat{O}(t)), \tag{4.22}$$
with an inner product defined as:
$$(\hat{Q}|\hat{W}) \equiv \langle\Omega|\,\hat{Q}^\dagger\hat{W}\,|\Omega\rangle, \tag{4.23}$$
where $\hat{Q}$ and $\hat{W}$ are any two operators and $|\Omega\rangle$ can be any quantum state. The time evolution of $\hat{O}(t)$ is found by application of the Baker-Campbell-Hausdorff formula [42]. By defining the Liouvillian superoperator $\mathcal{L}$ in terms of the Hamiltonian $\hat{H}$, it's possible to write the time evolution of $\hat{O}(t)$ in a way that's analogous to that of typical quantum states:
$$\mathcal{L}|\hat{O}) \equiv [\hat{H},\hat{O}], \tag{4.24}$$
$$|\hat{O}(t)) = \sum_{n=0}^{\infty}\frac{(it)^n}{n!}\,\mathcal{L}^n|\hat{O}) = e^{it\mathcal{L}}|\hat{O}). \tag{4.25}$$
From here on, everything is defined in analogy to the case of spread complexity. $\hat{O}(t)$ can be expanded in terms of the vectors $\mathcal{L}^n|\hat{O})$ and the Krylov basis $\{|\hat{O}_n)\}$ is built via the Lanczos algorithm. This won't be shown in detail for this case because it's the same process described in section 4.1. Krylov complexity $K(t)$ is defined in terms of the coefficients $\psi_n(t)$, which are, in turn, defined from the expansion of $|\hat{O}(t))$ in the Krylov basis:
$$\hat{O}(t) \equiv \sum_n i^n\psi_n(t)\hat{O}_n, \qquad \psi_n(t) \equiv \frac{1}{i^n}(\hat{O}_n|\hat{O}(t)); \tag{4.26}$$
$$K(t) \equiv \sum_n n\,|\psi_n(t)|^2. \tag{4.27}$$
The relation between the coefficients $\psi_n(t)$ and the vectors $\mathcal{L}^n|\hat{O})$ is given by the matrix elements $\mathcal{L}_{mn} \equiv (\hat{O}_m|\mathcal{L}|\hat{O}_n)$. If these elements constitute a hermitian matrix², then it follows that such a matrix must also be tridiagonal and symmetric. In this case, its elements are the Lanczos coefficients:
²It can be shown that if the state $|\Omega\rangle$ in the definition of the inner product (eq. 4.23) is an eigenstate of $\hat{H}$, this matrix is indeed Hermitian.
$$\{\mathcal{L}_{mn}\} \longleftrightarrow \begin{pmatrix} a_0 & b_1 & 0 & 0 & \cdots \\ b_1 & a_1 & b_2 & 0 & \cdots \\ 0 & b_2 & a_2 & b_3 & \cdots \\ 0 & 0 & b_3 & a_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{4.28}$$
Additionally, if the operator $\hat{O}$ is hermitian, all coefficients $a_n$ are 0. The coefficients $\psi_n(t)$, from which Krylov complexity is calculated, can be obtained from the Lanczos coefficients according to the following equation:
$$\frac{d\psi_n(t)}{dt} = ia_n\psi_n(t) - b_{n+1}\psi_{n+1}(t) + b_n\psi_{n-1}(t), \tag{4.29}$$
where $b_0 \equiv 0$ and $\psi_0(t) = (\hat{O}_0|\hat{O}(t)) = (\hat{O}|\hat{O}(t))$.

4.3 Finite Temperature

Krylov complexity can be generalized to the case of a canonical ensemble. In order to do this, one can redefine the inner product in the Hilbert space of operators in the following way:
$$(\hat{Q}|\hat{W})_\beta \equiv \langle\hat{Q}^\dagger\hat{W}\rangle_\beta = \frac{1}{Z}\,\mathrm{Tr}\left(\hat{Q}^\dagger\hat{W}e^{-\beta\hat{H}}\right), \qquad Z = \mathrm{Tr}\left(e^{-\beta\hat{H}}\right); \tag{4.30}$$
where $\beta$ is the inverse temperature and $\hat{Q}$ and $\hat{W}$ are any two operators. Using the cyclicity of the trace, it's easy to show that:
$$(\mathcal{L}^n\hat{Q}|\hat{W})_\beta = (\hat{Q}|\mathcal{L}^n\hat{W})_\beta \equiv (\hat{Q}|\mathcal{L}^n|\hat{W})_\beta.$$
(4.31)
The Krylov basis is defined using the same procedure as in the case of pure states, and then the matrix elements $(\hat{O}_m|\mathcal{L}|\hat{O}_n)_\beta$ can be obtained, necessarily giving a Hermitian matrix. In the limit $\beta \to \infty$, these definitions reduce to the case of pure states with the ground state in the definition of the inner product (eq. 4.23 with $|\Omega\rangle$ being the ground state).

4.4 The Moment Method

In practice, directly implementing the Lanczos algorithm isn't always the best way of obtaining the Lanczos coefficients. The so-called moment method allows one to obtain the coefficients from a two-point function:
$$C(t,\beta) \equiv (\hat{O}(t)|\hat{O})_\beta,$$
or from its Fourier transform $\tilde{C}(\omega,\beta)$. In this section, $\hat{O}$ is considered to be a normalized operator, which means $(\hat{O}|\hat{O})_\beta = 1$. The moments $M_n$ are defined as:
$$M_n \equiv (\hat{O}_0|\mathcal{L}^n|\hat{O}_0)_\beta. \tag{4.32}$$
They are related to the coefficients of the Taylor expansion of $C(t,\beta)$ at $t = 0$:
$$M_n = \frac{1}{(-i)^n}\left.\frac{d^nC(t,\beta)}{dt^n}\right|_{t=0}, \tag{4.33}$$
$$C(t,\beta) = \sum_{n=0}^{\infty}\left(\left.\frac{d^nC(t,\beta)}{dt^n}\right|_{t=0}\right)\frac{t^n}{n!} = \sum_{n=0}^{\infty}M_n\frac{(-it)^n}{n!}. \tag{4.34}$$
They can also be found from $\tilde{C}(\omega,\beta)$, often referred to as the spectral function $\Phi(\omega,\beta)$:
$$M_n = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\omega^n\,\Phi(\omega,\beta), \qquad \Phi(\omega,\beta) \equiv \tilde{C}(\omega,\beta). \tag{4.35}$$
It's easy to see how the moments and the Lanczos coefficients are related by thinking in terms of matrix multiplication, since:
$$M_n \equiv (\hat{O}_0|\mathcal{L}^n|\hat{O}_0)_\beta = (\mathcal{L}^n)_{00}. \tag{4.36}$$
This implies that the coefficients can be extracted from a set of moments. In the case of $N + 1$ moments (from $M_0$ to $M_N$), with $N$ even, this can be done via a recursive algorithm defined by the following equations:
$$M^0_k \equiv (-1)^k M_k, \qquad (0 \le k \le N); \tag{4.37}$$
$$L^0_k \equiv (-1)^{k+1} M_{k+1}, \qquad (0 \le k \le N-1); \tag{4.38}$$
$$M^n_k \equiv L^{n-1}_k - L^{n-1}_{n-1}\left(\frac{M^{n-1}_k}{M^{n-1}_{n-1}}\right), \qquad (0 \le k \le N-n),\ (1 \le n \le N/2); \tag{4.39}$$
$$L^n_k \equiv \left(\frac{M^n_{k+1}}{M^n_n}\right) - \left(\frac{M^{n-1}_k}{M^{n-1}_{n-1}}\right), \qquad (0 \le k \le N-n-1),\ (1 \le n \le N/2-1); \tag{4.40}$$
$$b_n = \sqrt{M^n_n}, \qquad (1 \le n \le N/2); \tag{4.41}$$
$$a_n = -L^n_n, \qquad (0 \le n \le N/2-1); \tag{4.42}$$
where the upper indices in $M^n_k$ or $L^n_k$ are labels, not to be confused with powers. These equations can be somewhat confusing, so the following explanation of the algorithm may be useful:
1. One starts by defining the number of moments available. Consider $N + 1$ moments (from $M_0$ to $M_N$), where $N$ is taken to be even.
2. One obtains $\{M^0_0, M^0_1, \cdots, M^0_N\}$ and $\{L^0_0, L^0_1, \cdots, L^0_{N-1}\}$ using eqs. 4.37 and 4.38.
3. Using eq. 4.39, one finds $\{M^1_0, M^1_1, \cdots, M^1_{N-1}\}$.
4. Using eq. 4.40, one finds $\{L^1_0, L^1_1, \cdots, L^1_{N-2}\}$.
5. Again, using eq. 4.39, one finds $\{M^2_0, M^2_1, \cdots, M^2_{N-2}\}$.
6. Again, using eq. 4.40, one finds $\{L^2_0, L^2_1, \cdots, L^2_{N-3}\}$.
7. The previous steps are repeated many times, until one finally arrives at $\{M^{N/2}_0, M^{N/2}_1, \cdots, M^{N/2}_{N/2}\}$ and $\{L^{N/2-1}_0, L^{N/2-1}_1, \cdots, L^{N/2-1}_{N/2}\}$.
8. Now eqs. 4.41 and 4.42 are employed and the following sets of Lanczos coefficients are obtained: $\{b_1, b_2, \cdots, b_{N/2}\}$ and $\{a_0, a_1, \cdots, a_{N/2-1}\}$.
If $N$ had been picked to be odd, one would obtain the same number of coefficients $a_n$ and $b_n$, so there would be an extra $a_n$ coefficient that isn't employed when applying eq. 4.29. Since the Krylov complexity can be obtained from the Lanczos coefficients via eq. 4.29, and the Lanczos coefficients can be extracted from the moments, it follows that the Krylov complexity of any operator is encoded in the two-point function $C(t,\beta)$. This is the most important conceptual point to be made here.
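The recursion of eqs. (4.37)-(4.42) can be sketched in a few lines (our own implementation, not taken from the dissertation's references). As a check we feed in the moments of a Gaussian spectral function, $M_{2k} = (2k-1)!!$, whose Lanczos coefficients are known in the literature to be $a_n = 0$, $b_n = \sqrt{n}$:

```python
import math

# Moment-method recursion, eqs. (4.37)-(4.42): extract Lanczos coefficients
# a_n, b_n from N+1 moments (N even). Dictionaries are keyed by (n, k).
def lanczos_from_moments(moments):
    N = len(moments) - 1
    M = {(0, k): (-1) ** k * moments[k] for k in range(N + 1)}
    L = {(0, k): (-1) ** (k + 1) * moments[k + 1] for k in range(N)}
    for n in range(1, N // 2 + 1):
        for k in range(N - n + 1):                       # eq. (4.39)
            M[n, k] = L[n - 1, k] - L[n - 1, n - 1] * M[n - 1, k] / M[n - 1, n - 1]
        if n <= N // 2 - 1:
            for k in range(N - n):                       # eq. (4.40)
                L[n, k] = M[n, k + 1] / M[n, n] - M[n - 1, k] / M[n - 1, n - 1]
    b = [math.sqrt(M[n, n]) for n in range(1, N // 2 + 1)]   # eq. (4.41)
    a = [-L[n, n] for n in range(N // 2)]                    # eq. (4.42)
    return a, b

def double_factorial(m):
    return 1 if m <= 0 else m * double_factorial(m - 2)

# Gaussian moments M_{2k} = (2k-1)!!, odd moments vanish
moments = [double_factorial(n - 1) if n % 2 == 0 else 0 for n in range(11)]
a, b = lanczos_from_moments(moments)
print(a)  # ~ [0, 0, 0, 0, 0]
print(b)  # ~ [1.0, 1.414, 1.732, 2.0, 2.236]  (i.e. sqrt(n))
```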
4.5 A Bound on Operator Growth

Aside from defining Krylov complexity, the authors of [32] proposed that the classification of quantum systems according to their dynamical behavior could be done in terms of the growth rate of the Lanczos coefficients $b_n$. They showed that the maximum possible growth rate for the $b_n$'s associated with any local operator is asymptotically linear in $n$, and they hypothesized, with significant evidence to support it, that this bound is achieved in chaotic systems, while in non-chaotic systems the $b_n$'s grow more slowly or don't grow at all (which includes oscillation around a fixed value). An example of the linear behavior is the Sachdev-Ye-Kitaev (SYK) model, very prominent in studies of operator growth. In interacting many-body systems, the spectral function $\Phi(\omega,\beta)$ exhibits a decaying tail extending to arbitrarily high values of $|\omega|$. The asymptotic decay rate of this tail is directly related to the asymptotic growth of the coefficients $b_n$ in the following manner:
$$b_n \sim n^\gamma \iff |\Phi(\omega,\beta)| \sim \exp\left(-\left|\frac{\omega}{\omega_0}\right|^{1/\gamma}\right), \tag{4.43}$$
for a real number $\gamma > 0$ and a constant $\omega_0$. Therefore, faster growth of $b_n$ translates to slower decay of $\Phi(\omega,\beta)$, and in the case $\gamma = 1$, the linear growth of $b_n$ corresponds to exponential decay of $\Phi(\omega,\beta)$. Still in this particular case, it's possible to quantitatively relate the two rates via a real number $\alpha > 0$ as follows:
$$b_n = \alpha n + O(1) \iff |\Phi(\omega,\beta)| = \exp\left(-\left|\frac{\omega}{\omega_0}\right| + o(\omega)\right), \qquad \omega_0 = \frac{2\alpha}{\pi}; \tag{4.44}$$
where we employ the big-O and little-o notation in $O(1)$ and $o(\omega)$ respectively. The coefficient $\alpha$ has quite a central role, since it also quantifies complexity growth. Linear growth of $b_n$ corresponds to exponential growth of Krylov complexity $K(t)$ with exponent $2\alpha t$:
$$b_n \sim \alpha n \iff K(t) \sim e^{2\alpha t}. \tag{4.45}$$
This provides an upper bound for other measures of operator growth previously employed in the study of quantum chaos [32].
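The contrast between sub-linear and linear Lanczos growth can be seen by integrating eq. (4.29) numerically (our own sketch, not from the dissertation), with $a_n = 0$ and $\psi_n(0) = \delta_{n0}$. For $b_n = \sqrt{n}$ the known closed-form result is $K(t) = t^2$, while the linear profile $b_n = n$, which saturates the bound, gives $K(t) = \sinh^2(t) \sim e^{2t}$, i.e. $\alpha = 1$ in eq. (4.45):

```python
import numpy as np

# Integrate eq. (4.29) with a_n = 0 (hermitian operator) on a truncated
# Krylov chain, using fourth-order Runge-Kutta, and return K(t_max).
def krylov_complexity(b, t_max, dt=1e-3):
    dim = len(b)                       # truncation of the Krylov chain
    psi = np.zeros(dim)
    psi[0] = 1.0                       # psi_n(0) = delta_{n0}
    bp = np.append(b[1:], 0.0)         # b_{n+1}, padded at the truncation edge

    def rhs(p):
        up = np.append(p[1:], 0.0)     # psi_{n+1}
        down = np.append(0.0, p[:-1])  # psi_{n-1}
        return -bp * up + b * down     # eq. (4.29) with a_n = 0

    for _ in range(int(round(t_max / dt))):
        k1 = rhs(psi); k2 = rhs(psi + dt / 2 * k1)
        k3 = rhs(psi + dt / 2 * k2); k4 = rhs(psi + dt * k3)
        psi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.sum(np.arange(dim) * psi**2)   # eq. (4.27)

ns = np.arange(300, dtype=float)       # b_0 = 0 by convention
K_sqrt = krylov_complexity(np.sqrt(ns), 2.0)
K_lin = krylov_complexity(ns, 2.0)
print(K_sqrt, K_lin)  # ~ 4.0 (= t^2) and ~ 13.15 (= sinh(2)^2)
```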
Chapter 5
The Toy Model

The model reviewed by Maldacena in [22] is the simplest quantum model giving rise to a holographic dual spacetime governed by Einstein's equations. Its emergent gravitational solution contains a black hole in an AdS "box universe" where the gravitational potential grows monotonically as one goes towards the boundary. Due to this, any particle traveling towards the boundary is reflected back to the black hole in a finite amount of time, meaning that nothing can escape to infinity and that thermal equilibrium is possible. This model won't be reviewed here, but it's a quantum model in (0 + 1) dimensions with an action that's invariant under the fundamental representation of SO(9) and the adjoint representation of SU(N), and which contains bosonic and fermionic harmonic oscillators. As mentioned in chapter 1, our goal is to study Krylov complexity in the regime where a black hole reaches thermal equilibrium with its radiation. In order to do so, we're employing a toy model inspired by the one reviewed by Maldacena, but with several modifications: all the fermionic oscillators were removed, the SO(9) symmetry was replaced by SO(3) and one interaction term was also removed. However, since we want to take into consideration the interaction between the black hole and the Hawking radiation, we've included an extra oscillator $\phi$, supposed to describe the radiation. Our toy model in (0 + 1) dimensions is governed by the following action:
$$S = \int_{-\infty}^{\infty} dt\left(L_X + L_\phi + L_I\right),$$
$$L_X = \frac{N}{\lambda}\,\mathrm{Tr}\left\{\sum_{I=1}^{3}\frac{1}{2}\left[(\dot{X}^I)^2 - (m_X X^I)^2\right]\right\},$$
$$L_\phi = \frac{N}{\lambda}\,\mathrm{Tr}\left\{\frac{1}{2}\left[\dot{\phi}^2 - (m_\phi\phi)^2\right]\right\},$$
$$L_I = \frac{N}{\lambda}\,\mathrm{Tr}\left\{-i(m_X + \tilde{g}\phi)\sum_{I,J,K=1}^{3}\epsilon_{IJK}X^I X^J X^K\right\}; \tag{5.1}$$
where the time dependence of all matrices is omitted. This action is invariant under SO(3) and under SU(N). The $X$'s and $\phi$ belong to the adjoint representation of SU(N) and, therefore, can be written as $N \times N$ traceless hermitian matrices (about adjoint representations, one can consult [35, 43]).
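The stated SO(3) invariance of the cubic interaction trace in eq. (5.1) can be verified numerically (our own check, not part of the dissertation): $\mathrm{Tr}[\epsilon_{IJK}X^I X^J X^K]$ is unchanged under $X^I \to R_{IJ}X^J$ for any rotation $R$ with $\det R = 1$, since $\epsilon_{IJK}R_{IA}R_{JB}R_{KC} = (\det R)\,\epsilon_{ABC}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_traceless_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (A + A.conj().T) / 2
    return H - np.trace(H) / n * np.eye(n)

def cubic_trace(X):
    # Tr[eps_{IJK} X^I X^J X^K] for a triplet of matrices X
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]: eps[i, j, k] = 1.0
    for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]: eps[i, j, k] = -1.0
    return sum(eps[i, j, k] * np.trace(X[i] @ X[j] @ X[k])
               for i in range(3) for j in range(3) for k in range(3))

X = [random_traceless_hermitian(4) for _ in range(3)]
# rotation by angle theta about a random axis (Rodrigues' formula)
axis = rng.normal(size=3); axis /= np.linalg.norm(axis)
W = np.array([[0, -axis[2], axis[1]], [axis[2], 0, -axis[0]], [-axis[1], axis[0], 0]])
theta = 0.7
R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)

Xrot = [sum(R[I, J] * X[J] for J in range(3)) for I in range(3)]
print(cubic_trace(X), cubic_trace(Xrot))  # equal up to rounding
```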
$X^I$ transforms under the fundamental representation of SO(3), hence the uppercase index. While $\phi$ is supposed to represent the Hawking radiation around the black hole, the $X$'s are supposed to represent the internal DOF's of the black hole. We define the generators $T_a$ of SU(N) in terms of the structure constants $f_{abc}$ in a way that gives a convenient normalization for the adjoint representation of the Lie algebra:
$$\mathrm{Tr}(T_a T_b) \equiv \frac{\lambda}{N}\delta_{ab}, \qquad [T_b, T_c] \equiv i f^a_{\ bc}\,T_a\sqrt{\frac{\lambda}{N}}. \tag{5.2}$$
Taking the generators $T_a$ as the basis for the adjoint representation of SU(N), the matrices $X^I$ and $\phi$ are given by components $X^{Ia}$ and $\phi^a$:
$$X^I \equiv T_a X^{Ia}, \qquad \phi \equiv T_a\phi^a. \tag{5.3}$$
The transformation of a member $X^I$ of the adjoint representation of the Lie algebra of SU(N) under the action of a member $g$ of SU(N) is given by:
$$gXg^{-1}, \tag{5.4}$$
which means that the action written in terms of traces is manifestly invariant under SU(N). Even so, one could decide to rewrite it in terms of the components $X^{Ia}$ and $\phi^a$. Since the matrices in the action are hermitian:
$$X^I_{ij} = (X^{I\dagger})_{ij} = (X^I_{ji})^*, \tag{5.5}$$
where $i, j$ indicate the matrix elements. This means that the momentum conjugate to $X^I_{ij}$ is $\dot{X}^I_{ji} = (\dot{X}^I_{ij})^*$, and canonical quantization is done by imposing:
$$[X^I_{ij}, P^J_{kl}] = i\delta^{IJ}\delta_{ik}\delta_{jl}, \qquad P^J_{kl} \equiv \dot{X}^J_{lk}; \tag{5.6}$$
$$[\phi_{ij}, \rho_{kl}] = i\delta_{ik}\delta_{jl}, \qquad \rho_{kl} \equiv \dot{\phi}_{lk}. \tag{5.7}$$
Therefore, the Hamiltonian is:
$$H = H_X + H_\phi + H_I,$$
$$H_X = \frac{N}{\lambda}\,\mathrm{Tr}\left\{\sum_{I=1}^{3}\frac{1}{2}\left[(P^I)^2 + (m_X X^I)^2\right]\right\},$$
$$H_\phi = \frac{N}{\lambda}\,\mathrm{Tr}\left\{\frac{1}{2}\left[\rho^2 + (m_\phi\phi)^2\right]\right\},$$
$$H_I = \frac{N}{\lambda}\,\mathrm{Tr}\left\{i(m_X + \tilde{g}\phi)\sum_{I,J,K=1}^{3}\epsilon_{IJK}X^I X^J X^K\right\}. \tag{5.8}$$
Creation and annihilation operators are defined for $X$ as:
$$A^I_{ij} = \frac{1}{\sqrt{2m_X}}\left(m_X X^I_{ij} + iP^I_{ji}\right), \tag{5.9}$$
$$(A^I_{ij})^\dagger = \frac{1}{\sqrt{2m_X}}\left(m_X X^I_{ji} - iP^I_{ij}\right); \tag{5.10}$$
with the typical commutator:
$$[A^I_{ij}, (A^J_{kl})^\dagger] = \delta^{IJ}\delta_{ik}\delta_{jl}, \tag{5.11}$$
and the following inverse relations:
$$X^I_{ij} = \frac{1}{\sqrt{2m_X}}\left(A^I_{ij} + A^{I\dagger}_{ij}\right), \tag{5.12}$$
$$P^I_{ij} = \frac{\sqrt{2m_X}}{2i}\left(A^I_{ij} - A^{I\dagger}_{ij}\right). \tag{5.13}$$
The creation and annihilation operators for $\phi$ are defined in an analogous way, but in this case they shall be denoted by lowercase $a_{ij}$ and $(a_{ij})^\dagger$ instead of $A^I_{ij}$ and $(A^I_{ij})^\dagger$. Rewritten in terms of these operators, the free part of the Hamiltonian becomes:
$$H_X + H_\phi = \frac{N}{\lambda}\,\mathrm{Tr}\left(m_X A^{I\dagger}A^I + m_\phi a^\dagger a\right), \tag{5.14}$$
with zero-point energies dropped. Finally, we expect this model to adequately describe the desired system in the regime dictated by the following conditions on the model's parameters:
$$m \ll T \ll \lambda^{1/3}, \tag{5.15}$$
$$N^{-1/3}\lambda^{1/3} \ll T, \tag{5.16}$$
$$1 \ll N; \tag{5.17}$$
where $m$ stands for both $m_X$ and $m_\phi$, and $T$ stands for temperature. These conditions can be rewritten as:
$$\frac{m}{T} \ll 1, \tag{5.18}$$
$$\frac{T}{\lambda^{1/3}} \ll 1, \tag{5.19}$$
$$\frac{T}{\lambda^{1/3}}\,N^{1/3} \gg 1, \tag{5.20}$$
$$N \gg 1. \tag{5.21}$$

Chapter 6
Dyson-Schwinger Equations for the Toy Model

6.1 From Action to Complexity

We can, in principle, use any local operator to probe the growth of Krylov complexity in our model, as per the discussion about operator growth presented in section 4.5. Inspired by the work done with the IP matrix model [21], we've decided to attempt to use the moment method (section 4.4) to extract the Lanczos coefficients and the Krylov complexity of an operator from its two-point function. Therefore, our first task is to obtain a two-point function, and since the regime of interest for our toy model requires finite temperature, the two-point function must be thermal. We intend to obtain it perturbatively, using the Dyson-Schwinger (DS) equations; this shall give us a time-ordered thermal two-point function of a matrix component of either $X$ or $\phi$, which still isn't quite what we need, since it's time-ordered. However, in appendix C, we derive a general relation between a non-time-ordered two-point function and its time-ordered counterpart that can be employed to solve this problem. The DS equations are the quantum equivalent of the classical equations of motion, and they give n-point functions in terms of other n-point functions.
They are exact equations and, therefore, are more general than the equations obtained up to a certain order in a perturbative expansion. In fact, they can be used to derive the terms of a perturbative expansion up to any order; this is done by iterating DS equations and cutting them at a certain point. In this chapter, we'll derive DS equations for the toy model. We'll derive one equation for a matrix component of $X$ algebraically in detail and then proceed diagrammatically. Some results related to the path integral formalism, imaginary time or the DS equations won't be demonstrated here. For information about the path integral formalism in imaginary time, the following sources are recommended: [44, 45, 46]; for information about the DS equations: [46, 47, 48]. The derivation of the DS equations is more easily done using the path integral approach, although it wasn't done like this originally. Since we want to obtain thermal correlators, we'll work with the generating functional in the imaginary time formalism. The Euclidean action is:
$$S_E = S_0 + S_I, \tag{6.1}$$
$$S_0 = \frac{N}{\lambda}\int_0^\beta d\tau\,\mathrm{Tr}\left\{\frac{1}{2}\left[\dot{\phi}^2 + (m_\phi\phi)^2\right] + \sum_{I=1}^{3}\frac{1}{2}\left[(\dot{X}^I)^2 + (m_X X^I)^2\right]\right\}, \tag{6.2}$$
$$S_I = \frac{N}{\lambda}\int_0^\beta d\tau\,\mathrm{Tr}\left\{i(m_X + \tilde{g}\phi)\sum_{I,J,K=1}^{3}\epsilon_{IJK}X^I X^J X^K\right\}. \tag{6.3}$$
The normalized generating functional is:
$$\mathcal{Z}_\beta(J) = \frac{\int d\Lambda\, e^{-S_E+S_J}}{\int d\Lambda\, e^{-S_E}} \equiv \frac{Z_\beta(J)}{Z_\beta(0)}; \tag{6.4}$$
where $S_J$ is the source term, containing sources for each DOF ($J^I_{ij}$ is a source for $X^I_{ij}$ and $J^\phi_{ij}$ for $\phi_{ij}$):
$$S_J \equiv \frac{N}{\lambda}\sum_{i,j=1}^{N}\int_0^\beta d\tau\left(\phi_{ij}J^\phi_{ij} + \sum_I X^I_{ij}J^I_{ij}\right), \tag{6.5}$$
and the integration measure $d\Lambda$:
$$d\Lambda \equiv \prod_{i,j,I}\mathcal{D}X^I_{ij}\,\mathcal{D}\phi_{ij}. \tag{6.6}$$
Time-ordered correlation functions are obtained by functional differentiation:
$$\left.\frac{\delta}{\delta J^I_{ij}(\tau_a)}\frac{\delta}{\delta J^J_{kl}(\tau_b)}\mathcal{Z}_\beta(J)\right|_{J=0} = \langle\check{T}\,X^I_{ij}(-i\tau_a)X^J_{kl}(-i\tau_b)\rangle_\beta \equiv \langle\check{T}\,X^I_{ij}(\check{\tau}_a)X^J_{kl}(\check{\tau}_b)\rangle_\beta, \tag{6.7}$$
where we have introduced the notation $\check{\tau} \equiv -i\tau$. The operators $X^I_{ij}(-i\tau_a)$ are typical Heisenberg picture operators, but evolving in imaginary time:
$$X^I_{ij}(-i\tau_a) = e^{iH(-i\tau_a)}\left(X^I_{ij}\right)e^{-iH(-i\tau_a)} = e^{H\tau_a}\left(X^I_{ij}\right)e^{-H\tau_a}, \tag{6.8}$$
and the time-ordering operator $\check{T}$ acts with respect to $\tau$ and not $\check{\tau}$:
$$\check{T}\left(X^I_{ij}(-i\tau_a)X^J_{kl}(-i\tau_b)\right) = X^J_{kl}(-i\tau_b)X^I_{ij}(-i\tau_a), \qquad \text{if } \tau_b > \tau_a. \tag{6.9}$$
The free part of the euclidean action ($S_0$) can be rewritten in the following manner:
$$S_0 = \frac{N}{\lambda}\int_0^\beta d\tau\,\mathrm{Tr}\left\{\frac{1}{2}\left[\dot{\phi}^2 + (m_\phi\phi)^2\right] + \sum_{I=1}^{3}\frac{1}{2}\left[(\dot{X}^I)^2 + (m_X X^I)^2\right]\right\} \tag{6.10}$$
$$= \frac{N}{\lambda}\frac{1}{2}\sum_{i,j}\int_0^\beta d\tau_a\int_0^\beta d\tau_b\left(\phi_{ij}(\tau_a)\Delta^{-1}_\phi(\tau_a,\tau_b)\phi_{ji}(\tau_b) + \sum_I X^I_{ij}(\tau_a)\Delta^{-1}_X(\tau_a,\tau_b)X^I_{ji}(\tau_b)\right), \tag{6.11}$$
where $\Delta^{-1}_X$ and $\Delta^{-1}_\phi$ are the inverse operators to the free thermal propagators:
$$\Delta_X(\tau_a,\tau_b) \equiv \langle\check{T}\,X^I_{ij}(\check{\tau}_a)X^I_{ji}(\check{\tau}_b)\rangle_{\beta,0} \quad \forall\ i,j,I, \tag{6.12}$$
$$\delta(\tau_a-\tau_c) = \int_0^\beta d\tau_b\,\Delta_X(\tau_a,\tau_b)\,\Delta^{-1}_X(\tau_b,\tau_c). \tag{6.13}$$

6.2 Algebraic Derivation of a Dyson-Schwinger Equation

The starting point for the derivation is the identity:
$$\int d\Lambda\left(\frac{\delta}{\delta X^T_{\mu\nu}(\tau_h)}\right)e^{-S_E+S_J} = 0, \tag{6.14}$$
which is analogous to the simpler case:
$$\int_a^b dx\,\frac{d}{dx}f(x) = f(x)\Big|_a^b = 0, \qquad \text{valid for } f(a) = f(b). \tag{6.15}$$
That becomes:
$$0 = \int d\Lambda\left(\frac{\delta}{\delta X^T_{\mu\nu}(\tau_h)}(-S_E+S_J)\right)e^{-S_E+S_J} \tag{6.16}$$
$$= \int d\Lambda\left(-\frac{\delta S_E}{\delta X^T_{\mu\nu}(\tau_h)} + J^T_{\mu\nu}(\tau_h)\right)e^{-S_E+S_J} \tag{6.17}$$
$$= J^T_{\mu\nu}(\tau_h)Z_\beta(J) - \int d\Lambda\left(\frac{\delta S_E}{\delta X^T_{\mu\nu}(\tau_h)}\right)e^{-S_E+S_J}; \tag{6.18}$$
where we have ignored (and will continue to ignore) the $N/\lambda$ factor from the action, because it's irrelevant (we could have, for instance, multiplied equation 6.16 by $\lambda/N$ and gotten rid of it). Considering again a simpler analogous case:
$$\int dx\,f(x)e^{Jx} = \int dx\,f\!\left(\frac{d}{dJ}\right)e^{Jx} = f\!\left(\frac{d}{dJ}\right)\int dx\,e^{Jx}, \tag{6.19}$$
valid because:
$$\frac{d}{dJ}e^{Jx} = xe^{Jx}; \tag{6.20}$$
similarly, eq.
6.18 is equivalent to:

0 = J^T_{μν}(τ_h) 𝒵_β(J) − ∫ dΛ ( δS_E/δX^T_{μν}(τ_h) |_{δ/δJ} ) e^{−S_E + S_J} (6.21)

= J^T_{μν}(τ_h) 𝒵_β(J) − ( δS_E/δX^T_{μν}(τ_h) |_{δ/δJ} ) ∫ dΛ e^{−S_E + S_J} (6.22)

= [ J^T_{μν}(τ_h) − δS_E/δX^T_{μν}(τ_h) |_{δ/δJ} ] 𝒵_β(J), (6.23)

where |_{δ/δJ} means substituting all the X's and ϕ's in the expression by the functional derivatives with respect to their respective sources, always using DOFs and sources at the same τ. Dividing eq. 6.23 by 𝒵_β(0), we find:

0 = [ J^T_{μν}(τ_h) − δS_E/δX^T_{μν}(τ_h) |_{δ/δJ} ] Z_β(J) (6.24)

= [ J^T_{μν}(τ_h) − δ(S_0 + S_I)/δX^T_{μν}(τ_h) |_{δ/δJ} ] Z_β(J), (6.25)

with S_I indicating the interacting part of the Euclidean action. Using the expression obtained for δS_0/δX^T_{μν}(τ_h) in appendix D, this becomes:

0 = [ J^T_{μν}(τ_h) − ( ∫_0^β dτ_b Δ_X^{−1}(τ_h, τ_b) X^T_{νμ}(τ_b) + δS_I/δX^T_{μν}(τ_h) )|_{δ/δJ} ] Z_β(J); (6.26)

acting with ∫_0^β dτ_h Δ_X(τ_1, τ_h) from the left:

0 = ∫_0^β dτ_h Δ_X(τ_1, τ_h) J^T_{μν}(τ_h) Z_β(J) − ( X^T_{νμ}(τ_1) + ∫_0^β dτ_h Δ_X(τ_1, τ_h) δS_I/δX^T_{μν}(τ_h) )|_{δ/δJ} Z_β(J); (6.27)

rearranging:

δZ_β(J)/δJ^T_{νμ}(τ_1) = ∫_0^β dτ_h Δ_X(τ_1, τ_h) [ J^T_{μν}(τ_h) − ( δS_I/δX^T_{μν}(τ_h) )|_{δ/δJ} ] Z_β(J). (6.28)

In order to find the intended two-point function, we must act with one more functional derivative from the left:

δ/δJ^A_{ηγ}(τ_2) δ/δJ^T_{νμ}(τ_1) Z_β(J) = Δ_X(τ_2, τ_1) δ_{μη} δ_{γν} δ_{AT} Z_β(J)
+ ∫_0^β dτ_h Δ_X(τ_1, τ_h) J^T_{μν}(τ_h) δ/δJ^A_{ηγ}(τ_2) Z_β(J)
− ∫_0^β dτ_h Δ_X(τ_1, τ_h) δ/δJ^A_{ηγ}(τ_2) ( δS_I/δX^T_{μν}(τ_h) )|_{δ/δJ} Z_β(J). (6.29)

If we were to take more derivatives, the term multiplied by J^T_{μν}(τ_h) would originate other terms, but since that's not the case and we'll set J = 0 at the end, this term is no longer needed and we'll consider it gone from here on.
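As an aside, the total-derivative identity that started this derivation (eq. 6.14) has a one-variable analogue that can be checked numerically. This is only an illustrative sketch in Python (not the toy model itself): for S(x) = x²/2 + g x⁴/4, the identity 0 = ∫ dx d/dx[x e^{−S(x)}] implies ⟨x S′(x)⟩ = ⟨x²⟩ + g⟨x⁴⟩ = 1 exactly.

```python
import numpy as np

# One-variable analogue of the Dyson-Schwinger identity (cf. eq. 6.14):
# 0 = integral dx d/dx [ x e^{-S(x)} ]  =>  <x S'(x)> = 1,
# here for the illustrative action S(x) = x^2/2 + g*x^4/4.
g = 0.3
x, dx = np.linspace(-8.0, 8.0, 200001, retstep=True)
weight = np.exp(-(x**2 / 2 + g * x**4 / 4))    # Boltzmann weight e^{-S(x)}
Z = weight.sum() * dx                          # partition function (Riemann sum)
S_prime = x + g * x**3                         # S'(x)
lhs = (x * S_prime * weight).sum() * dx / Z    # <x^2> + g <x^4>
print(round(lhs, 6))                           # -> 1.0
```

The same bookkeeping (one source derivative per field, sources set to zero at the end) is what the functional version above automates.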
Now we substitute the explicit expression for δS_I/δX^T_{μν}(τ_h), derived in appendix D, to obtain:

δ/δJ^A_{ηγ}(τ_2) δ/δJ^T_{νμ}(τ_1) Z_β(J) = Δ_X(τ_2, τ_1) δ_{μη} δ_{γν} δ_{AT}
− i ε_{IJK} δ_{IT} Σ_{i,j} ∫_0^β dτ_h Δ_X(τ_1, τ_h) δ/δJ^A_{ηγ}(τ_2) [ 3m_X X^J_{νi}(τ_h) X^K_{iμ}(τ_h)
+ g̃ ( X^J_{νi}(τ_h) X^K_{ij}(τ_h) ϕ_{jμ}(τ_h) + X^J_{νi}(τ_h) ϕ_{ij}(τ_h) X^K_{jμ}(τ_h) + ϕ_{νi}(τ_h) X^J_{ij}(τ_h) X^K_{jμ}(τ_h) ) ]|_{δ/δJ} Z_β(J), (6.30)

which becomes:

δ/δJ^A_{ηγ}(τ_2) δ/δJ^T_{νμ}(τ_1) Z_β(J) = Δ_X(τ_2, τ_1) δ_{μη} δ_{γν} δ_{AT}
− i ε_{IJK} δ_{IT} Σ_{i,j} ∫_0^β dτ_h Δ_X(τ_1, τ_h) δ/δJ^A_{ηγ}(τ_2) [ 3m_X δ/δJ^J_{νi}(τ_h) δ/δJ^K_{iμ}(τ_h)
+ g̃ ( δ/δJ^J_{νi}(τ_h) δ/δJ^K_{ij}(τ_h) δ/δJ^ϕ_{jμ}(τ_h) + δ/δJ^J_{νi}(τ_h) δ/δJ^ϕ_{ij}(τ_h) δ/δJ^K_{jμ}(τ_h) + δ/δJ^ϕ_{νi}(τ_h) δ/δJ^J_{ij}(τ_h) δ/δJ^K_{jμ}(τ_h) ) ] Z_β(J). (6.31)

Once we apply the functional derivatives and set J = 0, we obtain, in accordance with eq. 6.7, the two-point function in imaginary time:

⟨Ť X^A_{ηγ}(τ̌_2) X^T_{νμ}(τ̌_1)⟩_β = Δ_X(τ_2, τ_1) δ_{μη} δ_{γν} δ_{AT}
− i ε_{IJK} δ_{IT} Σ_{i,j} ∫_0^β dτ_h Δ_X(τ_1, τ_h) [ 3m_X ⟨Ť X^A_{ηγ}(τ̌_2) X^J_{νi}(τ̌_h) X^K_{iμ}(τ̌_h)⟩_β
+ g̃ ( ⟨Ť X^A_{ηγ}(τ̌_2) X^J_{νi}(τ̌_h) X^K_{ij}(τ̌_h) ϕ_{jμ}(τ̌_h)⟩_β + ⟨Ť X^A_{ηγ}(τ̌_2) X^J_{νi}(τ̌_h) ϕ_{ij}(τ̌_h) X^K_{jμ}(τ̌_h)⟩_β + ⟨Ť X^A_{ηγ}(τ̌_2) ϕ_{νi}(τ̌_h) X^J_{ij}(τ̌_h) X^K_{jμ}(τ̌_h)⟩_β ) ]. (6.32)

6.3 Diagrammatics

The terms on the right side of eq. 6.32 can be represented diagrammatically in the following way:

Δ_X(τ_2, τ_1) δ_{μη} δ_{γν} δ_{AT} = [diagram: free propagator line between external points 1 and 2], (6.33)

−i(3m_X) ε_{IJK} δ_{IT} Σ_i ∫_0^β dτ_h Δ_X(τ_1, τ_h) ⟨Ť X^A_{ηγ}(τ̌_2) X^J_{νi}(τ̌_h) X^K_{iμ}(τ̌_h)⟩_β = [diagram: propagator from point 1 into an X-vertex at τ_h, with the remaining legs entering a box], (6.34)

−i g̃ ε_{IJK} δ_{IT} Σ_{i,j} ∫_0^β dτ_h Δ_X(τ_1, τ_h) ⟨Ť X^A_{ηγ}(τ̌_2) X^J_{νi}(τ̌_h) X^K_{ij}(τ̌_h) ϕ_{jμ}(τ̌_h)⟩_β = [diagram: propagator from point 1 into a ϕ-vertex at τ_h, with the remaining legs entering a box]. (6.35)

The box represents an n-point function with external points determined by the legs going into it; one can think of it as containing all possible ways of connecting the incoming and outgoing legs with any number of additional vertices.
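For orientation, the free thermal propagator Δ_X entering these diagrams is, for a single bosonic mode of mass m (and ignoring the overall λ/N normalization coming from the action, as done in the text), the standard periodic Green's function of imaginary time. The following Python sketch, with illustrative parameter values, checks the standard equality between its Matsubara-sum and closed-form expressions:

```python
import numpy as np

# Free bosonic thermal propagator of a single mode of mass m
# (the building block of Delta_X in eq. 6.12, up to normalization):
#   (1/beta) * sum_n e^{-i w_n tau} / (w_n^2 + m^2),  w_n = 2*pi*n/beta,
# versus the closed form
#   cosh(m*(beta/2 - tau)) / (2*m*sinh(m*beta/2)),  for 0 <= tau <= beta.
beta, m, tau = 2.0, 1.3, 0.7                    # illustrative values
n = np.arange(-200000, 200001)
w_n = 2 * np.pi * n / beta                      # bosonic Matsubara frequencies
matsubara = np.sum(np.exp(-1j * w_n * tau) / (w_n**2 + m**2)).real / beta
closed = np.cosh(m * (beta / 2 - tau)) / (2 * m * np.sinh(m * beta / 2))
print(matsubara, closed)                        # the two expressions agree
```

Sums of exactly this kind are what the frequency-domain treatment of the DS equations mentioned in the conclusion has to perform.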
Consider one more diagram, but this time a diagram without the box, i.e., a diagram in which all vertices and external points are connected to other vertices or external points, like the ones that show up in perturbative expansions:

[diagram: external points 1 and 2 connected through two ϕ-vertices at τ_h and τ_m, with one ϕ propagator and two X propagators running between the vertices]

= (−i)² g̃² δ_{μη} δ_{νγ} ε_{IJK} ε_{ABC} δ_{TI} δ_{AV} ∫_0^β ∫_0^β dτ_h dτ_m Δ_X(τ_1, τ_h) Δ_X(τ_m, τ_2) · Σ_{i,j} ⟨ϕ_{μj}(τ̌_h) ϕ_{jμ}(τ̌_m)⟩_{β,0} ⟨X^J_{ji}(τ̌_h) X^B_{ij}(τ̌_m)⟩_{β,0} ⟨X^K_{νi}(τ̌_h) X^C_{iν}(τ̌_m)⟩_{β,0} (6.36)

= (−i)² (Ng̃)² δ_{μη} δ_{νγ} ε_{IJK} ε_{ABC} δ_{TI} δ_{AV} δ_{JB} δ_{KC} · ∫_0^β ∫_0^β dτ_h dτ_m Δ_X(τ_1, τ_h) Δ_X(τ_m, τ_2) Δ_ϕ(τ_h, τ_m) (Δ_X(τ_h, τ_m))². (6.37)

One can read the Feynman rules of the theory from the given examples.

Before going any further, let's make some considerations about the problem at hand. From the structure of the available vertices, which shall be called the X-vertex (three-legged) and the ϕ-vertex (four-legged), it's possible to infer some things about the diagrams that can show up in a perturbative expansion for a two-point function (that means we're considering only diagrams without the box):

• All diagrams must have an even number of each kind of vertex and, by consequence, also an even total number of vertices.

• While each pair of ϕ-vertices introduces two loops, each pair of X-vertices introduces only one loop¹. Each loop contains a sum over one matrix index and, therefore, a factor of N. This means that, in principle, the ϕ-vertices are more powerful.

If we want to obtain a two-point function perturbatively, then both vertices must behave perturbatively, which means that both N m_X² and (Ng̃)² must be small. Since the ϕ-vertex is more powerful, it's not difficult to find a set of parameters for which the ϕ-vertex dominates. The conditions for this can be written as:

¹Here we're taking into consideration the fact that there are only two external points, which means that for every extra line generated in a vertex, another line must, at some point, go into a vertex.
We're also relying on the suppression of non-planar diagrams.

g̃² ≪ 1/N², (6.38)

m_X² ≪ 1/N, (6.39)

m_X²/N ≪ g̃². (6.40)

If we set m_X² = 1/(Ny), for some real constant y > 1, then combine this with inequalities 6.38 and 6.40, we obtain:

1/y ≪ (Ng̃)² ≪ 1. (6.41)

Therefore, by satisfying this condition, we're able not only to find a two-point function perturbatively, but also to do so without taking into consideration any diagrams containing the X-vertex. Hence, from now on, this vertex will be completely disregarded. Additionally, we'll adopt a simpler notation:

[diagram: the double-line (ribbon) diagram of eq. 6.36 abbreviated as a single-line diagram] ≡ [simplified single-line diagram]. (6.42)

Since at large N non-planar diagrams are suppressed, not much is lost with this change. By keeping in mind that we're actually dealing with double-line diagrams and knowing how to translate between notations, we can use the simpler notation effectively.

Let's get back to the X two-point function. According to the DS equation we already derived (eq. 6.32), it's given in terms of a four-point function with three X's and one ϕ:

[diagrammatic equation: X two-point function = free propagator + three box diagrams]. (6.43)

Since all the external points are X's, the ϕ leg has nowhere to go but to another ϕ-vertex; this allows us to rewrite the equation as:

[diagrammatic equation: X two-point function = free propagator + box diagrams with two ϕ-vertices]. (6.44)

We see that the X two-point function is now given in terms of an X six-point function. We can use the DS equations algorithm to find an expression for this six-point function. In this algorithm, one starts by drawing the n-point function on the left side of the equation and selecting one of the external points. Then, the first diagrams drawn on the right side correspond to free propagation from the selected external point to one of the other external points while the remaining ones propagate into the box (all possibilities must be covered).
Moving on, one then draws diagrams corresponding to free propagation from the selected point to one added vertex, with the additional legs of the vertex propagating into the box along with the other external points; all possible types of vertices must be covered. The freedom one has in selecting the special external point is equivalent to the freedom in the order of application of the functional derivatives to the generating functional. Nothing is lost with the selection of one point instead of another, because all the possibilities for all the other external points are contained inside the box. Proceeding in this manner, we find the X six-point function:

[diagrammatic equation: the X six-point function as free propagations from the selected point to each other external point, plus diagrams with one or two added vertices, each attached to a box], (6.45)

where we've introduced two vertices at once in the rightmost diagrams by using the same argument employed in the passage from eqs. 6.43 to 6.44. Now we write an equation for the four-point function that appeared in eq. 6.45:

[diagrammatic equation for the four-point function, of the same structure]. (6.46)

We can proceed indefinitely with this algorithm, obtaining n-point functions with higher n every time.

Now we'll show how the terms of a perturbative expansion are generated by iteration and cutting of the DS equations. If we take eq. 6.44 and cut it, by considering the four-point function to be free, we obtain all the diagrams up to order two of the perturbative expansion. Making the four-point function free means performing all possible connections between the legs that are going into the box, for instance:

[diagrammatic equation: the box, taken free, expands into all pairwise connections of its legs]. (6.47)

Out of all the possible connections in this example, we keep only the first, because all the others are either non-planar or connect two legs from the same vertex, which necessarily gives zero because of the Levi-Civita symbol in the interaction terms of the action.
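As an aside, the iterate-and-cut logic just described can be mimicked in a zero-dimensional toy integral (this Python sketch is only an analogue, not the matrix model itself). For S(x) = x²/2 + g x⁴/4 the exact SD identity reads ⟨x²⟩ + g⟨x⁴⟩ = 1; cutting the tower at the four-point level by the free (Wick) factorization ⟨x⁴⟩ ≈ 3⟨x²⟩² closes the equation, which is then solved by iteration starting from the free two-point function:

```python
import numpy as np

# Zero-dimensional illustration of "iterate and cut":
# exact SD identity: <x^2> + g*<x^4> = 1 for S(x) = x^2/2 + g*x^4/4.
# Cutting at the four-point level with the free (Wick) value <x^4> ~ 3*<x^2>^2
# gives the closed equation G + 3*g*G^2 = 1, solved here by iteration.
g = 0.05
G = 1.0                                  # start from the free value <x^2>_0 = 1
for _ in range(100):
    G = 1.0 - 3.0 * g * G**2             # truncated DS equation, iterated

# Exact <x^2> from direct quadrature, for comparison.
x, dx = np.linspace(-8.0, 8.0, 200001, retstep=True)
w = np.exp(-(x**2 / 2 + g * x**4 / 4))
exact = (x**2 * w).sum() / w.sum()
print(G, exact)   # close at small g; the truncation error enters at O(g^2)
```

The analogue of "choosing the cutting point adequately" is visible here: the Wick cut reproduces the series only through first order in g, and keeping higher n-point functions would be needed for higher orders.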
Therefore, by proceeding as described, we obtain:

[diagrammatic equation: the X two-point function up to second order] + O(g̃⁴), (6.48)

where O(g̃⁴) stands for all terms of order 4 or higher.

One can calculate a two-point function perturbatively in two different ways: either by writing the perturbative expansion explicitly and then computing all diagrams, or by setting up a set of DS equations and solving them iteratively, starting with all n-point functions free and then using the result of the previous iteration at each new iteration. With the first option, it's necessary to make sure that all possible diagrams up to the desired order have been drawn, whilst with the second option, it's necessary to make sure that the point of cutting of the equations is adequate, i.e., that the n-point functions with highest n employed are enough to get all the possible diagrams up to the desired order.

Since for our purposes we can choose between a correlator for X or for ϕ (whatever is more convenient), we'll now obtain a few more DS equations for the ϕ two-point function. We start with:

[diagrammatic equation for the ϕ two-point function]. (6.49)

The four-point function with three X's and one ϕ is given by:

[diagrammatic equation]. (6.50)

If we substitute eq. 6.50 into eq. 6.49 and then take the six-point function to be free, we obtain the perturbative expansion up to second order:

[diagrammatic equation] + O(g̃⁴). (6.51)

Going further, we find the six-point function with four X's and two ϕ's:

[diagrammatic equation], (6.52)

and the eight-point function with seven X's and one ϕ:

[diagrammatic equation]. (6.53)

Chapter 7

Conclusion and Perspectives

This dissertation covers a work in progress. The main goal of our project is to study the growth of Krylov complexity in a toy model for an AdS black hole in thermal equilibrium with its Hawking radiation.
Some of our progress so far has been reported: we've been able to define a toy model and to derive Dyson-Schwinger equations in imaginary time. The next steps involve the computation of the two-point function, the translation to real time, and the implementation of the moment method in order to calculate the Lanczos coefficients and the Krylov complexity itself.

Obtaining the two-point function in imaginary time will involve first Fourier transforming the imaginary time propagators to the frequency domain and then solving the equations computationally (performing Matsubara frequency sums). We have a code under development in Mathematica that shall be capable of doing this symbolically in an efficient manner by employing the residue theorem.

Once we have the two-point function, we shall perform an analytical continuation to real time and find the spectral density. Then, the next step will be to implement the moment method, for which we also have some codes prepared. These are the short-term perspectives for our project.

Interesting progress has been made so far, but there's certainly still a lot to be done. The long-term perspectives involve possible adaptations of the model depending on the initial results and also the study of the gravitational solution via holographic dualities, but this is a whole other story and it will certainly provide us with a lot more ground for research.

Appendix A

Conventions

Natural units are used in every part of this text; therefore, one shall not expect to see ħ (Planck's constant) or k_B (Boltzmann's constant) anywhere.

Hats ( ˆ ) are always used to denote operators acting on a Hilbert space, except in chapters 5 and 6, where they are omitted.

Einstein's notation is used to make sums implicit whenever a pair of indices is contracted, and a pair of contracted indices is made evident by the contrast between one upper and one lower index.
The only exception to this is the case of matrix indices: these are always lower and their sums are always explicit. Additionally, one must be careful to properly distinguish by context between indices and powers.

When dealing with the components of some matrix M:

M†_{ij} ≡ (M†)_{ij}, (A.1)

which is, in general, different from (M_{ij})†.

When dealing with Heisenberg picture operators, an operator Ô(t) can be written simply as Ô if t = 0.

The symbol ϵ (without indices) must always be understood as a real and positive infinitesimal number, i.e., expressions containing it are actually defined in the limit ϵ → 0⁺. As usual in the physics literature, this limit will be implicit. When carrying indices, as in ϵ_{IJK}, it denotes the totally anti-symmetric tensor, also called the Levi-Civita symbol, with the following properties:

ϵ_{i₁i₂i₃···iₙ} = −ϵ_{i₂i₁i₃···iₙ}, ϵ_{aai₃···iₙ} = 0, ϵ_{1,2,3,···,n} = 1. (A.2)

One-dimensional Fourier transforms are defined as:

f̃(ω) = ∫_{−∞}^{∞} dt e^{iωt} f(t), (A.3)

where f(t) is a function in the time domain and f̃(ω) is the Fourier-transformed function in the frequency domain. The inverse Fourier transform is:

f(t) = ∫_{−∞}^{∞} (dω/2π) e^{−iωt} f̃(ω). (A.4)

The integral form of the Dirac delta is given by:

δ(a − b) = ∫_{−∞}^{∞} (dx/2π) e^{ix(a−b)}. (A.5)

The integral form of the Heaviside function is given by:

θ(t) = i ∫_{−∞}^{∞} (dx/2π) e^{−ixt}/(x + iϵ). (A.6)

Appendix B

Shannon Entropy

Shannon entropy [23] is a concept in information theory that generalizes different notions of entropy appearing in physics. It measures the amount of uncertainty one has about a random variable, depending on the probability distribution associated with the possible values that variable can assume. Equivalently, it can also be thought of as the amount of information one stands to gain upon learning the value of the variable.
If a variable can assume a certain set of N values, each with probability p_n, where Σ p_n = 1, then the Shannon entropy of the variable is:

S = −Σ_n p_n log p_n. (B.1)

It's clear that the entropy doesn't depend on any specific characteristic of the variable in question, only on the probability distribution. Consider, for instance, a coin flip. For all practical purposes the result of the coin flip is random, with 50% probability of the coin landing on either side. Therefore, the entropy associated with the coin flip is:

S = −2 ( (1/2) log (1/2) ) = −log (1/2) = log 2. (B.2)

If the log is evaluated with base two, then the result is one. Different choices of base for the log give entropies related by a constant factor:

−Σ_n p_n log_a p_n = −Σ_n p_n ( log_b p_n / log_b a ) = −( 1/log_b a ) Σ_n p_n log_b p_n; (B.3)

these are simply regarded as different units for entropy. If base two is employed, the entropy is given in bits. In the case of the coin flip, the entropy is one bit.

Imagine now that we have a special coin where the outcomes of the flip have probabilities 1/3 and 2/3. In this case the entropy is:

S = −( (1/3) log₂ (1/3) + (2/3) log₂ (2/3) ) = (1/3) log₂ 3 + (2/3)(log₂ 3 − 1) = log₂ 3 − 2/3 ≈ 0.92, (B.4)

smaller than the previous result. In fact, the maximum entropy is always associated with probability distributions where all p_n's are equal. That's the case when one has the least information about the value the variable will assume. The entropy then depends simply on the number of possible values of the variable:

S = −Σ_{n=1}^{N} (1/N) log (1/N) = log N. (B.5)

It's easy to see that Shannon entropy is equivalent to some definitions of entropy in physics. The von Neumann entropy, or entanglement entropy, for instance, is given by:

S = −Tr(ρ̂ ln ρ̂), (B.6)

where ρ̂ is the density matrix representing a quantum state.
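These entropies are easy to check numerically. The following Python sketch evaluates the Shannon entropy in bits for the coin examples and the uniform case, and verifies that the von Neumann entropy of a diagonal density matrix reduces to the Shannon entropy of its eigenvalues (in nats, since eq. B.6 uses the natural log):

```python
import numpy as np

# Shannon entropy in bits (eq. B.1 with base-2 logs).
def shannon_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # 0*log(0) -> 0 by convention
    return float(-(p * np.log2(p)).sum())

print(shannon_bits([0.5, 0.5]))           # fair coin: 1 bit
print(shannon_bits([1 / 3, 2 / 3]))       # biased coin: log2(3) - 2/3 bits
print(shannon_bits([0.25] * 4))           # uniform over N = 4: log2(4) = 2 bits

# Von Neumann entropy S = -Tr(rho ln rho) of a density matrix equals the
# Shannon entropy (in nats) of its eigenvalues.
rho = np.diag([1 / 3, 2 / 3])
evals = np.linalg.eigvalsh(rho)
S_vn = float(-(evals * np.log(evals)).sum())
print(S_vn)                               # = shannon_bits([1/3, 2/3]) * ln 2
```

The biased-coin value, log₂ 3 − 2/3 ≈ 0.918 bits, is indeed below the fair-coin maximum of one bit, as the maximum-entropy argument requires.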
In the basis where ρ̂ is diagonal, this becomes the Shannon entropy; since the density matrix can always be diagonalized and the trace is independent of basis, we can always relate the von Neumann entropy to the Shannon entropy. Other examples are:

• The entropy of a canonical ensemble, which is a direct implementation of Shannon entropy.

• The entropy of a microcanonical ensemble, corresponding to Shannon entropy applied to the case where all probabilities are equal; the result is, therefore, given simply by the log of the number of possible states, the microstates of the system.

Appendix C

Extracting a Two-Point Correlator From Its Time-Ordered Counterpart

As explained in section 4.4, the moment method allows the Krylov complexity of an operator Ô to be calculated from the following correlator:

C(t, β) ≡ (Ô(t)|Ô)_β = ⟨Ô†(t)Ô⟩_β, (C.1)

or the spectral function Φ(ω, β) ≡ C̃(ω, β). Here we derive an equation which allows C̃(ω, β) to be obtained from the Fourier transform of the time-ordered correlator, G̃(ω, β). In this demonstration, the β's inside the arguments of correlation functions are omitted.

The time-ordered correlator is defined as:

G(t) ≡ ⟨T Ô†(t)Ô⟩_β = θ(t)C₊(t) + θ(−t)C₋(t), (C.2)

where T denotes time-ordering, and:

C₊(t) ≡ C(t) = ⟨Ô†(t)Ô⟩_β, (C.3)

C₋(t) ≡ ⟨ÔÔ†(t)⟩_β. (C.4)

Its Fourier transform is given by:

G̃(ω) ≡ ∫_{−∞}^{∞} dt e^{iωt} G(t). (C.5)

From the properties of the Heaviside function θ(t), one sees that:

∫_0^∞ dt e^{iωt} G(t) = ∫_0^∞ dt e^{iωt} C(t). (C.6)

Additionally, in the beginning of section 4 of the IP complexity paper [21], it was shown that:

C̃(ω) = ∫_0^∞ dt e^{iωt} C(t) + ( ∫_0^∞ dt e^{iωt} C(t) )*. (C.7)

Therefore, one obtains:

C̃(ω) = ∫_0^∞ dt e^{iωt} G(t) + ( ∫_0^∞ dt e^{iωt} G(t) )* = ∫_0^∞ dt ( e^{iωt} G(t) + e^{−iωt} G*(t) ), (C.8)

and:

C̃(ω) = ∫_{−∞}^{∞} dω′ ∫_0^∞ (dt/2π) ( e^{it(ω−ω′)} G̃(ω′) + e^{it(ω′−ω)} G̃*(ω′) ).
(C.9)

Now, it's well known that¹:

∫_0^∞ (dt/2π) e^{it(ω−ω′)} = (1/2)[ δ(ω − ω′) − (i/π) P( 1/(ω − ω′) ) ], (C.10)

where P indicates the Cauchy principal value and the expression is properly defined when inside an integral over ω or ω′. Applying this result to eq. C.9, one obtains C̃(ω) from G̃(ω):

C̃(ω) = (1/2)[ G̃(ω) + G̃*(ω) − (i/π) P ∫_{−∞}^{∞} dω′ ( G̃(ω′)/(ω − ω′) + G̃*(ω′)/(ω′ − ω) ) ] (C.11)

∴ C̃(ω) = Re G̃(ω) − (1/π) P ∫_{−∞}^{∞} dω′ ( Im G̃(ω′)/(ω′ − ω) ). (C.12)

¹This can be derived by manipulating the Fourier transform of the Heaviside function and applying the residue theorem.

Appendix D

Functional Derivative of the Euclidean Action With Respect to X

D.1 Functional Derivative of the Free Part

From the free part of the Euclidean action, as given in eq. 6.11, we obtain:

δS_0/δX^T_{μν}(τ_h) = (N/λ)(1/2) ∫_0^β dτ_a ∫_0^β dτ_b δ/δX^T_{μν}(τ_h) ( Σ_{i,j,I} X^I_{ij}(τ_a) Δ_X^{−1}(τ_a, τ_b) X^I_{ji}(τ_b) ) (D.1)

= (N/λ)(1/2) ∫_0^β dτ_a ∫_0^β dτ_b δ/δX^T_{μν}(τ_h) · ( X^T_{μν}(τ_a) Δ_X^{−1}(τ_a, τ_b) X^T_{νμ}(τ_b) + X^T_{νμ}(τ_a) Δ_X^{−1}(τ_a, τ_b) X^T_{μν}(τ_b) ), (D.2)

which is the same as:

δS_0/δX^T_{μν}(τ_h) = (N/λ) ∫_0^β dτ_a ∫_0^β dτ_b δ/δX^T_{μν}(τ_h) ( X^T_{μν}(τ_a) Δ_X^{−1}(τ_a, τ_b) X^T_{νμ}(τ_b) ). (D.3)

After the derivative is applied and the resulting Dirac delta is integrated, this becomes:

δS_0/δX^T_{μν}(τ_h) = (N/λ) ∫_0^β dτ_b Δ_X^{−1}(τ_h, τ_b) X^T_{νμ}(τ_b). (D.4)

D.2 Functional Derivative of the Interacting Part

In this demonstration, the time dependence of all the X's and ϕ's will be omitted, but it is to be understood that in eqs. D.5 and D.6 they all depend on τ, and from then on they all depend on τ_h. Also, Σ must be taken to mean a sum over all matrix indices available in each term except for μ and ν. From the interacting part of the Euclidean action, given in eq.
6.3, we obtain:

δS_I/δX^T_{μν}(τ_h) = (N/λ) ∫_0^β dτ δ/δX^T_{μν}(τ_h) Tr{ i(m_x + g̃ϕ) ε_{IJK} X^I X^J X^K } (D.5)

= (N/λ) ∫_0^β dτ δ/δX^T_{μν}(τ_h) Σ ε_{IJK} ( i m_x X^I_{ij} X^J_{jk} X^K_{ki} + i g̃ ϕ_{ij} X^I_{jk} X^J_{kb} X^K_{bi} ) (D.6)

= (N/λ) Σ ε_{IJK} [ i m_x ( (δ^{TI} δ_{μi} δ_{νj}) X^J_{jk} X^K_{ki} + X^I_{ij} (δ^{TJ} δ_{μj} δ_{νk}) X^K_{ki} + X^I_{ij} X^J_{jk} (δ^{TK} δ_{μk} δ_{νi}) ) + i g̃ ϕ_{ij} ( (δ^{TI} δ_{μj} δ_{νk}) X^J_{kb} X^K_{bi} + X^I_{jk} (δ^{TJ} δ_{μk} δ_{νb}) X^K_{bi} + X^I_{jk} X^J_{kb} (δ^{TK} δ_{μb} δ_{νi}) ) ] (D.7)

= (N/λ) Σ ε_{IJK} [ i m_x ( δ^{TI} X^J_{νk} X^K_{kμ} + δ^{TJ} X^I_{iμ} X^K_{νi} + δ^{TK} X^I_{νj} X^J_{jμ} ) + i g̃ ( δ^{TI} ϕ_{iμ} X^J_{νb} X^K_{bi} + δ^{TJ} ϕ_{ij} X^I_{jμ} X^K_{νi} + δ^{TK} ϕ_{νj} X^I_{jk} X^J_{kμ} ) ] (D.8)

= (N/λ) Σ ε_{IJK} [ i m_x ( δ^{TI} X^J_{νk} X^K_{kμ} + δ^{TJ} X^K_{νi} X^I_{iμ} + δ^{TK} X^I_{νj} X^J_{jμ} ) + i g̃ ( δ^{TI} X^J_{νb} X^K_{bi} ϕ_{iμ} + δ^{TJ} X^K_{νi} ϕ_{ij} X^I_{jμ} + δ^{TK} ϕ_{νj} X^I_{jk} X^J_{kμ} ) ]. (D.9)

Here we can see that all the terms multiplied by m_X are just cyclic permutations of the SO(3) indices, which means they are equal. We can then rewrite the result as (also renaming the SO(3) indices in the other terms):

δS_I/δX^T_{μν}(τ_h) = (N/λ) Σ i ε_{IJK} δ^{TI} [ 3m_x ( X^J_{νk} X^K_{kμ} ) + g̃ ( X^J_{νb} X^K_{bi} ϕ_{iμ} + X^J_{νi} ϕ_{ij} X^K_{jμ} + ϕ_{νj} X^J_{jk} X^K_{kμ} ) ]. (D.10)

Bibliography

[1] Juan Maldacena. The large-N limit of superconformal field theories and supergravity. International Journal of Theoretical Physics, 38(4):1113–1133, 1999.

[2] Juan Maldacena. Eternal black holes in anti-de Sitter. Journal of High Energy Physics, 2003(04):021, 2003.

[3] Shinsei Ryu and Tadashi Takayanagi. Holographic derivation of entanglement entropy from the anti-de Sitter space/conformal field theory correspondence. Physical Review Letters, 96(18):181602, 2006.

[4] Mark Van Raamsdonk. Comments on quantum gravity and entanglement. arXiv preprint arXiv:0907.2939, 2009.

[5] Mark Van Raamsdonk. Building up space-time with quantum entanglement. International Journal of Modern Physics D, 19(14):2429–2435, 2010.

[6] Juan Maldacena and Leonard Susskind. Cool horizons for entangled black holes. Fortschritte der Physik, 61(9):781–811, 2013.

[7] Brian Swingle and Mark Van Raamsdonk. Universality of gravity from entanglement.
arXiv preprint arXiv:1405.2933, 2014.

[8] Steve Carlip. Black hole thermodynamics. International Journal of Modern Physics D, 23(11):1430023, 2014.

[9] Leonard Susskind. Butterflies on the stretched horizon. arXiv preprint arXiv:1311.7379, 2013.

[10] Leonard Susskind. Computational complexity and black hole horizons. Fortschritte der Physik, 64(1):24–43, 2016.

[11] Douglas Stanford and Leonard Susskind. Complexity and shock wave geometries. Physical Review D, 90(12):126007, 2014.

[12] Leonard Susskind. Entanglement is not enough. Fortschritte der Physik, 64(1):49–71, 2016.

[13] Adam R Brown, Daniel A Roberts, Leonard Susskind, Brian Swingle, and Ying Zhao. Holographic complexity equals bulk action? Physical Review Letters, 116(19):191301, 2016.

[14] Adam R Brown, Daniel A Roberts, Leonard Susskind, Brian Swingle, and Ying Zhao. Complexity, action, and black holes. Physical Review D, 93(8):086006, 2016.

[15] Dean Carmi, Robert C Myers, and Pratik Rath. Comments on holographic complexity. Journal of High Energy Physics, 2017(3):1–44, 2017.

[16] Alexandre Belin, Robert C Myers, Shan-Ming Ruan, Gábor Sárosi, and Antony J Speranza. Does complexity equal anything? Physical Review Letters, 128(8):081602, 2022.

[17] Alexandre Belin, Robert C Myers, Shan-Ming Ruan, Gábor Sárosi, and Antony J Speranza. Complexity equals anything II. Journal of High Energy Physics, 2023(1):1–79, 2023.

[18] Yasuhiro Sekino and Leonard Susskind. Fast scramblers. Journal of High Energy Physics, 2008(10):065, 2008.

[19] Juan Maldacena, Stephen H Shenker, and Douglas Stanford. A bound on chaos. Journal of High Energy Physics, 2016(8):1–17, 2016.

[20] Ru Ling, Hao Xu, and Yen Chin Ong. How anti-de Sitter black holes reach thermal equilibrium. Physics Letters B, 826:136896, 2022.

[21] Norihiro Iizuka and Mitsuhiro Nishida. Krylov complexity in the IP matrix model. Journal of High Energy Physics, 2023(11):1–46, 2023.

[22] Juan Maldacena.
A simple quantum system that describes a black hole. arXiv preprint arXiv:2303.11534, 2023.

[23] Michael A Nielsen and Isaac L Chuang. Quantum computation and quantum information. Cambridge University Press, 2010.

[24] Shira Chapman and Giuseppe Policastro. Quantum computational complexity from quantum information to black holes and back. The European Physical Journal C, 82(2):128, 2022.

[25] Adam R Brown and Leonard Susskind. Complexity geometry of a single qubit. Physical Review D, 100(4):046020, 2019.

[26] Reginald J Caginalp and Samuel Leutheusser. Complexity in one- and two-qubit systems. arXiv preprint arXiv:2010.15099, 2020.

[27] Michael A Nielsen. A geometric approach to quantum circuit lower bounds. arXiv preprint quant-ph/0502070, 2005.

[28] Luca D'Alessio, Yariv Kafri, Anatoli Polkovnikov, and Marcos Rigol. From quantum chaos and eigenstate thermalization to statistical mechanics and thermodynamics. Advances in Physics, 65(3):239–362, 2016.

[29] Adam R Brown, Leonard Susskind, and Ying Zhao. Quantum complexity and negative curvature. Physical Review D, 95(4):045010, 2017.

[30] Adam R Brown and Leonard Susskind. Second law of quantum complexity. Physical Review D, 97(8):086015, 2018.

[31] Daniel A Roberts, Douglas Stanford, and Alexandre Streicher. Operator growth in the SYK model. Journal of High Energy Physics, 2018(6):1–20, 2018.

[32] Daniel E Parker, Xiangyu Cao, Alexander Avdoshkin, Thomas Scaffidi, and Ehud Altman. A universal operator growth hypothesis. Physical Review X, 9(4):041017, 2019.

[33] Leonard Susskind. Three lectures on complexity and black holes. Technical report, Springer, 2020.

[34] Richard P Feynman. Simulating physics with computers. In Feynman and computation, pages 133–153. CRC Press, 2018.

[35] Howard Georgi. Lie algebras in particle physics: from isospin to unified theories. Taylor & Francis, 2000.

[36] Hans J Weber and George B Arfken. Essential mathematical methods for physicists, ISE. Elsevier, 2003.
[37] Vijay Balasubramanian, Matthew DeCross, Arjun Kar, and Onkar Parrikar. Quantum complexity of time evolution with chaotic Hamiltonians. Journal of High Energy Physics, 2020(1):1–44, 2020.

[38] Fritz Haake. Quantum signatures of chaos. Springer, 1991.

[39] Tibra Ali, Arpan Bhattacharyya, S Shajidul Haque, Eugene H Kim, Nathan Moynihan, and Jeff Murugan. Chaos and complexity in quantum mechanics. Physical Review D, 101(2):026021, 2020.

[40] Pawel Caputa, Javier M Magan, and Dimitrios Patramanis. Geometry of Krylov complexity. Physical Review Research, 4(1):013041, 2022.

[41] Vijay Balasubramanian, Pawel Caputa, Javier M Magan, and Qingyue Wu. Quantum chaos and the complexity of spread of states. Physical Review D, 106(4):046007, 2022.

[42] Brian C Hall. Lie groups, Lie algebras, and representations. Springer, 2013.

[43] Nadir Jeevanjee. An introduction to tensors and group theory for physicists. Springer, 2011.

[44] Michel Le Bellac. Thermal field theory. Cambridge University Press, 2000.

[45] Mikko Laine and Aleksi Vuorinen. Basics of thermal field theory. Lecture Notes in Physics, 925, 2016.

[46] Horatiu Nastase. Introduction to Quantum Field Theory. Cambridge University Press, 2019.

[47] Matthew D Schwartz. Quantum field theory and the standard model. Cambridge University Press, 2014.

[48] Claude Itzykson and Jean-Bernard Zuber. Quantum field theory. Courier Corporation, 2006.
Contents

1 Introduction
2 Quantum Computational Complexity
  2.1 Basic Concepts
  2.2 Circuit Complexity
  2.3 Geometric Complexity
3 Complexity in Time Evolution
  3.1 Thermalization in Quantum Mechanics
  3.2 Scrambling and Operator Growth
  3.3 Complexity and Thermalization in k-Local Fast Scramblers
  3.4 Chaos and Complexity
4 Krylov Complexity
  4.1 Spread Complexity
  4.2 Krylov Complexity in the Space of Operators
  4.3 Finite Temperature
  4.4 The Moment Method
  4.5 A Bound on Operator Growth
5 The Toy Model
6 Dyson-Schwinger Equations for the Toy Model
  6.1 From Action to Complexity
  6.2 Algebraic Derivation of a Dyson-Schwinger Equation
  6.3 Diagrammatics
7 Conclusion and Perspectives
A Conventions
B Shannon Entropy
C Extracting a Two-Point Correlator From Its Time-Ordered Counterpart
D Functional Derivative of the Euclidean Action With Respect to X
  D.1 Functional Derivative of the Free Part
  D.2 Functional Derivative of the Interacting Part
Bibliography