
Revista Mexicana de Física

Print version ISSN 0035-001X

Rev. Mex. Fis. vol. 69, no. 2, México, Mar./Apr. 2023; Epub Nov. 05, 2024

https://doi.org/10.31349/revmexfis.69.020502 

Condensed Matter

Artificial neural network for the single-particle localization problem in quasiperiodic one-dimensional lattices

G. A. Domínguez-Castro* 

R. Paredes* 

* Instituto de Física, Universidad Nacional Autónoma de México, Apartado Postal 20-364, México D.F. 01000, Mexico. e-mail: gustavodomin@estudiantes.fisica.unam.mx


Abstract

The use of machine learning algorithms to address classification problems in several scientific branches has increased in recent years. In particular, supervised learning with artificial neural networks has been successfully employed to classify phases of matter. In this article, we use a fully connected feed-forward neural network to classify the extended and localized single-particle states that arise in quasiperiodic one-dimensional lattices. We demonstrate that our neural network correctly uncovers the nature of the single-particle states even when the wave functions come from a more complex Hamiltonian than the one used to train the network.

Keywords: Algorithms; Hamiltonian; single-particle states

1 Introduction

Ever since the seminal work of Landau [1], the study of phases and continuous phase transitions via an order parameter has been a fundamental paradigm in condensed matter physics. In the Landau scheme, the purpose of an order parameter is to signal whether the phase of a given system breaks a symmetry of the underlying microscopic Hamiltonian [2]. The process in which the ground state of a physical system ends up with fewer symmetries than the original Hamiltonian is called spontaneous symmetry breaking [2]. A plethora of phases of matter, such as crystals [3], magnets [4,5], and conventional superconductors [3], can be identified through the spontaneous symmetry breaking mechanism. However, not all phases of matter can be classified by an order parameter; we refer here to those phases that are instead recognized by some other attribute. Examples include the many-body localization transition, which manifests itself through a change in the entanglement dynamics [6,7], the BEC-BCS crossover, which can be detected by the decay of the correlation functions [8,9], and the so-called topological phases of matter [10,11], which are distinguished by the evaluation of topological invariants such as the Chern number [12].

Although the conventional and non-conventional phases of matter cannot be characterized by the same theoretical scheme, machine learning techniques offer the possibility of classifying them by using different algorithms and procedures [13,14,15,16]. In fact, machine learning has emerged as a powerful tool to classify and identify phases of matter. For instance, it has been used to predict crystal structures [17], solve impurity problems [18], and classify thermal and quantum phases of matter [19,20,21,22,23]. More recently, recurrent neural networks have been employed to build variational wave functions for quantum many-body problems [24], and convolutional neural networks have been used to distinguish the dynamics of an Anderson insulator from a many-body localized phase [25].

In this manuscript, we use machine learning techniques to address the problem of single-particle localization in one-dimensional quasiperiodic lattices with both nearest-neighbor and next-nearest-neighbor tunneling. In particular, using supervised learning, in which the learner needs to be trained with previously classified data, we demonstrate the efficiency of an artificial neural network in classifying extended and localized wave functions. For this purpose, we first train the neural network (NN) using the eigenstates obtained from exact diagonalization of the well-known Aubry-André (AA) model [26], which is recognized as a suitable model for identifying how single-particle localization emerges as a result of correlated disorder in a lattice [26,27]. In contrast to the one-dimensional Anderson model [28], where any strength of the uncorrelated disorder yields exponential localization of the single-particle eigenstates, in the AA model there is a threshold in the correlated disorder that signals the transition between extended and localized single-particle eigenstates. To avoid confusion with uncorrelated or random disorder, we use the term quasidisorder for the correlated disorder introduced in the AA model. After the training procedure, we probe the performance of the neural network by classifying eigenstates belonging to a particular generalization of the Aubry-André model that includes next-nearest-neighbor tunneling. Using the inverse participation ratio (IPR), we demonstrate that the NN classifies above 96% of the profiles correctly. Our results are of relevance to the study of disordered systems with machine learning techniques and can serve as a benchmark for further theoretical studies.

The manuscript is organized as follows. In Sec. 2, we introduce the two models on which the machine learning technique is applied: the Aubry-André and the Extended Aubry-André models, both of which represent a quasiperiodic lattice in one dimension. Section 3 presents the theoretical tools used to probe the performance of the neural network, as well as the architecture and parameters of the network. The results of the classification task are shown in Sec. 4. Finally, in Sec. 5, we discuss and summarize our findings.

2 Model

To study the localization phenomenon in quasiperiodic lattices through a neural network, we first consider the well-known Aubry-André model [26] on a lattice having L sites with periodic boundary conditions. The Hamiltonian of the AA model is:

\hat{H}_{\mathrm{AA}} = -J_1 \sum_{\langle i,j \rangle} \hat{c}_i^{\dagger} \hat{c}_j + \Delta \sum_{i} \cos(2\pi \beta i + \phi)\, \hat{n}_i, \qquad (1)

where ĉi (ĉi†) is the annihilation (creation) operator at site i, n̂i = ĉi† ĉi is the corresponding particle number operator, and J1 is the nearest-neighbor tunneling amplitude. The quasidisorder is characterized by its strength Δ, an incommensurate parameter β = (√5 − 1)/2, and a random phase ϕ ∈ [0, 2π). A given value of ϕ leads to a particular realization of the quasidisorder. However, one is always interested in extracting the main effects of the disordered medium independently of how the disorder is distributed on the lattice; therefore, one has to average over an ensemble of realizations, that is, over different values of ϕ. As is well known, all single-particle states of the Aubry-André model are extended for Δ/J1 < 2 and localized for Δ/J1 > 2, while they are multifractal at the transition point Δ/J1 = 2 [29]. That is to say, for a given value of Δ/J1, the AA model does not display a mixture of localized and extended eigenstates in the same spectrum. A natural extension of the Aubry-André model arises when tunneling to next-nearest neighbors is included. In such a case, the Hamiltonian of the so-called Extended Aubry-André (EAA) model is the following:

\hat{H}_{\mathrm{EAA}} = \hat{H}_{\mathrm{AA}} - J_2 \sum_{\langle\langle i,j \rangle\rangle} \hat{c}_i^{\dagger} \hat{c}_j, \qquad (2)

where J2 is the next-nearest-neighbor tunneling amplitude and the notation ⟨⟨i,j⟩⟩ indicates that the sum runs over next-nearest-neighbor sites. In contrast to the AA model, the EAA model can exhibit both localized and extended eigenstates in the same spectrum. In other words, an energy value, called the mobility edge, emerges that separates extended states from localized states [30]. As we shall see, our neural network is capable of identifying the mobility edge even though it was trained only with data belonging to the Aubry-André model.
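As an illustration of how the spectra used below can be generated, the following minimal Python sketch builds the dense L×L matrix of the EAA Hamiltonian in Eq. (2) with periodic boundary conditions and diagonalizes it with NumPy; setting J2 = 0 recovers the AA model of Eq. (1). This is not the authors' code: the function name eaa_hamiltonian and the parameter values are illustrative.

import numpy as np

def eaa_hamiltonian(L, J1, J2, delta, phi, beta=(np.sqrt(5.0) - 1.0) / 2.0):
    """L x L single-particle Hamiltonian of Eq. (2); J2 = 0 gives the AA model of Eq. (1)."""
    H = np.zeros((L, L))
    for i in range(L):
        # Quasiperiodic on-site potential of strength delta
        H[i, i] = delta * np.cos(2.0 * np.pi * beta * i + phi)
        # Nearest- and next-nearest-neighbor hopping with periodic boundaries
        H[i, (i + 1) % L] = H[(i + 1) % L, i] = -J1
        H[i, (i + 2) % L] = H[(i + 2) % L, i] = -J2
    return H

# Example: one quasidisorder realization of the AA model (J2 = 0)
L, J1, J2, delta = 233, 1.0, 0.0, 1.5
phi = 2.0 * np.pi * np.random.rand()
energies, states = np.linalg.eigh(eaa_hamiltonian(L, J1, J2, delta, phi))
# states[:, j] is the j-th eigenfunction psi_j(i), ordered by increasing energy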

3 Methods

3.1 Localization tools

Before proceeding to the description of the neural network, we introduce an important and widely used physical quantity that is a fingerprint of the localization transition. This quantity, called the inverse participation ratio (IPR), measures the inverse of the number of lattice sites on which the wave function has a non-negligible amplitude. For a normalized state |ψ⟩ = Σi ψ(i)|i⟩, with i = 1, …, L, the inverse participation ratio is defined as follows:

\mathrm{IPR}_{\psi} = \sum_{i=1}^{L} |\psi(i)|^{4}, \qquad (3)

where ψ(i) is the probability amplitude of the state |ψ⟩ at site i. The IPR vanishes for spatially extended states while remaining finite for localized states. In Fig. 1, we illustrate the inverse participation ratio IPR0 associated with the ground state of the AA model as a function of the quasidisorder strength Δ/J1, averaged over ten realizations of the random phase ϕ ∈ [0, 2π). From Fig. 1, one can notice that IPR0 becomes different from zero for Δ/J1 > 2 and approaches unity as the quasidisorder increases. This behavior makes the IPR a suitable quantity with which to test the performance of the neural network.

Figure 1 The ground state IPR of the AA model as a function of the quasidisorder strength Δ/J1 for a lattice with L = 233 sites. We average over ten realizations of the phase ϕ. The vertical dashed line indicates the critical quasidisorder Δc/J1 = 2.

The definition of the IPR in Eq. (3) refers to a single state |ψ⟩. However, when one is interested in the typical value of the IPR over the whole spectrum of eigenstates, it is useful to average the IPR over the eigenstates. That is, for a given value of Δ/J1 and a given realization associated with the phase ϕ ∈ [0, 2π), we define the average IPR as the mean of the inverse participation ratios of all eigenfunctions resulting from the diagonalization procedure:

\overline{\mathrm{IPR}} = \frac{1}{L} \sum_{j=1}^{L} \mathrm{IPR}_{j} = \frac{1}{L} \sum_{j=1}^{L} \sum_{i=1}^{L} |\psi_{j}(i)|^{4}, \qquad (4)

where the subscript j indicates the j-th eigenstate (ordered by energy from lowest to highest) and the index i labels the lattice site. The average inverse participation ratio measures the proportion of extended or localized states in the whole spectrum [27]. As we shall see, both the inverse participation ratio IPR_ψ and the spectrum-averaged IPR of Eq. (4) allow us to monitor the performance of the neural network.
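A direct translation of Eqs. (3) and (4) into code is straightforward. The sketch below, which assumes the eigenvectors are stored column-wise as returned by np.linalg.eigh in the previous sketch, is illustrative rather than the authors' implementation.

import numpy as np

def ipr(psi):
    """Inverse participation ratio of a single normalized state, Eq. (3)."""
    return np.sum(np.abs(psi) ** 4)

def average_ipr(states):
    """Spectrum-averaged IPR of Eq. (4); states holds the eigenvectors column-wise."""
    return np.mean(np.sum(np.abs(states) ** 4, axis=0))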

3.2 Data preparation

As in any machine learning implementation, the first task is to provide the raw data that will feed the learning algorithm [14,15]. For this purpose, we perform exact numerical diagonalization to obtain the eigenstates of the Hamiltonian in Eq. (1) for 42 evenly spaced values of Δ/J1 over the interval [0, 4] and 40 realizations of the random phase ϕ for each value of the quasidisorder strength Δ/J1. Since the aim of this manuscript is the classification of extended and localized wave functions, we avoid the critical quasidisorder strength Δc/J1 = 2 within the interval [0, 4], since, as stated above, at the critical quasidisorder the wave functions are neither localized nor extended but exhibit multifractality [29]. In order to obtain a broad classification, we keep not only the ground state but all L eigenstates resulting from the diagonalization procedure. Each wave function is stored as a row vector ψj = (ψj(1), …, ψj(L)) of an Ω×L matrix:

\Psi = \begin{pmatrix} \psi_{1}(1) & \psi_{1}(2) & \cdots & \psi_{1}(L) \\ \vdots & \vdots & & \vdots \\ \psi_{\Omega}(1) & \psi_{\Omega}(2) & \cdots & \psi_{\Omega}(L) \end{pmatrix}_{\Omega \times L}, \qquad (5)

where Ω = L × 40 × 42 is the total number of stored wave functions. The supervised learning algorithm needs to be trained and tested with previously classified data. To meet this requirement, in addition to the matrix Ψ, it is necessary to provide a tag that allows us to distinguish between localized and extended states. This can be done by introducing a matrix V of size 2×Ω, whose j-th column indicates whether the wave function ψj is localized or extended using one-hot encoding [15]. That is:

\begin{pmatrix} 1 \\ 0 \end{pmatrix} \ \text{if } \psi_{j} \ \text{is extended}, \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} \ \text{if } \psi_{j} \ \text{is localized}. \qquad (6)

The matrices Ψ and V are the only data needed to train and test the network. To ensure that a similar number of localized and extended states are used in the training procedure, we randomly shuffle the rows of the matrix Ψ and reorder the columns of the matrix V accordingly. Shuffling the data is generally considered good practice since it prevents bias during the training procedure [14,15]. After shuffling the rows of Ψ, we divide the dataset composed of the matrices Ψ and V into partitions of 80% and 20%: the former is used for training while the latter is used to verify the efficiency of the neural network. As is common in the machine learning literature, we call the larger set the training set and the smaller one the test set.
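The data preparation described above can be sketched as follows. The grid of quasidisorder strengths, the number of phase realizations, and the 80%/20% split follow the text; the helper eaa_hamiltonian is the hypothetical function from the earlier sketch, and the labels are stored here as an Ω×2 array, i.e., the transpose of the 2×Ω matrix V defined above.

import numpy as np

L, n_phases = 233, 40
deltas = np.linspace(0.0, 4.0, 42)   # Delta/J1 grid; the critical value 2 does not lie on it

wavefunctions, labels = [], []
for delta in deltas:
    for _ in range(n_phases):
        phi = 2.0 * np.pi * np.random.rand()
        _, states = np.linalg.eigh(eaa_hamiltonian(L, 1.0, 0.0, delta, phi))  # AA model: J2 = 0
        for j in range(L):
            wavefunctions.append(states[:, j])
            # One-hot tag of Eq. (6): extended below the AA transition, localized above
            labels.append([1.0, 0.0] if delta < 2.0 else [0.0, 1.0])

Psi = np.array(wavefunctions)   # shape (Omega, L) with Omega = L * 40 * 42, as in Eq. (5)
V = np.array(labels)            # shape (Omega, 2); the text stores its transpose

# Shuffle and split into an 80% training set and a 20% test set
perm = np.random.permutation(len(Psi))
Psi, V = Psi[perm], V[perm]
split = int(0.8 * len(Psi))
Psi_train, V_train = Psi[:split], V[:split]
Psi_test, V_test = Psi[split:], V[split:]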

3.3 Artificial neural network architecture and training procedure

Artificial neural networks are nonlinear models used for supervised learning; their structure and architecture were originally inspired by biological neural networks [15]. An artificial neural network contains several layers of interconnected nodes. These nodes, called neurons, are the basic units of a neural network. Typically, the first layer of neurons is called the input layer, the middle layers are called hidden layers, and the last layer is called the output layer. To perform supervised learning on the data Ψ, we design fully connected feed-forward neural networks. A depiction of the networks considered is shown in Fig. 2. The networks consist of an input layer with L neurons, a single hidden layer with Lh neurons, and an output layer with two neurons, Ext and Loc. Each neuron of the output layer corresponds to one of the two possible outcomes, namely, an extended or a localized wave function. Notice that the number of neurons in the input layer is fixed by the number of lattice sites. In contrast, the number of neurons in the hidden layer is tuned to the value that gives the highest precision: a reasonable range of values of Lh is proposed, the neural network corresponding to each proposed value is trained, and the Lh that yields the highest accuracy when classifying the wave functions of the test set is chosen.

Figure 2 Schematic representation of the artificial neural networks considered. The network consists of an input layer with L neurons, a single hidden layer with Lh neurons, and an output layer which contains two neurons Ext and Loc, each of which corresponds to one of the two possible results, namely, an extended or localized wave function. 

We now describe the full action of the network on the data. The feed-forward attribute of the NN means that the data flow from left to right, with the output of one layer serving as the input of the next. In the first layer, a given input vector ψ = (ψ(1), …, ψ(L)) of dimension L is mapped into a vector a^(2) of dimension Lh via an affine linear transformation (Θ^(1), b^(1)) followed by the application of a function g1, called the activation function:

a_{k}^{(2)} = g_{1}\!\left( \sum_{l} \Theta_{kl}^{(1)} \psi(l) + b_{k}^{(1)} \right), \qquad (7)

where Θ^(1) is an Lh×L matrix, and both b^(1) and a^(2) are vectors of size Lh. The passage of the outcome a^(2) = (a1^(2), …, aLh^(2)) from the hidden layer to the output layer is accompanied by the application of both an affine transformation (Θ^(2), b^(2)) and an activation function g2:

y_{k} = g_{2}\!\left( \sum_{l} \Theta_{kl}^{(2)} a_{l}^{(2)} + b_{k}^{(2)} \right). \qquad (8)

Here, Θ^(2) is a 2×Lh matrix, and b^(2) as well as y are vectors with two entries. The two entries of the output vector y correspond to the values of the neurons Ext and Loc, respectively. The elements of the matrices Θ^(1) and Θ^(2) are called weights, and the entries of the vectors b^(1) and b^(2) are referred to as biases. The weights and biases parametrize the nonlinear model implemented by the network. As shown in Eqs. (7) and (8), the passage of the data from one layer to the next requires two activation functions, g1 and g2; these functions help the neural network learn complex patterns in the data [14,15]. We choose the rectified linear unit (ReLU) as the activation function of the hidden layer, whereas a normalized exponential (softmax) activation function is used in the output layer:

g_{1}(x) = \max(0, x), \qquad g_{2}(x_{k}) = \frac{e^{x_{k}}}{\sum_{k'=1}^{K} e^{x_{k'}}}, \qquad (9)

where K is the number of classes that the NN has to distinguish; in this manuscript K = 2, extended and localized profiles. As in all supervised learning procedures, a loss function must be specified. This function, denoted by J, quantifies the precision of the NN and has to be minimized with respect to all weights and biases in order to optimize the classification [15]. We employ a cross-entropy cost function supplemented with L2 regularization to prevent overfitting [14,15]. The cost function can be written as follows:

J(\Theta^{(1)}, \Theta^{(2)}, b^{(1)}, b^{(2)}) = -\frac{1}{\Omega} \sum_{i=1}^{\Omega} \sum_{k=1}^{2} \left[ V_{ik} \log y_{k}^{(i)} + \left(1 - V_{ik}\right) \log\!\left(1 - y_{k}^{(i)}\right) \right] + \frac{\lambda}{2\Omega} \sum_{l=1}^{2} \sum_{i,j} \left| \Theta_{ij}^{(l)} \right|^{2}, \qquad (10)

where V is the matrix of tags and λ is the regularization parameter. As with the number of neurons in the hidden layer, the value of λ has to be tuned to improve the performance. In Eq. (10), y_k^(i) denotes the outcome of the k-th neuron of the output layer when classifying the wave vector ψi. Notice that the first term in Eq. (10) depends implicitly on the weights and biases through y_k^(i). The minimization of the cost function is carried out with the Adam optimization algorithm [31]. To implement all of the above we employ the TensorFlow software library [32]. In the following section, we evaluate the performance of the NN on the test set.
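A minimal TensorFlow/Keras realization of the architecture of Fig. 2 and the training recipe described above could look as follows. The number of hidden neurons Lh = 32 and the use of Adam with a cross-entropy loss follow the text; the regularization strength lam, the number of epochs, and the batch size are illustrative placeholders, and note that the built-in Keras L2 penalty uses a convention that differs from the λ/2Ω prefactor of Eq. (10).

import tensorflow as tf

L, Lh, lam = 233, 32, 1e-4   # lam (lambda) is an assumed value; the paper tunes it

inputs = tf.keras.Input(shape=(L,))
hidden = tf.keras.layers.Dense(
    Lh, activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(lam))(inputs)
outputs = tf.keras.layers.Dense(
    2, activation="softmax",
    kernel_regularizer=tf.keras.regularizers.l2(lam))(hidden)
model = tf.keras.Model(inputs, outputs)

# Cross-entropy loss minimized with the Adam optimizer, as described in the text
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training on the shuffled 80% partition (epochs and batch size are illustrative):
# model.fit(Psi_train, V_train, epochs=50, batch_size=256,
#           validation_data=(Psi_test, V_test))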

4 Results

4.1 Testing set

After minimizing the cost function, we analyze the neural network performance on previously unseen data, that is, on the test set. In Fig. 3a) we show the average test accuracy of the output layer as a function of the quasidisorder strength Δ/J1 for several system sizes L. Two facts stand out. First, the region in which the neural network makes the most mistakes is a neighborhood of the transition point Δ/J1 = 2. Second, the accuracy in classifying the wave functions improves as the system size is increased. This leads to the conclusion that the main source of classification errors is finite-size effects.

Figure 3 a) Average test accuracy of the output layer as a function of the quasidisorder strength Δ/J1 for system sizes L = 55, 89, 144, and 233. b) Average output-layer outcome as a function of the quasidisorder strength Δ/J1 for L = 233. In both panels, the orange line signals the transition point Δ/J1 = 2.

Figure 3b) illustrates the average output-layer outcome as a function of the quasidisorder strength Δ/J1 for the largest system size considered, L = 233. The red and blue curves correspond to the outcomes of the Loc and Ext neurons of the output layer, respectively. The NN has no prior information about the Hamiltonian or the distribution of the spatial disorder; nevertheless, the critical disorder estimated by the crossing point of the extended and localized curves is Δ/J1 = 2.05, which differs by less than 3% from the actual value Δc/J1 = 2.0. In the following, we consider only the neural network for the lattice with L = 233 sites. A summary of the number of parameters in this neural network is shown in Table I.

Table I Number of neurons and total parameters used in the neural network for the lattice with L = 233 sites. 

Layer Number of neurons
Input layer 233
Hidden layer 32
Output layer 2
Total number of parameters: 7554.
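The per-quasidisorder test accuracy of Fig. 3a) can be estimated along the following lines, assuming the trained model and the test arrays from the previous sketches, together with a hypothetical array delta_test that records the quasidisorder strength of each test sample (not stored in the data-preparation sketch above).

import numpy as np

# Predicted and true class indices (0 = extended, 1 = localized)
pred = np.argmax(model.predict(Psi_test), axis=1)
true = np.argmax(V_test, axis=1)

# Accuracy for each quasidisorder strength represented in the test set
for delta in np.unique(delta_test):
    mask = delta_test == delta
    acc = np.mean(pred[mask] == true[mask])
    print(f"Delta/J1 = {delta:.3f}: test accuracy = {acc:.3f}")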

4.2 Extended Aubry-André model

To go beyond the test set, we probe the neural network performance on wave functions of the Extended Aubry-André model of Eq. (2). In contrast to the AA Hamiltonian, the EAA Hamiltonian includes, as stated above, tunneling to next-nearest neighbors, which gives rise to the emergence of mobility edges [30]. We point out that the network was trained only with data generated from the AA model; the eigenstates of the EAA model are therefore new to the network. In Fig. 4, we show the average output-layer outcomes and the IPR of each eigenvector in the spectrum for (J2/J1, Δ/J1) = (0.2, 1.5) (Fig. 4a)), (J2/J1, Δ/J1) = (0.4, 3.0) (Fig. 4b)), and (J2/J1, Δ/J1) = (0.5, 2.6) (Fig. 4c)). These results correspond to an average over 20 realizations of the random phase ϕ ∈ [0, 2π). Considering more phases, that is, increasing the number of realizations, would reduce the dispersion of the neuron outcomes without changing which neuron gives the largest result. The black arrow in each panel of Fig. 4 indicates the eigenstate number at which the mobility edge occurs, that is, where the wave functions change from extended to localized or vice versa. Remarkably, the NN classifies the nature of the wave functions well even when the spectrum has an extended-localized-extended structure, as shown in Fig. 4c). It is interesting to note that although neither the Ext nor the Loc neuron of the output layer is a standard localization diagnostic like the IPR, their outputs allow one to recognize the extended-localized transition. Hence, new quantities that diagnose the nature of the wave functions are produced during the training of the network. The abrupt change and roughly constant behavior of the Ext and Loc neurons show that both quantities are more sensitive to the spatial nature of the wave functions; this characteristic may be advantageous over the IPR when generalizing the classification task to multifractal states.

Figure 4 Average output-layer outcomes of the neural network and the IPR of each eigenvector in the spectrum for three pairs of values (J2/J1, Δ/J1): a) (0.2, 1.5), b) (0.4, 3.0), and c) (0.5, 2.6). Each curve represents the average over 20 realizations of the random phase ϕ. The black arrow in each panel indicates the eigenstate number at which the mobility edge occurs.

We now turn to the classification of the wave functions of the EAA model as the quasidisorder strength Δ/J1 is varied at fixed J2. In Fig. 5 we show, in a density color scheme, the IPR of all the eigenvectors obtained from diagonalizing the Hamiltonian in Eq. (2) as a function of Δ/J1 for fixed J2. In particular, we consider J2/J1 = 0.1, 0.2, 0.3, and 0.4 in Figs. 5a)-d), respectively. The results were obtained after averaging over 20 random phases ϕ ∈ [0, 2π). The orange curve signals the transition from extended to localized wave functions estimated by the neural network. As one can see from Fig. 5, the decision boundary determined by the NN agrees with the extended-localized transition indicated by the IPR. In other words, the NN captures the nature of all the wave functions in the spectrum of the EAA model.

Figure 5 Average IPR of the eigenstates of the EAA model as a function of Δ/J1 for fixed J2/J1: a) J2/J1 = 0.1, b) 0.2, c) 0.3, and d) 0.4. The orange curve signals the transition boundary from extended to localized states predicted by the neural network. We consider 20 realizations of the random phase ϕ.
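The classification of the EAA spectra can be sketched as follows: the AA-trained network is fed the energy-ordered eigenstates of the EAA Hamiltonian, and a mobility-edge index is read off wherever the predicted label flips. The helper eaa_hamiltonian and the trained model are the hypothetical objects from the earlier sketches, and the (J2/J1, Δ/J1) values are taken from Fig. 4a).

import numpy as np

# One parameter point and one phase realization of the EAA model, cf. Fig. 4a)
L, J1, J2, delta = 233, 1.0, 0.2, 1.5
phi = 2.0 * np.pi * np.random.rand()
_, states = np.linalg.eigh(eaa_hamiltonian(L, J1, J2, delta, phi))

# Feed every energy-ordered eigenstate to the AA-trained network
outputs = model.predict(states.T)      # shape (L, 2): (Ext, Loc) outcomes
labels = np.argmax(outputs, axis=1)    # 0 = extended, 1 = localized

# Eigenstate indices where the classification flips, i.e. candidate mobility edges
edges = np.flatnonzero(np.diff(labels)) + 1
print("Mobility-edge eigenstate indices estimated by the NN:", edges)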

To conclude this study, we concentrate on the average IPR, which measures the proportion of extended or localized states in a given spectrum. In Fig. 6, we show the average IPR (Fig. 6a)) and the value of the neuron Ext of the output layer (Fig. 6b)) as functions of both the next-nearest-neighbor tunneling amplitude J2/J1 and the quasidisorder strength Δ/J1. In addition to the average over the wave functions composing the spectrum, we averaged over 20 random realizations of ϕ ∈ [0, 2π). Figures 6a) and 6b) share a very similar structure, meaning that the NN captures the nature of the wave functions over the whole parameter space (Δ/J1, J2/J1) of the EAA model.

Figure 6 a) Average IPR of the eigenfunctions of the EAA model as a function of the next-nearest-neighbor tunneling amplitude J2/J1 and the quasidisorder strength Δ/J1. b) Value of the neuron Ext in the output layer as a function of J2/J1 and Δ/J1.

5 Conclusion

In this manuscript, we have illustrated the capacity of an artificial neural network to classify the extended and localized single-particle states that arise in quasiperiodic one-dimensional lattices. In particular, we first trained and tested the artificial neural network using eigenstates of the celebrated Aubry-André (AA) model. By collecting not just the ground state but all eigenstates, we accomplish an excellent classification in both the low- and high-energy sectors of the model. We then demonstrated the versatility of the network by probing its performance on the eigenstates of the Extended Aubry-André (EAA) model. Our results show that the neural network does not simply learn the IPR, since, quantitatively speaking, the IPR and the output-layer values do not match; this means that new quantities that sense localization are conceived by the network. Remarkably, the performance of the neural network is satisfactory: it classifies above 96% of the profiles correctly. We found that the misclassified states are mainly due to finite-size effects close to the localized-extended transition.

The study addressed here shows the efficiency and capacity of a neural network to classify profiles that come from a more complex model than the one used to train it. Although our analysis focuses on one-dimensional models with nearest-neighbor and next-nearest-neighbor hopping, supervised learning with neural networks can also be used to analyze localization phenomena in higher dimensions and in lattices with power-law hopping, where peculiar multifractal states arise. The classification of extended and localized single-particle states through neural networks provides a useful benchmark for tackling the many-body localization problem with supervised learning techniques. Diagnosing many-body phases of matter requires, in addition to fully connected neural networks, the use of convolutional neural networks or principal component analysis to deal with the exponential dimension of quantum many-body states [25].

Acknowledgments

This work was partially funded by Grant No. IN108620 from DGAPA (UNAM). G.A.D.-C acknowledges a CONACYT scholarship.

References

1. L. D. Landau, On the theory of phase transitions, Zh. Eksp. Teor. Fiz. 7 (1937) 19-32.

2. P. Coleman, Introduction to Many-Body Physics (Cambridge University Press, Cambridge, 2015).

3. G. Mahan, Many-Particle Physics (Kluwer Academic/Plenum Publishers, New York, 2000).

4. A. Auerbach, Interacting Electrons and Quantum Magnetism (Springer-Verlag, New York, 1994).

5. N. Majlis, Quantum Theory of Magnetism (World Scientific, Singapore, 2000).

6. D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, Colloquium: Many-body localization, thermalization, and entanglement, Rev. Mod. Phys. 91 (2019) 021001. https://doi.org/10.1103/RevModPhys.91.021001.

7. A. Lukin, M. Rispoli, R. Schittko, M. E. Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Léonard, and M. Greiner, Probing entanglement in a many-body-localized system, Science 364 (2019) 256-260. https://doi.org/10.1126/science.aau0818.

8. W. Zwerger, The BCS-BEC Crossover and the Unitary Fermi Gas (Springer-Verlag, Heidelberg, 2012).

9. J. C. Obeso-Jureidini and V. Romero-Rochín, Spatial structure of the pair wave function and the density correlation functions throughout the BEC-BCS crossover, Phys. Rev. A 101 (2020) 033619. https://doi.org/10.1103/PhysRevA.101.033619.

10. D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Quantized Hall conductance in a two-dimensional periodic potential, Phys. Rev. Lett. 49 (1982) 405. https://doi.org/10.1103/PhysRevLett.49.405.

11. F. D. M. Haldane, Nobel lecture: Topological quantum matter, Rev. Mod. Phys. 89 (2017) 040502. https://doi.org/10.1103/RevModPhys.89.040502.

12. J. Cayssol and J. N. Fuchs, Topological and geometrical aspects of band theory, J. Phys. Mater. 4 (2021) 034007. https://doi.org/10.1088/2515-7639/abf0b5.

13. J. Carrasquilla, Machine learning for quantum matter, Advances in Physics: X 5 (2020) 1797528. https://doi.org/10.1080/23746149.2020.1797528.

14. P. Mehta, M. Bukov, C.-H. Wang, A. G. R. Day, C. Richardson, C. K. Fisher, and D. J. Schwab, A high-bias, low-variance introduction to machine learning for physicists, Physics Reports 810 (2019) 1-124. https://doi.org/10.1016/j.physrep.2019.03.001.

15. J. Krohn, Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence (Addison-Wesley Professional, Massachusetts, 2019).

16. J. Carrasquilla and G. Torlai, How to use neural networks to investigate quantum many-body physics, PRX Quantum 2 (2021) 040201. https://doi.org/10.1103/PRXQuantum.2.040201.

17. S. Curtarolo, D. Morgan, K. Persson, J. Rodgers, and G. Ceder, Predicting crystal structures with data mining of quantum calculations, Phys. Rev. Lett. 91 (2003) 135503. https://doi.org/10.1103/PhysRevLett.91.135503.

18. L.-F. Arsenault, A. Lopez-Bezanilla, O. A. von Lilienfeld, and A. Millis, Machine learning for many-body physics: The case of the Anderson impurity model, Phys. Rev. B 90 (2014) 155136. https://doi.org/10.1103/PhysRevB.90.155136.

19. J. Carrasquilla and R. G. Melko, Machine learning phases of matter, Nature Physics 13 (2017) 431-434. https://doi.org/10.1038/nphys4035.

20. O. S. Ovchinnikov, S. Jesse, P. Bintacchit, S. Trolier-McKinstry, and S. V. Kalinin, Disorder identification in hysteresis data: Recognition analysis of the random-bond-random-field Ising model, Phys. Rev. Lett. 103 (2009) 157203. https://doi.org/10.1103/PhysRevLett.103.157203.

21. L. Wang, Discovering phase transitions with unsupervised learning, Phys. Rev. B 94 (2016) 195105. https://doi.org/10.1103/PhysRevB.94.195105.

22. K. Ch'ng, J. Carrasquilla, R. G. Melko, and E. Khatami, Machine learning phases of strongly correlated fermions, Phys. Rev. X 7 (2017) 031038. https://doi.org/10.1103/PhysRevX.7.031038.

23. B. S. Rem, N. Käming, M. Tarnowski, L. Asteria, N. Fläschner, C. Becker, K. Sengstock, and C. Weitenberg, Identifying quantum phase transitions using artificial neural networks on experimental data, Nature Physics 15 (2019) 917-920. https://doi.org/10.1038/s41567-019-0554-0.

24. M. Hibat-Allah, M. Ganahl, L. E. Hayward, R. G. Melko, and J. Carrasquilla, Recurrent neural network wave functions, Phys. Rev. Research 2 (2020) 023358. https://doi.org/10.1103/PhysRevResearch.2.023358.

25. F. Kotthoff, F. Pollmann, and G. De Tomasi, Distinguishing an Anderson insulator from a many-body localized phase through space-time snapshots with neural networks, Phys. Rev. B 104 (2021) 224307. https://doi.org/10.1103/PhysRevB.104.224307.

26. S. Aubry and G. André, Analyticity breaking and Anderson localization in incommensurate lattices, Ann. Israel Phys. Soc. 3 (1980) 133.

27. G. A. Domínguez-Castro and R. Paredes, The Aubry-André model as a hobbyhorse for understanding the localization phenomenon, Eur. J. Phys. 40 (2019) 045403. https://doi.org/10.1088/1361-6404/ab1670.

28. P. W. Anderson, Absence of diffusion in certain random lattices, Phys. Rev. 109 (1958) 1492. https://doi.org/10.1103/PhysRev.109.1492.

29. M. Wilkinson, Critical properties of electron eigenstates in incommensurate systems, Proc. R. Soc. Lond. A 391 (1984) 305. https://doi.org/10.1098/rspa.1984.0016.

30. J. Biddle, B. Wang, D. J. Priour, Jr., and S. Das Sarma, Localization in one-dimensional incommensurate lattices beyond the Aubry-André model, Phys. Rev. A 80 (2009) 021603(R). https://doi.org/10.1103/PhysRevA.80.021603.

31. D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, https://arxiv.org/abs/1412.6980.

32. M. Abadi et al., TensorFlow: Large-scale machine learning on heterogeneous distributed systems, https://arxiv.org/abs/1603.04467.

How to Cite. G. A. Dominguez Castro and R. Paredes Gutiérrez, "Artificial neural network for the single-particle localization problem in quasiperiodic one-dimensional lattices", Rev. Mex. Fís., vol. 69, no. 2 Mar-Apr, pp. 020502 1-, Mar. 2023.

Received: February 08, 2022; Accepted: August 29, 2022

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License